February 24 2025 •  Episode 025

Bjarn Brunenberg - TomTom - A Strategic Experimentation Framework

“The strategic experimentation framework evolved due to the widening gap between what management wanted to achieve and what teams were doing on the ground. It came through a need to better connect organisational goals, KPIs and ambitions to the work being conducted by experimentation teams. After all, it’s the job of experiments to help business leaders make better decisions.”


Bjarn Brunenberg is the Experimentation Lead at TomTom. TomTom is a Dutch multinational developer and creator of location technology and consumer electronics, operating in 29 countries and with revenues of €585M.

At TomTom, he is responsible for accelerating growth in the eCommerce team through Conversion Rate Optimisation, Experimentation and strategic GTM plans. He has grown App revenue at TomTom by 100%+ YoY. Key to this growth was developing and implementing a systematic experimentation process, which improved experiment throughput from 20 experiments to over 200 experiments.

Prior to this, Bjarn worked as a Growth Marketer at Mollie, Netherlands and also as a Digital Marketing Specialist at Accenture, supporting Google projects.

He is a keynote speaker, a two-time winner of the Experimentation Culture Awards, and the creator of the Strategic Experimentation Framework, which helps organisations link experimentation to their OKRs and big-picture goals.

 

 


 

Gavin Bryant 00:03

Hello and welcome to the Experimentation Masters Podcast. Today, I would like to welcome Bjarn Brunenberg to the show. Bjarn is the Experimentation Lead at TomTom. At TomTom, he is responsible for accelerating growth in the eCommerce team through conversion rate optimization, experimentation and strategic go-to-market plans. Prior to this, he has worked as a growth marketer at Mollie in the Netherlands, and also as a digital marketing specialist at Accenture. Welcome to the show, Bjarn.

 

Bjarn Brunenberg 00:37

Thank you, Gavin, it's really nice to be here. Really excited for our talk.

 

Gavin Bryant 00:42

Great to have you here today, and I'm really looking forward to our conversation around the strategic experimentation framework. This is an area that I'm very passionate about and something that's close to my heart, and from my work I observe a really big opportunity for teams. So I hope that our audience and listeners take a lot away from our conversation today. One of the things I wanted to get started with, Bjarn: 2024 has been an amazing year for you. You've presented at a number of industry conferences, you've received industry awards, you've even had a wedding thrown in there as well. How has 2024 been?

 

Bjarn Brunenberg 01:26

Yeah, it's been an amazing year so far. Like you said, a lot of things happened. I'm very grateful for all the things I got to do. At the company, we're driving an experimentation program, and it's grown great. And because of all the great things we do there, I also had the chance to talk on stage. That's just something I really love, Gavin. I love being on stage and talking about it, and it's something I got really passionate about as well. And yeah, we talked quickly about the wedding. Indeed, I got married this year. Absolutely great. I live in Porto, Portugal, and my wife is Brazilian, actually. So it was really interesting to see my Brazilian family fly into Porto, my Dutch family fly in, and we had two weeks of fun here. So that was also amazing.

 

Gavin Bryant 02:25

Yeah, and it must have been nice to have some recognition from your peers and folks in industry about the great work you've been doing at TomTom. Was that really satisfying for you?

 

Bjarn Brunenberg 02:39

Yeah, definitely. Going to events or being on stage is great, but the best thing about events is meeting a lot of different people, also people you've maybe only seen on LinkedIn, and then you meet them in real life. I think the experimentation and CRO community is such an amazing community, so many great minds over there, and I'm always astonished how open everyone is to connect, share ideas, share learnings. I've made a lot of friends already, and it just keeps on growing. So I love the community, and I don't think I want to leave them. No.

 

Gavin Bryant 03:21

You're here to stay for some time. Yes. Let's talk a bit about your journey in a little more detail. Could you give the audience some context on your background and journey to date, please?

 

Bjarn Brunenberg 03:37

Yeah, sure. It's actually what I see most of the time with people in experimentation or CRO: it's never a clear line towards how they got to that spot. Also not for me. So as you mentioned in the introduction, my journey started in Dublin with Accenture, working in technical support for Google Ads and Google Analytics. We were working with bigger brands like H&M or Disney or even Lamborghini, helping them set up their accounts for Google Analytics, but mostly technical support for Google Analytics. It always surprised me how much people spent on Google Ads. I was always blown away by that, and that's really where my passion for data came about. But I quickly figured out I was still in the digital marketing bubble, and I wanted something else, something more. And then I was looking around, and I stumbled on Growth Tribe, an academy in the Netherlands. They said it was one of the first academies actually getting people into growth marketing, learning that craft. This was the first time, six years ago, that I heard about growth marketing. And I figured out this is more than just digital marketing; it's also user psychology and data, and combining all that stuff. So I made the leap. I went from Dublin to Amsterdam and did the academy there for six months. And during that academy, I was working at the same time at Mollie in Amsterdam. Mollie is a big payment provider in the Netherlands, kind of similar to Stripe. And from there, I started as a growth marketer. I developed my craft there. I really loved it. And then eventually I went to TomTom. And what most people ask me, if I say that I work at TomTom, the first question I get, Gavin, maybe you have it as well, is: do they still exist? Because most people know them from those navigation devices stuck to the windshield. Did you have one by any chance, Gavin, or no?

 

Gavin Bryant 06:09

It was funny, though. I was chatting to my wife last night; she was asking me a little bit about our conversation today, and I mentioned that you worked at TomTom, and she said the exact same thing: are they still around?

 

Bjarn Brunenberg 06:25

Exactly, that's the single most common question I get. And I say: yes, they do. And then the second question is: so what do they do now? When I was applying, I have to be honest, I was a little bit skeptical in the beginning as well. They had their glory days in 2004, right, or even earlier. But what do they do now? Well, I figured out they're very digitally minded and very data-driven as well. And basically everything has to do with maps. Think of autonomous driving, last-mile delivery, the future of mobility; basically anything that has to do with maps, TomTom is involved in. When I was applying, I was applying for the consumer side of the business, and they were actually developing a TomTom navigation app: basically taking all that software they had, shifting from hardware to software, and putting it all in an app. And again, I was surprised how they were already thinking about experimentation, how data-minded they were. The whole team setup was already lean; they were already working in experimentation. So it was a really warm bath, basically, to step into. And since then, it has been a great journey. I've been there now, in total, for four years. When I joined, I think we did around 50 experiments per year, and we've now grown to 250-300 per year. Like you mentioned, we won Experimentation Culture Awards, and I just got so much passion for driving that experimentation culture forward in the whole company. So yeah, it's been a great journey.

 

Gavin Bryant 08:15

One of the things that I wanted to ask you, just to build on that fantastic introduction, so over that journey, what are some of the things that you know about experimentation now that you wish you knew earlier?

 

Bjarn Brunenberg 08:29

Oh yeah, that's a very good question. I think I have a couple, but the first thing that comes to my mind is: don't be afraid to fail. Even more so, I would say, fail as much as possible, actually. I would tell my younger self four years ago: please go out there and fail as much as possible. Why? 80% of your tests will fail anyway, and that's totally okay. It's actually necessary to fail a lot and fail fast. I think it's really part of the growth mindset. We need to be humble, we need to be okay with not knowing, and say: okay, I don't know it, let's test it and fail. Failing also has to do with being brave. In the beginning, I was really worried about having a fail, because 'fail' has all this negative sentiment around it, which, by the way, I think we should change, to change the perception. But if you have big fails, Gavin, it tells me there is something there. I'd rather have big fails than small, incremental wins. What I mean by that is: if I have a big fail, I know there's something there, there's a lever I can pull, there's a big metric I can move. So, big advice to myself: don't be afraid to fail, and also don't do small, incremental changes; rather, go for the big bangs. I think I've wasted so much time just doing small stuff and hoping it gets a win, but I learned over the years that I'd rather do one bigger change. Of course, I hope it is a big win, but if it's a big fail, it's the same effect. It's great, because what I get out of it is the learning. If it's a big fail, I know exactly what not to do, and I think I get even more learnings from failures than from wins. So yeah, those have been two lessons, and I can add one more thing.
Let me put it differently: I talked about pulling the right levers. I think it's also crucial, and I wish I had done this earlier, to understand which levers to pull. So for example, if I do an A/B test on the PDP and I get a 20% increase, wow, great. But it doesn't mean anything if it is not aligned to higher-level business goals, or if it is not really moving the business metric forward, right? Maybe management wants to see something completely different. So it's all those three together. Yeah, it's something that evolved over the years.

 

Gavin Bryant 11:53

Let's just talk about that one a little bit more that big failures can result in big learnings, and those learnings can be more helpful to understand what not to do rather than what to do. Do you have an example that you've been able to provide where there was a really key learning that emerged from something that didn't work?

 

Bjarn Brunenberg 12:19

Yeah, definitely. So I was talking about the TomTom GO navigation app we have. Over the last two years, I've been doing a lot of experimentation on the mobile app. For example, when people download the app, in the first couple of screens we have the so-called onboarding, and on the last screen we have the paywall. What makes our app different from Waze or Google Maps is that you have to pay around 20 euros for a year, subscription-based. Of course, you can also pay monthly. In those screens, I tried to optimize a lot. Let's say we have the first screen, a couple of screens in between, and the paywall. And I learned that I was trying to make little incremental changes on all these screens, and for a long time I didn't really see much conversion increase; I just saw little increases happen. It took me quite some time to figure out that basically only the last screen, the paywall, was the biggest lever to pull for me. So eventually we focused the team only on that. And the biggest fail, I think, was that at some point we said: hey guys, we're making all these changes on all these screens and I don't really see that much impact. You know how we can test this? Let's take all the screens out, all of them, up until the paywall. Everything goes out. And let's see how big an effect this will have, how much we will lose. At first it was like: no, we cannot do this, the whole onboarding of our app goes away. But let's just test it. Okay, so I got everyone to agree to doing it. We tested it. What do you think happened? What did it do?

 

Gavin Bryant 14:24

I think that a lot of people were dropping out at the paywall.

 

Bjarn Brunenberg 14:30

Yes, well, I was A/B testing what it would do with all the screens versus the variant with everything taken out except the paywall, and I saw that the middle screens didn't add any value at all. It was a flat result I got. So that gave me the indication: wow, I did all these tests before, and now with this test it turns out they don't really matter. But you were right: on the paywall we did see a slight drop, but it was not significant. So then we were in discussion: okay, we made all these efforts on the screens in the beginning; were they really worth it? So what I want to say with this is: dare to test bold, don't be afraid to lose big, because, again, it can give you the biggest insights. And I got the insight that I don't have to focus on all the other screens; there's only one screen that's important, and that was the paywall.
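The "slight drop, but not significant" call described here is typically made with a two-proportion z-test on the control and variant conversion rates. A minimal sketch, using entirely hypothetical traffic and conversion numbers (not TomTom's data):

```python
from statistics import NormalDist


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is the difference in conversion
    rate between variant A and variant B statistically significant?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return z, p_value


# Hypothetical: full onboarding (A) vs. paywall-only variant (B)
z, p = two_proportion_z(conv_a=500, n_a=10_000, conv_b=480, n_b=10_000)
print(f"z={z:.2f}, p={p:.3f}")  # a small drop, not significant at alpha=0.05
```

With these made-up numbers the 0.2-point drop is well within noise (p far above 0.05), which is the shape of result the episode describes: a visible dip that the test cannot distinguish from no effect.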

 

Gavin Bryant 15:31

It's an interesting example. We've talked a little on the podcast about negative tests, so removing steps in a process. The process or the product can get bloated, and we can be additive over time, adding more steps and processes for the customer that, to your point, just don't add any value. So, a really interesting example. Let's build on that example a little further. What's a strongly held belief that you had about experimentation that you've since changed your mind on?

 

Bjarn Brunenberg 16:13

That velocity is king, that velocity is the way to go. I remember back then at the Growth Tribe academy, looking at the slides, they were presenting how many experiments Facebook was doing, how many Netflix and the Booking.coms of this world. We're talking about 10,000 tests. Whoa, okay. So we were kind of fixated that we needed to go that way. And at conferences I heard it a lot, and I did it myself: I'd ask people, just to check the maturity of their program, where they are: so, how many tests are you running? And I found that it's actually really strange, because the velocity you're running at doesn't reflect the success of your experimentation culture at all. Whether you are running 50, 100 or 500 experiments per year, it doesn't reflect your success. I'd rather discuss the quality. It goes much deeper. It's more about the learning rate and the insights you get from there. So think not only about the experimentation program and the velocity of testing, but also about the outcomes, and especially culture. For example, are you also tracking how many of your tests are actually being set into production? Or think about it more from a culture aspect, because I believe experimentation is really driven by the culture as well. How many colleagues have you educated on A/B testing? How many colleagues were doing something in experimentation before, and how many are doing it now, over the past quarter? Have you improved that? So to conclude: I had a strongly held belief that velocity was the way to go, but no, I have come back from that a lot. It's much more about quality. I don't care if you're running 500 experiments or 50. The ones doing 50 really well, with very good quality, exactly hitting the key metrics they want to hit, probably do a much better job than the people running 500, where maybe hundreds of them are button-color tests.
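The shift from velocity to quality metrics argued for above can be sketched as a small program-health report: count experiments, but also track what fraction of concluded tests shipped to production and produced a documented learning. The records and field names below are hypothetical, not TomTom's actual tracking schema:

```python
# Hypothetical experiment log; in practice this would come from a
# testing tool's export or an internal experiment registry.
experiments = [
    {"id": 1, "concluded": True,  "shipped": True,  "learning_logged": True},
    {"id": 2, "concluded": True,  "shipped": False, "learning_logged": True},
    {"id": 3, "concluded": True,  "shipped": False, "learning_logged": False},
    {"id": 4, "concluded": False, "shipped": False, "learning_logged": False},
]


def program_health(experiments):
    """Report velocity alongside two quality signals: how often concluded
    tests ship to production, and how often they yield a logged learning."""
    done = [e for e in experiments if e["concluded"]]
    return {
        "velocity": len(experiments),  # the number people usually ask about
        "ship_rate": sum(e["shipped"] for e in done) / len(done),
        "learning_rate": sum(e["learning_logged"] for e in done) / len(done),
    }


print(program_health(experiments))
```

The point of the sketch is that two programs with identical velocity can have very different ship and learning rates, which is the quality distinction made in the answer above.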

 

Gavin Bryant 18:49

Now, a really good message to balance velocity and quality. Let's shift gears a little for our conversation; the key thing we want to focus on is a practical discussion around the strategic experimentation framework. What we'll do is post a link to the Miro board. Do you still have the Miro board available now?

 

Bjarn Brunenberg 19:15

Yes, yes. We still have it available.

 

Gavin Bryant 19:16

So, make sure that you check out the Miro board after this, so you can see the structure and the end-to-end framework of what that strategic experimentation approach should look like. There are three key things that I wanted our listeners to take from this next conversation. The first one: what is a strategic experimentation framework? Then: why is it important, and why, as an experimenter, do you need to do it? And then: how do you do it, and what does it look like on the ground, day to day, week in, week out? So firstly, the thing that I wanted to ask you was: how did your strategic experimentation framework come about? What was the problem that you were facing, and what was the inspiration behind it?

 

Bjarn Brunenberg 20:11

Yeah, so it came about, I think, basically from frustration. Before each quarter, management asks us to set up a strategic plan for the next quarter. We work with targets, and I think everyone can relate to this: you want a strategic plan for what you're going to do next quarter, what your OKRs are, what you want to achieve. So I saw everyone in our team creating these slides, and most of the time we're writing all the slides, getting our graphs, getting our data pulled in, and explaining what we want to do, thinking really hard about what the next 'strategic', between brackets, steps will be. And when everyone has worked on the slides for two weeks, the big meeting comes, where management sits together with all of us, and we proudly present all of our strategic plans. At the end of the meeting everyone comes out like: yeah, we're gonna rock this quarter, and everyone is enthusiastic about it. But then the funny thing happens, Gavin, and I see it every time again: two or three weeks after that meeting, no one really recalls what they wrote down. And that gap between what management really wants to achieve and what teams are really doing on the ground is, then again, widening somehow. So that happened also at TomTom, and I was sitting in that meeting really wondering: how can I change this? How can I write my strategic plans? I have written down what I want to achieve, but how can I really tie it down, so that we keep on focusing on what we said? And that's basically where the whole strategic experimentation framework came about. It was really about: how can I connect what management wants to achieve, with their ambitious goals and their OKRs, to experimentation?
Because at the end of the day, we believe that our job as experimenters is simply to help management make better decisions. So that's how the framework came about; I think we'll touch on it later. But it was really about: this is what management wants, how can I break that down and translate it, and how can experimentation help to drive these business goals and make better decisions?

 

Gavin Bryant 23:03

So thinking about some of the struggles that teams experience. You mentioned that there was a disconnect between the expectations of management and the value creation and delivery of teams on the ground. So how are these struggles represented day to day, week in, week out, with teams? What are they doing? What are their challenges?

 

Bjarn Brunenberg 23:35

Yeah, that's a good question as well. So indeed, the gap is really there between what management really wants to achieve and what teams are eventually doing in experimentation. I've seen very few teams actually taking a very strategic approach towards testing. And what we normally do, and I think I'm guilty of this as well, is that we sometimes put management in a corner by calling them HiPPOs. I think that has a very negative side to it, and we should not do that. But I understand where it comes from: we say those are the highest-paid person's opinions, they always have strong opinions, and they basically tell me what I need to do. But let's not forget that management is in that role for a reason. They probably have more experience than us. They really know what is good for the business and which direction we should go. So they have a certain vision, and the thing is, it's not wrong that they have that vision, but we need to work towards that vision all together. And I see this big gap in the alignment there. Actually, a couple of weeks ago I saw this study by Harvard Business Review where they did research on 500 employees from 12 different companies, and they asked those employees a simple question: do you think you are aligned with your company strategy? And 82% of the people said: yes, we think we are 100% aligned with the company strategy. Okay, they said, so if we ask you now to actually write down, to actually put into words, what that strategy really means, only 23% of the people were able to do that. So in that study there was a gap of nearly 60 percentage points. That's a huge gap.
So this has everything to do with alignment, and I think that's the gap we need to bridge. And what I see teams in experimentation doing is not really their fault. I think the problem is that there is no strategic alignment, and because of that, teams sometimes think: okay, I don't really understand what management wants, we're just testing stuff out. It feels maybe a little bit more like spaghetti testing. So yeah, there is a really big gap between what management wants and what experimentation teams are generally doing. And it's actually a shame, because we can do much better. Like I said in the beginning, I think our job as experimenters is really: how can we help management make better decisions? Because at the end of the day, we know how to validate quickly, we move quickly, we have the data. Management, most of the time, bases a lot of things on gut feeling, because it's the best they have. But if we can support them in making better decisions through data and validating through experimentation, that is a huge win for both parties.

 

Gavin Bryant 27:36

One of the things that I’d just like to add there, to build a little on that data point that you shared: in doing some research for this episode, I had a look at a presentation that Bjarn had done previously, and at the start of that presentation he ran a quick poll of participants. Participants in this webinar were experimentation teams and people, and the statement was: “I struggle to make an impact with my testing program on the company's strategic goals”. 77% of people answered yes, so nearly 80%. But then there were a further 15% who said: not sure. So if we bring both of those data points together, 92% of people working in experimentation are not sure, or admit, that the work they're doing is not impacting their company's strategic goals, which I think is very surprising, and also very worrying by the same token. So I think a general rule, a bit of a principle, that's starting to emerge is that a lot of testing programs are just not having the impact that they could; they're not really moving the dial. And to the point that you mentioned before, that's because of this gap and disconnect that exists around strategic alignment?

 

Bjarn Brunenberg 29:06

Yeah, 100%. I was so surprised by that poll as well; good that you bring it up. Again, I think there's also a frustration from the experimentation side, and I'm not the only one who has it. As an experimenter you want to get the best out of it, you want to get the highest results, but if you just don't know how to achieve them, that can be very frustrating. And again, it has everything to do with alignment. It's not about the HiPPOs being up there in their ivory tower while we just look up at them, waiting for whatever they're going to pass down to us; it's really about the right alignment. So think about it like this: if you think you are not testing strategically, think about how you can actually do it. A very easy first step is to ask yourself: of all the tests I've been doing over the last couple of weeks or the last quarter, how many are actually tied to a business objective management would like to see? Again, if you're doing some testing on the PDP, on some CTAs there, and you get a 20% uplift, that's great, and you're celebrating that with your team. But if management doesn't care about that, if that is not where the money is for the business, no one really cares about it, as harsh as that sounds. So find a way to bridge the gap: find out what management actually cares about, where they would like to see you test, and where you can help them uncover the biggest questions they have.

 

Gavin Bryant 31:02

Well, let's dive into the framework then, and give people an overview of what it looks like, so they can start to think about how they can upgrade or level up their process and approach in their organization. The way that you think about the framework, it's loosely broken down into three phases: phase one, strategic alignment; phase two, opportunity discovery; and phase three, solution discovery. So let's start off with strategic alignment. What are some of the key things people should be thinking about during this phase?

 

Bjarn Brunenberg 31:40

Yeah, exactly. So strategic alignment is really about what management wants. Most of the time, they say: we want more revenue. At the end it's always about revenue. And that's a very big goal, and it's also a very lagging indicator. As teams, we cannot work directly on revenue, because, for example, if there were four teams and in total we got a 20% increase in revenue, we don't know which team actually created that value. It could be that team one created 50% more revenue and the other teams actually brought revenue down. So revenue is not a good indicator. What we need to do is break that down. And what I mean by strategic alignment, as the first step, is: how can we break down revenue into OKRs, objectives and key results? I found a very simple framework for this, called KPI trees. It's a very simple exercise, so everyone can do it, and that's why I also did it in the Miro board, which will be shared. Basically, you write down revenue, and then you start breaking it down into your most important KPIs, your most important key performance indicators, at a really high business level. And once you have those KPIs, you're not there yet. You need to break those down even further, into sub-metrics. Why? Because KPIs are most of the time still lagging indicators, and we want to transform those into leading indicators. So if you break those KPIs down into sub-metrics, you get metrics that really are leading indicators. For example, if I use TomTom: we have the GO navigation app. Let's say we want 50 million in revenue. Okay, break it down into KPIs; for us, for example: people downloading the app, people starting a trial and converting to paid, and the activation rate. Let me take activation as the KPI. I break that one down into sub-metrics. So what is a leading metric that really drives activation for us?

It's very simple: in our app, we want more people to drive. The more people drive with our app, the more people stick with us; hence, more activation. So that's a leading indicator, and if you want, you can break that down even further. But the point here is: if you do this exercise all the way from revenue down to KPIs and sub-metrics, and then transform that into an OKR, that's the first step. The exercise was so interesting to me because, one, it's so simple, but when you start doing it, you figure out it's not as simple as you think. Everyone thinks in their mind: ah, it's super simple, I know everything. But when you try to put it on paper, it's a much harder job. And I think this is the first step of getting that alignment, bridging that gap between what management wants and what we actually try to do.
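The KPI-tree exercise described above can be sketched as a nested structure that walks from the lagging top-line number down to leading sub-metrics. The metric names below echo the GO app example but are illustrative only, not TomTom's real tree:

```python
# Lagging goal -> KPIs -> leading sub-metrics, as a nested dict.
# Metric names are hypothetical illustrations of the TomTom GO example.
kpi_tree = {
    "revenue": {                           # lagging: what management asks for
        "app_downloads": {},               # KPI level
        "trial_to_paid_conversion": {},
        "activation_rate": {               # KPI broken down one level further
            "users_driving_more_than_once": {},  # leading indicator
        },
    },
}


def leaves(tree, path=()):
    """Return each leading indicator (leaf) with the chain of metrics
    that links it back up to the top-line goal."""
    out = []
    for metric, children in tree.items():
        if children:
            out += leaves(children, path + (metric,))
        else:
            out.append(" -> ".join(path + (metric,)))
    return out


for chain in leaves(kpi_tree):
    print(chain)
```

Printing the leaf chains makes the alignment explicit: every experiment can then be tagged with the chain it is supposed to move, e.g. `revenue -> activation_rate -> users_driving_more_than_once`.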

 

Gavin Bryant 35:26

Now that's a really good example. Thank you. So do you think this is really the crux of it? Like this is where things start to break down for teams is understanding and measuring the right metrics?

 

Bjarn Brunenberg 35:41

Yeah, definitely. I mean, when you understand the metrics you want to pull, together with what high-level management wants and what teams are actually doing, and you have alignment on those metrics, then it's really simple: every week we have a weekly meeting looking at those specific metrics together. Management is in the meeting, and also the teams; we're always looking at those metrics for what we are doing. What that does is create synergy; it doesn't create distance. We're all looking at the same metrics, in the same direction we'd like to go. And if some metrics are down, the question is asked: why is this metric down? And the teams explain why it's down, because we're really on the ball of what every team is doing, particularly on those metrics. Again, that really helps in creating synergy, in being more involved together. And that is eventually very crucial in experimentation, because I think a lot of people in experimentation are struggling with stakeholder management and how to get management buy-in, and this is a very, very good first step.

 

Gavin Bryant 37:05

So let's move along and think about phase number two, opportunity discovery. What are some of the things that teams should be considering once they've defined their metric trees and their goal trees?

 

Bjarn Brunenberg 37:18

Yeah. So basically, I've been working for a couple of years to get the right setup for this framework. And like I said, I found two very interesting sources. One was the KPI tree, and the other one is the Opportunity Solution Tree from Teresa Torres, which I found and fell in love with again. What I did is basically combine those two. So now, talking about the opportunity space, this is from Teresa Torres. For the listeners who don't know her, she wrote a great book, and she's a very respected product person. It's a great book to read. Again, the framework is so simplistic, yet so nifty. So in Opportunity Discovery, you've broken down your KPI tree, and you have your OKR. Basically, now you know which metrics you want to focus on. But now we're trying to translate that into an opportunity. What is an opportunity? Opportunities are basically everything you find through research. So it's really about combining qualitative and quantitative research. You can find things in GA or Mixpanel, the hard data, but there's also the soft data: user interviews, surveys, etc. You combine all these learnings together, and what you then do-- you will see it in the Miro board if you click on it-- is basically write down all the insights you have from all these different sources. And when you have all these insights together, you try to group them and find patterns. You group certain insights together, and at some point a pattern starts to emerge. And that is really where the magic happens. It's also a really nice exercise to do with your team. Things get very creative, and everyone sees, wow, this is a topic that emerges.
It's really nice when people recognize patterns. Those patterns that emerge from all your data insights are basically your opportunities. For example, we saw a very easy one in the data: people who drive more than once with our app tend to stay, and have a 60% higher conversion rate to paid. So that's a great insight. And one of the opportunities that emerged was around the paywall: we saw a lot of frustration there. People expected our navigation app to be free, the same as Waze or Google Maps, and when they hit the paywall, they were a little bit frustrated. So the opportunity we saw was: hey, users are frustrated when they hit our paywall. That opportunity only came across because of all the other insights we gathered before, and because we knew this exactly, from multiple sources, really data-backed, I started to really dive into that opportunity. How can we reduce the frustration on our paywall, and how many tests can we run for this? That's about being strategic. If I just go further to the next one, Gavin, point three, that's okay, because then we tap into Solution Discovery, which is point three, and that is eventually what we as CRO and growth people are all very good at, because here is where we get the ideas, here is where we try to solve the problems. So again, going back to the opportunity: hey, there's a lot of frustration on that paywall. I sit together with the team: okay, we know this opportunity, there's a lot of frustration at the paywall. What I do then is a brainstorm session. I'm not sure if you're familiar with the How Might We framework, Gavin?

 

Gavin Bryant 37:25

Yep, it's amazing.

 

Bjarn Brunenberg 38:42

It's a very powerful framework for brainstorming, I found. So what I do is translate the opportunity into a How Might We question: how might we reduce the frustration on our paywall? That is the main question everyone has in the brainstorm session, with five or six different colleagues, and we all brainstorm around this specific question. Now, how you brainstorm is really key here. When we go into the session, I don't just say: okay, this is the question, how can we reduce the frustration, everyone shout out ideas. No, no. I give everyone 10 minutes to first think individually about the solutions they want to come up with, and after those 10 minutes, we look at all the different ideas people wrote down. But because I try to do a one-hour brainstorm session, going through all the ideas becomes way too much. So the second step is I ask them: okay, pitch your two best ideas. And from those two best ideas pitched, we select one. So in the end, if I have six colleagues, each colleague pitches two ideas and we pick one of those. What I think is very strategic here, the beauty of this framework, is that if you do it like this, you end up with six ideas, really specifically about solving that one problem, about getting rid of the frustration on the paywall. And you'll be amazed that those six ideas will still be completely different. No one is doing the same thing; I have rarely seen people having the same ideas. And that's what it's all about: solving those opportunities, breaking down the solutions, and trying to solve them strategically. And why do I say strategically? Because it eventually comes all the way back from your OKR we talked about in the beginning.
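The brainstorm funnel Bjarn describes — everyone ideates alone, pitches their two best ideas, and the group keeps one per person — can be sketched as a small function. Participant names and ideas below are invented for illustration.

```python
# A sketch of the "How Might We" brainstorm funnel: each participant
# writes ideas individually, pitches their two best, and the group
# selects one per person (here, simply the first pitch survives).
# All participants and ideas are hypothetical.

def hmw_funnel(ideas_per_person: dict[str, list[str]]) -> list[str]:
    """Return one selected idea per participant."""
    selected = []
    for ideas in ideas_per_person.values():
        top_two = ideas[:2]            # each person pitches their two best
        selected.append(top_two[0])    # the group keeps one of the two
    return selected


session = {
    "Ana":  ["Show trial length upfront", "Soften paywall copy", "Free route preview"],
    "Ben":  ["Delay paywall to second drive", "Add social proof"],
    "Cleo": ["One-screen pricing", "Comparison with free apps"],
}

final_ideas = hmw_funnel(session)
print(final_ideas)   # three participants -> three ideas, one each
```

The point of the funnel is the constraint: six colleagues yield exactly six ideas, all aimed at the same opportunity, which keeps a one-hour session manageable.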

 

Gavin Bryant 44:42

Yeah, I think it's really interesting to understand how the whole process works through the line from business strategy right through to those testing opportunities. One of the, I guess, metaphors and ways that I like to describe it, it's like a chain. And all of the links in that chain, from the experiment back to the business and company strategy, they need to remain really strong and solid bonds. However, if any of those bonds become weak or they're broken, it then diminishes the impact that experimentation can have, and any of those touch points that you've mentioned throughout those three phases, if they are broken, then the chain can break. One of the things that I wanted to get your perspective on, so people now understand why strategic experimentation is important. They understand the key steps involved for each of those phases. What are some of the watch outs or symptoms that people can keep an eye on which may be an indicator that their experimentation process is maybe broken, or it's not as strategic as it could be. One of the things that you touched on before was that not enough experiments are being productionized, which can be an indicator of experimentation quality. Are there any other things that stick out to you?

 

Bjarn Brunenberg 46:24

Yeah, so I think it all starts with: do you have a really clear goal? Are you just running tests for the sake of testing? Are you testing because you have been told to do so? Or are you also thinking about, okay, how can I move the needle for the business? How can I solve the biggest questions the business has, and how can experimentation help there? So that's the first point. The second point is to realign with what management wants. Are you actually moving the needle? Because experimentation eventually is just a tool. It's a method where we want to validate things quickly, test things quickly, to validate the bigger objective and to make better decisions. That's really what it's all about. And better decisions for whom? For us as a team, yes, but also definitely to help management. Management's default tool is their gut feeling, and that is okay, because, again, that's what they have. But you can support that with data. Ask your manager: what is one of the biggest problems you see right now, or what are the biggest questions you have? Just go into a conversation with them, and then try to figure out: okay, those questions you have, hey, I can easily test that. Or hey, I can easily validate whether we are on the right path or not. For example, maybe a manager is thinking about opening a new channel. Is that channel something for us? Are we going to get a lot of business value out of there? You can actually test that quickly, in a couple of weeks, to see whether that channel is worth investing in or not, before spending a lot of money on advertising there. You can quickly test and help that manager make a decision.
So that's a really big one. Another sign, if you have a new team and you think you might need to be more strategic, is if you see that your team is really chasing quick wins. Like, we're testing, and at the end of the year, oh, Christmas is coming up, and some businesses really need to make the most money then, so they chase quick wins and do some hacks just to get their results in on time. Yeah, that's not really the way to go. What you get then is spaghetti testing. Actually, when I first heard "spaghetti testing", I was a little bit puzzled, but I was really interested in how the term came about. Spaghetti testing is really about the metaphor, because apparently, when people are cooking spaghetti, one of the ways to check if it's done is to get it out of the pan and throw it against the wall, and if it sticks, it's supposed to be good. And I was like, who throws spaghetti against the wall? Good luck getting it back in your pan. But the same thing applies to testing. If you're just throwing spaghetti at the wall and hoping something sticks, hoping your tests stick, just shooting with buckshot, that's not the way to go forward. Again, it comes with quick wins and hacks. The opposite, how you actually should do it, is: set clear goals, get alignment between management and what you want to achieve, and test strategically. And I assure you this will lead to more impactful test results. Maybe not more velocity, but more impactful tests, and that's why I think growth teams will grow stronger here.

 

Gavin Bryant 50:47

Yeah, the key takeaway that I have from the walkthrough of the framework was that teams need to spend the time upfront to do that disciplined thinking and really understand their opportunity tree, their metric tree, their goal tree, whatever way we want to describe it, in excruciating detail, because ultimately, that's what all our experiments point back to, and that's what we're trying to affect and impact. So, your top piece of advice to ensure organizations are conducting strategic experimentation. What's your one thing?

 

Bjarn Brunenberg 51:31

One thing: start doing the KPI tree exercise right now, and also have a look at the Opportunity Solution Tree. Go and do that exercise. It will blow your mind how simplistic it is, and you will get so many insights just mapping it out in Miro.

 

Gavin Bryant 51:53

Okay, let's close out with our fast four closing questions. These are just four quick fun questions. Number one, what are you obsessing about that we should know about?

 

Bjarn Brunenberg 52:06

I obsess about efficiency. I love lean experimentation. In our team, I don't have the privilege of having a lot of developers and a lot of designers. We are a really lean team. And I came to realize that if you try to push the boundaries for more impactful results, you need to be more lean. Moving from 50 to 100 experiments is still okay, but when you move from 200 to 300 experiments, things need to change in your process, and that's why I'm geeking out about optimizing processes and automations, bringing things together. And thank God AI is there to help me nowadays.

 

Gavin Bryant 52:55

How about outside of work? What's something in your personal life that you're passionate about and obsessing about?

 

Bjarn Brunenberg 53:05

Personally, I love to ride my motorbike, so I'm a big fan of that. Here in Portugal, the roads are really good and the scenery is amazing, so that's something I love to do. And I love network building. I'm really looking out for people here to get more inspiration from, to learn from, to get to know more people. What I mean is, for example, my wife is Brazilian, and I'm trying to learn the language, which isn't always easy. Portuguese is not-- I'm trying to get there. What I try to do as well is set my own goals. For example, I set the goal that I would like to be on the stage of a growth conference in Brazil, in Sao Paulo. And to achieve that, I need to get my Portuguese up to speed, and I need to be connected to the Brazilian community, for example. That's something I've been enjoying a lot lately as well.

 

Gavin Bryant 54:16

Well, maybe this podcast episode could be a connection into Brazil, because we do have some listeners in Central and South America. So, who knows? 

 

Bjarn Brunenberg 54:27

Amazing. 

 

Gavin Bryant 54:28

So does that mean that you are spending a lot of time on Duolingo?

 

Bjarn Brunenberg 54:33

I started Duolingo. It didn't really work for me. I get basic sentences, like, yeah, the cat drinks milk. That's not really going to get me far in Brazil.

 

Gavin Bryant 54:47

Okay. Number two, what's the biggest misconception people have about experimentation?

 

Bjarn Brunenberg 54:53

Yeah, that experimentation is A/B testing. And that is not true, because there's so much more-- If you think about experimentation, there's so much more than just A/B testing. There's the book-- the guy from Strategyzer wrote a book,

 

Gavin Bryant 55:18

Testing business ideas.

 

Bjarn Brunenberg 55:21

Yeah, exactly that one, yes. He talks about 50 different ways to validate your hypothesis, and only one of them is A/B testing. So I think that is a really big misconception: when people talk about experimentation, they think it's only about A/B testing. Another one, which I cannot get my head around, is that sometimes people say A/B testing or experimentation sounds like a lot of work, like additional work, and they'd rather not do it. And that amazes me, because it's completely not true. Okay, maybe in the first weeks, getting your head around what experimentation can really do for you is a little bit overwhelming, but it helps you so much in making good decisions. We validate things very quickly, instead of you keeping on doing the things you've been doing for the past five years. Now, I get it: someone comes in and says, hey, I can help you do this much faster, and it's called experimentation. I get that people have to get used to that. What I don't get is that they say it costs more time, because it's quite the opposite. It saves a lot of time; we can move much faster. And especially in product, I still see this with more traditional product managers who write down their roadmap: this is exactly what we're going to do for the next two months, I cannot really change that. I think experimentation can make a really big leap forward there in supporting those people. Don't get me wrong, it's not about fighting. We need to use experimentation as a tool to help people do their work better and more efficiently and make better decisions.

 

Gavin Bryant 57:23

Yeah, I've never understood the time argument either, because the additional time to conduct experiments is the time to learning, and if I was working in marketing, growth or product, experimentation is the fastest way to be more successful and effective in your role. So I don't understand that argument. I think people don't quite like the accountability that experimentation affords. Sometimes they like to be able to just pick winners and implement, and experimentation highlights bad decision making. Number three, what continues to surprise you the most about experimentation?

 

Bjarn Brunenberg 58:15

After all these years of experience, six years now, I'd think that by now I somehow know the outcome of some tests, but how often I'm still very, very wrong. I mean, I think we humans are just horrible at predicting. We just can't. We don't have crystal balls, and thank God we don't, because otherwise I wouldn't have a job. But it's that thinking that you know it, when you don't. So the biggest advice here: be humble, and dare to say you don't know, and then test it together.

 

Gavin Bryant 58:57

Yeah, I agree. We're absolutely terrible at predicting what customers will find valuable, and we've never been at a point in history when we've had so much data and information about our customers. But some days, it seems like we know so little about what they actually need, their desires and their motivations. And if we look at the statistics for successful experiments that a lot of the leading companies put out into market now, around 70 to 80% of those won't work. And each one of those many hundreds of thousands of experiments has gone through some sort of prioritization process, where they've been prioritized for customer value and business impact. So that's something that really intrigues me. Number four, what's a personal struggle with experimentation that you've overcome?

 

Bjarn Brunenberg 1:00:01

Yeah, that was, I think, really moving from the velocity side to quality. So it was really about the shift from focusing on velocity, doing more, growing my test velocity, to focusing on quality. I thought that as your experimentation program matures, it would get easier. When I first did 50 experiments, I had these problems; I assumed that once I scaled up to 100, I would not have these problems anymore. But I figured out that the more your experimentation program matures, you land in different phases of the maturity curve, and you have different problems again, and those problems get even bigger and more difficult to solve. For example, getting test ideas in. That was a very, very easy one at the beginning. I wanted a lot of very different teams to get their test ideas in. So what I did was super simple: I connected an Airtable form to a Slack workflow, so people in my company could always find the workflow, easily type in everything they had, and it would land in my Airtable, my database. And I thought it was great. It was great to get ideas in, but the quality of those ideas was horrible. Most people just wrote down maybe the title of the experiment and maybe something about what they meant, but the hypothesis was always blank and the success metrics were always blank. So it's like, oh yeah, well, this is harder. And I need to be careful here as well, because if you just ignore these ideas, people also don't feel like adding more of them in the end. So, yeah, there's always a discussion of velocity versus quality. My standpoint, Gavin, is that in the beginning it's good to have the quantity up and running so people get a little bit used to it, but you need to focus on quality as well, because otherwise your program just becomes really hard to manage.
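The intake problem Bjarn describes — easy submissions arriving with blank hypotheses and success metrics — can be caught with a lightweight check before an idea reaches the backlog. This is a hedged sketch; the field names are assumptions, not TomTom's actual Airtable schema.

```python
# A sketch of a quality gate for experiment-idea intake: flag submissions
# whose required fields are missing or empty before they hit the backlog.
# Field names ("title", "hypothesis", "success_metric") are hypothetical.

REQUIRED_FIELDS = ("title", "hypothesis", "success_metric")


def validate_submission(submission: dict) -> list[str]:
    """Return the required fields that are missing or blank."""
    return [f for f in REQUIRED_FIELDS if not submission.get(f, "").strip()]


# A typical low-quality submission: title only, everything else blank.
idea = {"title": "Move paywall after first drive", "hypothesis": ""}
missing = validate_submission(idea)
print(missing)  # -> ['hypothesis', 'success_metric']
```

A check like this could run in the Slack workflow itself or as a periodic sweep over the Airtable base, nudging the submitter to complete the idea rather than silently dropping it — which matches Bjarn's point that ignored ideas discourage future submissions.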
So that was a personal struggle. There were actually more.

 

Gavin Bryant 1:02:46

Our audience and listeners will know that recently I ran the first Experimentation Summit here in the Asia Pacific region, and during my welcome to the group I used the metaphor of experimentation being like a marathon: when you're training for a marathon, it never gets easier, you just go faster. And experimentation is a destination you never reach; there's always more work to be done. So I think that links in nicely to your point there.

 

Bjarn Brunenberg 1:03:23

Absolutely true. Yeah, very good metaphor.

 

Gavin Bryant 1:03:25

Let's call it a wrap on our conversation today. And thank you so much for joining us today. I really appreciated the chat and to our audience and listeners. I hope that you took a lot away from this super practical and detailed discussion around what a good strategic experimentation approach and framework looks like. We'll link through to all of the documents that we've discussed in the show notes. And thank you so much for your time today Bjarn, really appreciate it.

 

Bjarn Brunenberg 1:03:57

It was amazing to be here, Gavin, and thank you so much. 

 

“When shifting my philosophy from focussing on test velocity to test quality, I thought that as the experimentation program matured, things would get easier. When I performed the first 50 experiments we had certain challenges. When I scaled up to conduct 100 experiments the problems changed. As your experimentation program grows and you transition through the various stages of program maturity, you will encounter new and different problems that are bigger and more complex to solve.”


Highlights

  • It’s perfectly OK to not know all the answers. Have the humility, curiosity and humbleness to find a way forward through a process of testing, failing and learning

  • Be bold and be brave. Be prepared to take big swings without being worried about the repercussions of failure. New breakthroughs and innovation come from big bets. Resist the urge to play it safe with many small, incremental changes. There are more learning opportunities from big, bold moves than small wins

  • Negative Tests - A product or user experience can become complex and bloated over time due to the addition of many layers of product change. Products are built in piecemeal fashion and can become over-engineered. Product experiences decay over time. What worked at one point, may not work now. It can be difficult to understand what is truly driving value for users in your product

  • The number of tests being performed by an experimentation program is not necessarily a good marker of success. The quality of learnings and insights from experiments is more important. How many experiments are being productionised?

  • Why a Strategic Experimentation Framework? Because there is often a disconnect and misalignment of management expectations and the work performed by experimentation teams. A strategic approach to experimentation ensures experimentation teams stay tightly aligned to organisational ambitions, goals and KPI’s

  • Research indicates that teams don’t understand company strategy in sufficient detail in order to impact strategic objectives - 77% of respondents in an online survey suggested that they “struggle to make an impact with their testing program on the company’s strategic goals”. A Harvard Business Review study (n=500) discovered that only 23% of people were able to write down or put into words what their company strategy means

  • The Strategic Experimentation Framework has three phases (1). Strategic Alignment (2). Opportunity Discovery (3). Solution Discovery

  • Phase 1 - Strategic Alignment: Define the business goal that everyone in the organisation contributes to (FY25 $30M revenue) > Connect your business goal to high-level business metrics (KPI’s) to measure performance (Increase Trial Activation by 11%) > Break lagging KPI’s into smaller Sub-Metrics that you can directly influence with experiments (Increase First Drive 7-Day Trial by 32%) > Define your Quarterly Objectives (OKR’s) that you want to achieve (Optimise onboarding conversions by 100K)

  • Phase 2 - Opportunity Discovery: Conduct Qualitative and Quantitative research > Identify key insights from your different research sources > Define patterns and themes from key research insights that form strategic opportunity spaces

  • Phase 3 - Solution Discovery: Use ‘How Might We’ statements to frame up Ideation > Conduct ideation activities to generate new solutions to solve strategic opportunities > Prioritise solutions for testing

  • Metric Mapping - Use Goal Trees or Opportunity Solution Trees to identify metric architecture for experimentation (Business Goals, KPI’s, Sub-Metrics, OKR’s). Make sure to translate Lagging Indicators into actionable Leading Indicators. Gain alignment on metric tree with key stakeholders

  • How Might We Framework - Use the HMW framework to structure up ideation activities to ensure targeted and focussed solution generation

  • One of the biggest misconceptions is that A/B Testing is the only form of experimentation - there are many different types of experiments that can be performed to learn from users and customers

  • Experimentation Objections - internal teams will often suggest that experimentation takes too long and slows down the release cycle. This is a non-argument. The additional time to conduct experiments is the associated cost of learning – to make better decisions, decrease risk and build better product. For all Marketers, Product, Growth and Experimenters it’s the fastest way to be more successful and effective in your role

In this episode we discuss:

  • Be brave and don’t be concerned about the repercussions of failure

  • Negative tests are like Ozempic for experimentation. Get rid of the fat

  • Experimentation velocity isn’t a great measure of program success

  • Why so few people can put into words what their company strategy means

  • Why there’s a need for a more strategic approach to experimentation

  • An overview of the Strategic Experimentation Framework

  • How to get strategic alignment on experimentation objectives

  • Identifying strategic opportunities for experimentation

  • Frameworks for structured and targeted solution generation

  • The biggest misconception about experimentation

  • Dealing with common objections to experimentation

 

Success starts now.

Beat The Odds newsletter is jam packed with 100% practical lessons, strategies, and tips from world-leading experts in Experimentation, Growth and Product Design.

So, join the hundreds of people who Beat The Odds every month with our help.
