October 31, 2024 • Episode 022
Adam Furness - Atlassian - Designing Growth Experiments: Experiences Your Customers Will Love And Pay For
“You need to plant new seeds and uncover new opportunities. With path-finding projects, where you're doing heavier discovery at lower confidence, you might be making bigger, more disruptive changes to your experiences. With these projects you're reaping bigger rewards and creating new growth levers for the business. There needs to be a sweet spot of optimisation and path finding that relates to strategy, growth trajectory and risk.”
Adam Furness has spent 10+ years leading world-class product design teams that create value through design excellence. He’s worked as a design lead and manager and has built and scaled high-performing teams across APAC and the US.
He’s currently Design Manager on the Growth team at Atlassian, where he’s spent the last 5 years leading full-funnel experimentation teams. He believes that ‘fast’ and ‘good’ can co-exist, values autonomous teams, and a critical, human-centred approach to product and problem-solving.
Before Atlassian, Adam was design lead and manager for US sports technology company Hudl.
Episode 022 - Adam Furness - Atlassian - Designing Growth Experiments: Experiences That Your Customers Will Love And Pay For
Gavin Bryant 00:04
Hello and welcome to the Experimentation Masters Podcast. Today, I would like to welcome Adam Furness to the show. Adam has spent more than 10 years leading world-class product design teams across Asia Pacific and the United States. He's currently Design Manager on the growth team at Atlassian, where he's spent the last five years leading full-funnel experimentation teams. Prior to Atlassian, Adam was design lead and manager for US sports technology company Hudl. Welcome to the show, Adam.
Adam Furness 00:38
Thanks for having me, Gavin, great to be here.
Gavin Bryant 00:40
Thanks for joining us today. Just a quick overview for our audience and our listeners: Adam is one of our speakers at the 2024 Asia Pacific Experimentation Summit. So Adam, thank you so much for being so generous with your time and finding some space in your busy schedule to give back to the community and present at the conference. Really appreciate it.
Adam Furness 01:05
Awesome, I'm really looking forward to the opportunity to speak about experimentation and growth and the role of design.
Gavin Bryant 01:13
Hey, let's start off with a little bit about you. What's been happening on your personal journey, and what's your background to date?
Adam Furness 01:21
It's such a big question. I think it depends how far back you want to go, but the short story is that problem solving and creativity have been part of my background for as long as I can remember. As a kid, I loved to draw and make things, and being a child of the 80s and 90s, Lego was huge for me. So that was part of my experience as a child. Then as a student, I studied fine arts with a painting major, which may or may not surprise some folks, but I then transferred into design so that I could direct that creativity into something I could make a living out of. So I completely sold out. I began my early career in graphic design, and my work gradually transferred from the printing press onto the screen. I started in digital design, found myself drawn to the problems beneath the surface, and learned there was an emerging practice called UX. Back then there were no courses, and it was still very much this nascent thing. Then I moved into agencies, working mostly on big website builds, designing, art directing and leading small UX teams. I transitioned to product design, like you said earlier, working for a sports tech company called Hudl, which was all about designing video analysis software, so it was cool to marry my love of sport and design. And then for the past four and a half years I've been on the growth team at Atlassian, as you said, working across the entire funnel, from sign-up and acquisition through to monetization and engagement. But I've spent most of my time on what we call the cross-flow team, which is all about helping existing customers discover and get started on their next Atlassian product.
Gavin Bryant 03:30
One of the things that I wanted to briefly touch on there: you mentioned experience around big website builds. So thinking back to that time, and knowing what you know now through growth and design, how would you de-risk those big website builds?
Adam Furness 03:49
Yes, I actually think I was drawn to growth as kind of a reaction against those big website builds. A lot of those industries were FinTech and the finance space, and although my experiences were largely positive, there were a few things there that didn't really align with my values as a designer. Research was commissioned externally, and then we'd get this big tome to go through, and that was sort of three to six months. Then we would analyze that and essentially execute the entire site with very little customer contact. So I think I was drawn to growth initially, and drawn to experimentation, because it gives you the tools and the mindset and the behaviors to be able to de-risk things like that. To answer your question more directly, I probably wouldn't do things like that. I would try and identify where business value was coming from in terms of their offering, then begin to declare assumptions, break those down, and then essentially validate and generate insights so we could make decisions and get to that value for the customer and the business as soon as possible. That's not always an option when you already have an existing business and existing customers, but yes, breaking things down, focusing on value, and learning as quickly as you can, all those things we hold up as beacons for great experimentation.
Gavin Bryant 05:43
Yes, there's a lot of research now suggesting that the Big Bang is often a big failure.
Adam Furness 05:52
Avoid that Big Bang, yes, totally. I think some of those businesses in those earlier days really just wanted to have a presence. So considering what they were up against, a big bang can be okay, provided that it's followed up with a fast follow that can iterate on what you're learning in the market and what you're learning from, hopefully, a well-instrumented site.
Gavin Bryant 06:21
Let's zoom out and go broad for a moment. I'd be really interested to get your thoughts on the role of design more broadly in experimentation and growth, if you could share some perspective around that, please.
Adam Furness 06:36
Yes, no worries. I think the great thing about healthy teams is that they work cross-functionally. Especially on a growth or an experimentation team, there are a lot of behaviors that are shared among crafts. I don't think it's only the engineer's job to think about the code they're shipping, and I don't think it's only the designer's job to think about the experience. So there are probably some general things that I'd expect from cross-functional people on the team.
So, problem identification. On the design side, we tend to lean more into understanding unmet or latent user needs, but really it's about framing up what the potential opportunities might be, and as an extension of that, framing problems. Again, on the design side, that's all about identifying and validating meaningful problems to solve within that business or strategic scope. And then, more specifically to design, analyzing the current experience through things like journey mapping and heuristic analysis. I think often for an established product, especially a big product or a legacy product, we don't always know our current experiences well enough.
So analyzing that, mapping that landscape. I also think design plays a really key role in developing a product or feature vision. I know maybe vision is a bit of a dirty word in the context of experimentation, but I don't see it as necessarily an end point, rather as a kind of gravitational pull that helps show what good might look like for the business and the customer. And day to day, which is probably the key thing designers do, they craft the experience that drives behavior change. That's really fundamentally the core of experimentation: creating behavior change that we're able to measure.
Gavin Bryant 08:43
So thinking across those 10-plus years of experience that you have now, what are some of the guiding principles or mental models you've developed for good design?
Adam Furness 08:57
Yes, this is such a broad area, and if you asked me tomorrow I might have a slightly different answer, but I do think there are a few things we hold ourselves to in the context of experimentation. First, a strong and clear hypothesis. It probably goes without saying, but this is something you could spend a whole podcast, or even a conference, on: generating and being rigorous about your hypotheses. For me, the hypothesis really is the first thing you design as an experimentation team. So, a strong hypothesis underpinned by a really strong understanding of the problem, a really clear scope, and a strong rationale for why you believe you'll have the impact that you think you'll have. That's hypotheses. I think confidence in usability is something else.
It's one thing to proposition something to someone using your product, but if they can't comprehend it or can't complete the task, then they're just not going to get value from your experience. So there's a usability confidence there. I also think your experience should be consistent and coherent, executed to a high degree of finish, or at least a good enough degree of finish depending on where you are in your experiment life cycle. And it needs to play with the broader system, so it needs to be able to plug into your ecosystem and connect with your vision. That's probably a high-level take.
Gavin Bryant 10:47
One of the things that I notice in my work is that many organizations can struggle with hypothesis formulation and generating a really strong, well-reasoned hypothesis. From your experience, why do you think that is?
Adam Furness 11:05
A lack of critical thinking, I think, fundamentally. Sometimes people or teams will create a hypothesis that reflects the solution they think they need to build, rather than framing the learning that they're trying to get. I also think some people treat hypotheses as a set-and-forget thing they do at the start of a project: they might create their hypothesis and then move on through their process. But I actually think really strong experimentation teams treat the hypothesis as a draft. They'll revisit it as they generate insights and form their thinking, and come back and rework it. They may even create child hypotheses once they've got a better idea of scope. So that's something else that I think is important to consider.
Gavin Bryant 12:01
Yes, I think that's a really good point you make there, that the hypothesis is live, fluid and dynamic. It's not something you do once and forget about. It's something we should be continually revisiting throughout the course of an initiative, continually looking to refine and adjust it as we learn more and have insights to feed into it. So yes, fantastic point. Let's jump ahead and anchor our discussion today around a practical focus on designing for growth experimentation, which broadly will be the focus of your presentation at the APAC Experimentation Summit. One of the things I want to touch on to begin with is a mantra that exists within experimentation circles: design like you're right, test like you're wrong. What does that mean to you?
Adam Furness 12:07
Yes, absolutely. This is a really interesting mantra, and I wanted to get your take, Gavin. Sorry to put you on the spot, but I wanted to bounce it back to you. I've got lots of thoughts, but I just wanted to hear your stance as well.
Gavin Bryant 13:16
Yes, I think testing like you're wrong comes back a little bit to the falsification principle from Karl Popper: we're not seeking to prove our hypothesis to be correct, we're seeking to generate a strong evidence base, and only if we have very strong evidence would we reject the null hypothesis and accept our alternative hypothesis. And when we're thinking about designing like we're right, we're designing the best version of our solution within a pragmatic context. We're not trying to design the end-state solution, but a solution that is acceptable and satisfactory to customers so that we can learn.
Adam Furness 14:14
Yes, nice. When I saw this term, admittedly I'd not heard it before, so I was really interested to learn a little more. And obviously it's a mindset that we do adopt in the experimentation context. The interesting point, from my perspective, is the designing like you're right. I think sometimes the pressure of time, or the limits of what's feasible, can get in the way of designing an experience that's really considered. Sometimes we just have time pressures, or it's 'hey, let's have as many shots on goal as we can'. But something that design especially brings to the table is ensuring that the shots we do take are in an acceptable scope, so we feel they're going to have a chance of hitting the back of the net. Sorry to stretch the goal analogy, but for me that's all about building confidence. And confidence isn't certainty; it's a scale.
So designers will use their toolkit to build confidence via things like secondary research, customer interviews, reviewing quant data and so on. From there, ideally, you have a really high-confidence problem to solve, and therefore you're able to create a solution, or a range of solutions, that you can explore through experimentation. So it's about having confidence. I'd also add that designing like you're right is especially important in a B2B or paid software setting.
So at Atlassian, you can't just ship anything. You can't use that kind of scattergun approach. People demand a certain level of quality in the tools they use for work and for their livelihood. We can't afford to ship shit. And there are other constraints as well: we don't have a lot of traffic, so we don't get opportunities at the scale of Facebook or whatever. We don't get a ton of shots, and our customers expect more. So that level of confidence is really important in the mix.
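To make "test like you're wrong" concrete, here is a minimal sketch of the statistics behind it, assuming a simple two-arm conversion experiment. It is illustrative only, not Atlassian's tooling, and the numbers are hypothetical: a two-proportion z-test where the null hypothesis (no difference between control and variant) is only rejected when the evidence clears a preset significance threshold.

```python
# Illustrative sketch only (not Atlassian's tooling): a two-proportion z-test
# for a hypothetical A/B conversion experiment. We assume the null hypothesis
# of "no difference" and only reject it when the evidence is strong.
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                     # two-sided tail probability
    return z, p_value

# Hypothetical numbers: 4.0% vs 4.6% conversion on 10,000 users per arm.
z, p = two_proportion_z_test(400, 10_000, 460, 10_000)
alpha = 0.05
print(f"z = {z:.2f}, p = {p:.3f}, reject null: {p < alpha}")
```

The exact test, thresholds and sample sizes would depend on the experimentation platform and the metric being measured; the point is simply that the experience is designed as if the change will work, while the analysis assumes it won't until the data says otherwise.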
Gavin Bryant 16:47
One of the other things we touched on earlier was being mindful of that design vision and usability. In the early days of experimentation there was the notion of the MVP, where it was the cheapest, nastiest, fastest thing that you could ship to learn. And it seems like 2009 was a long time ago, when that sort of philosophy first took hold in tech and business. We've moved along from that, and we understand a lot more about what efficient, fast learning is. It's not the cheapest, nastiest version of the solution possible. It still needs to be a satisfactory experience that's strategically aligned and provides a sound customer experience to learn from.
Adam Furness 17:45
Absolutely, and we have this position on the team here at Atlassian that fast and good can coexist. It's not a matter of one or the other. It also depends on the context within which you are experimenting. Things that are really risky, where there are a lot of implications to certain experience changes, where certain decisions might be one-way doors, or where a certain cohort may be a bit more sensitive, you need to treat those different contexts differently. But for us, the bar that we set for ourselves is all about designing an experiment that is good enough for the user to be able to complete their task, that is accessible, that's not going to do any harm to the experience, and that's at an appropriate level of quality and scope for us to test our hypothesis successfully.
Gavin Bryant 19:01
Adam, I've previously heard you on another podcast refer to the framework of good, better, best. You just touched on the good element of that framework. Could you elaborate on the better and best components, please?
Adam Furness 19:16
Yes. Typically, when we're starting out in a space, on a feature or a new project, we have a lower degree of confidence, so we don't invest too heavily in the experience and we typically aim for a good standard, as you said. Better essentially takes that good standard and takes the execution finish to another level. A tangible example: we might run some demand tests on an experience, and when someone shows intent it might take them through the current, rough experience to get that next product. Once we've got confidence there is demand, we might then optimize that experience down-funnel. So it's about execution polish, and about ensuring the end-to-end experience is a little higher in quality. That's better. Best is all about rethinking the entire model altogether, so solutions that are more at the magic end of that experience scale, where we're taking away all the friction and things just happen. So that's good, better, best. We spend most of our time at the good end of the scale, and we try to elevate the experiences where there's going to be an ROI and where we're seeing a lot of impact.
Gavin Bryant 20:55
A quick question: you mentioned demand tests to understand intent. Are they preliminary fake-door or painted-door type tests? What do they look like?
Adam Furness 21:07
Yeah, not for us. We actually have a design checklist, or an experience checklist, that explicitly says no fake doors, no painted doors. So not on my watch. Products like JIRA are potentially being disrupted by shinier, simpler products, and we don't want to make them even less satisfactory. So no, we steer away from that. What I mean by a demand test is that kind of good-level experience: if we're making a promise in product to someone using JIRA or Confluence, we're able to deliver on that promise. The experience might not be as smooth or as slick as we'd like for a demand test, but once we get confidence, we go from there. But no fake doors on my watch.
Gavin Bryant 22:04
So thinking about confidence, how are you evaluating and thinking about confidence?
Adam Furness 22:11
Yes, I think confidence for us is about whether you have objective data that informs your opinion. And even then, the way you interpret data is still open to a bunch of interpretation and bias. At one end of the scale you probably have things like expert opinion, assumptions and beliefs, and at the other end you have hard data, at a certain volume, that you're able to make decisions from. And that data, I think, is both quant and qual. We also try to bake confidence building into our practice as a continuous exercise, so we're always looking to understand our funnel, and to understand how people perceive, and what their attitudes are towards, the experiences we're shipping. So yes, confidence is a scale, and ideally you want to build a high degree of confidence throughout your process.
Gavin Bryant 23:15
It's interesting. You touched there on the magnitudes and scales of evidence, from experiments that can establish cause and effect down to the ground level of expert opinion and other types of research. It sounds like the hierarchy of evidence, the pyramid. I always encourage people to try and climb their way up that pyramid over time, progressively building layers of evidence on top of one another. Each time you climb higher, you're strengthening the evidence, with experiments and meta-analyses obviously at the top of the pyramid. So I think it's a great way to think about it.
Yes, a journey over time. Okay, let's talk a little bit about customer problems. Thinking about the hypothesis, which you've mentioned is absolutely crucial to the whole growth, experimentation and design process, how are you thinking about customer problems, and identifying and prioritizing which ones are the right problems to chase and solve?
Adam Furness 24:08
Awesome. Yes, I think that's a good point about the pyramid. And I think as well, not all teams can build confidence over time, right? I don't think we need certainty in order to do anything. There are contexts where you're able to start with a low degree of confidence, dip your toes in and see how people are reacting. It's not an all-or-nothing approach. Experimentation teams should be empowered to be as scrappy and as learning-oriented as they need to be for their context. And if you're in an early-stage startup, or you're still looking for product-market fit, the opportunity cost of building a ton of confidence might just be too high for you. So it's a bit of an it-depends situation as well.
Gavin Bryant 25:33
One of the things that I wanted to quickly touch on there, Adam, was the perception-reality gap. How often do you see a disconnection between what you think you know and what customers actually need, value and want, their motivations and desires?
Adam Furness 25:38
Yes. In a mature organization like Atlassian, we have a pretty well-formed research and insights muscle, and there's a lot of existing research and a lot of secondary research that we can do. So we typically start anything we do by understanding what we think we know today, and then, depending on the initiative, we might need to refresh our thinking. There'll be an element of generative research that goes into that, and that could be quantitative analysis on a particular surface, touchpoint or funnel, and it could also be qualitative research, where we go and refresh our understanding of how people are working, how they're using our tools, their attitudes towards our tools, and so on. So the first step is just ensuring you understand what you think you know, and then generating new insights. We also have a practice of continuous discovery, which is ongoing customer contact, weekly conversations with customers. That isn't really about validating our direction; it's more about, again, understanding how people work and what their attitudes are to our tools.
Once you have those insights, we have a pretty strong practice of problem framing. I know it's probably one of those boring whiteboarding templates, but problem framing is really critical: having the muscle as a cross-functional team to really understand what you're trying to solve for the business and for the customer. And then from there, again, your hypothesis is key, making sure that the customer problem you've validated, or that you have a high degree of confidence in, is reflected in your hypothesis. So yes, I think that's a good summary.
That's a great question. It's my own personal opinion, but a lot of what we see isn't surprising. When you have a complex product, or a suite of quite complex products, and you hear things like 'it's really hard to get started' or 'I don't understand how these things connect', that's not surprising. But we do get challenged on certain details and certain perceptions. So I'd probably frame it like that: we don't learn a ton of new things, and there isn't a lot of dissonance at a big-picture level between what we know and what we hear, but when it comes to the details, yes, we do learn things, like how people perceive certain kinds of entry points or certain UI treatments. The detail is where there might be more dissonance between what we think we know and what we learn.
Gavin Bryant 29:25
I'm a big believer in zero distance to customer and keeping the perception-reality gap as tight and narrow as possible, and from what you've described it's a constant process that you're working through, week in, week out. One of the things I was thinking about there is the sense of product intuition. It seems like that intuition is constantly being refreshed and updated, and constantly kept in check against current customer needs, which ensures that a significant perception-reality gap can't open up.
Adam Furness 30:07
Yes, definitely. I think there's a constant kind of tightrope walk between being really strongly opinionated and leaning on your product sense, or your expertise as a UX practitioner, and being open to being challenged. It's that strong opinions, loosely held thing again.
Gavin Bryant 30:29
Yes, I like that. There's this need and push to be data-driven, but we can't completely discount the product sense and intuition that we've developed over a number of years of being very closely engaged with our customers.
Adam Furness 30:49
Yes, because intuition isn't this sort of magical thing that we're all born with. Intuition develops from our experience and from all the things that we've learned over time. So yes.
Gavin Bryant 31:00
Hey, thinking about prioritization, you mentioned that you've only got a small number of shots to take. Given that, how do you ensure that what you're focusing on is going to have the maximum impact for the customer and the business?
Adam Furness 31:20
Yes, I feel like there are so many prioritization frameworks. I especially love DVF, and I think designers do too, but for me it's quite simple, and we try to keep it simple. The approach and mindset we use a lot is DVF, which is desirability, viability, feasibility, the crux of that product innovation framework.
So: do we have confidence that people want this thing? Can we build it? And can we make money from it, or does it align with our product strategy? I think we do a pretty good job of executing on that mindset. Teams then choose whatever their prioritization framework of the month is to work through the details, and we also have a great bunch of tools at Atlassian for prioritizing projects. That aside, the only other thing I'd mention here is that we encourage teams to take a portfolio approach, like you would with your investments or your super, and have a mixture of optimization work and more path-finding projects.
So it's about having the mix right. Your optimization work is typically quicker; it helps teams, especially early in the quarter, get that momentum, and it's likely to result in a bunch of smaller wins that can have a compounding effect over time. But you also need to make sure you're planting new seeds and uncovering new opportunities. Path-finding projects, where you're potentially doing heavier discovery at lower confidence, are where you might be making bigger, more disruptive changes to your experiences, but ideally they're the ones you're reaping bigger rewards from, and they create new growth levers for your business. So that's the other thing for me: having that sweet spot, a good mixture of optimization and path finding, which relates to where your company is at, where you are on your growth trajectory, and your appetite for risk.
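As a rough illustration of the DVF-plus-portfolio idea Adam describes, here is a minimal sketch; the 1-to-5 scoring scale, project names and weights are assumptions for the example, not Atlassian's internal process or tooling. It scores candidate opportunities on desirability, viability and feasibility, ranks them, and then checks the optimisation versus path-finding mix of the backlog.

```python
# Illustrative sketch only (not Atlassian's internal tooling): score candidate
# projects on Desirability, Viability, Feasibility (1-5 each, assumed scale)
# and check the optimisation vs path-finding mix of the resulting portfolio.
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    kind: str           # "optimisation" or "pathfinding"
    desirability: int   # confidence that customers want it
    viability: int      # fit with monetisation / product strategy
    feasibility: int    # confidence that we can build it

    @property
    def dvf_score(self) -> int:
        return self.desirability + self.viability + self.feasibility

# Hypothetical backlog entries, purely for the sake of the example.
backlog = [
    Opportunity("Simplify cross-flow call to action", "optimisation", 4, 3, 5),
    Opportunity("Rework product-discovery entry point", "pathfinding", 3, 4, 2),
    Opportunity("Polish post-signup upgrade touchpoint", "optimisation", 4, 4, 4),
]

for opp in sorted(backlog, key=lambda o: o.dvf_score, reverse=True):
    print(f"{opp.dvf_score:>2}  {opp.kind:<12}  {opp.name}")

pathfinding_share = sum(o.kind == "pathfinding" for o in backlog) / len(backlog)
print(f"Path-finding share of portfolio: {pathfinding_share:.0%}")
```

In practice the mix you aim for depends on strategy, growth trajectory and risk appetite, as Adam notes; the sketch just makes the "portfolio view" explicit.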
Gavin Bryant 33:39
Yes, I really like that: multiple horizons and multiple solution spectrums, not just being focused on core optimizations, which can be a bit of a cycle that teams get stuck in, at the expense of differentiated offers and new innovations. One of the things that I wanted to touch on quickly, Adam: with the DVF framework that you use, there's a tension point between the V and the F. How do you manage situations where you have an opportunity and, upon evaluating it through the lens of DVF, you realize, hey, this is going to be long duration, difficult and high cost? Do you face many of those situations?
Adam Furness 33:58
We do. From time to time there can be an initiative that has a really strong technical dependency, especially those that have a combination of technical implications and financial implications. In those contexts we spend the time to work through it and, again, build confidence in those kinds of trade-offs.
There is a specific initiative that I've got in mind, but I'm not sure I can talk about it. It definitely happens, though, and those calls are made above my pay grade, thankfully.
Gavin Bryant 35:20
Yes, and in those types of situations you're balancing up the opportunity cost: the time and cost taken to stand up this experiment to learn could potentially negate a number of other opportunities under consideration at the time.
Adam Furness 35:42
Yes, I think that probably comes back to the portfolio approach as well, and also understanding things like runtimes and how you're goaled. Rarely would we stop an entire stream of work to go and discover this big, gnarly thing. Typically those kinds of decisions are made more at a funding level: do we believe this is big enough to trade off against all the other, more certain things we would have been able to address? So yes, it's an interesting problem to solve.
Gavin Bryant 36:22
Thinking about some practical guidance for our listeners and our audience when it comes to designing growth experiments: what are some of the common mistakes that you see teams make, and the opportunities for them?
Adam Furness 36:40
I think we touched on this one earlier, but the first is not understanding your current experience, how it's performing and why. Typically you have this mental model of how customers are experiencing your product, whether it's an acquisition funnel or just your core experience, but your mind would be blown by actually sitting down with your customers and seeing how they go through those experiences, which is why research is so important. So, really understanding the current experience, and also forming an opinion and validating or testing that opinion through the data you're gathering.
So yes, not understanding your current experience is the big one. Then, not knowing what your customers want or need: we make a ton of assumptions or, worse, we dress business needs up as customer ones, that old chestnut. The other one is not knowing how your small experiment contributes to the larger whole, being a little too short-sighted. What's that old adage? If you don't know where you're going, any road will get you there. If you don't know your strategy, any old experiment will get you that sort of short-term win, but will it result in sustainable impact aligned with your strategy? Probably not. And fake doors, painted doors, they're not going to help your product, especially if you have customers that demand more, or you're trying to use experience as a differentiator.
Gavin Bryant 38:37
Hey, I pushed a real hot button there with painted doors, didn't I?
Adam Furness 38:42
Yes, you did. And I know the experimentation purists love a painted door, so yes, I get a little bit excited about that one, but I'm sure there are times when it's appropriate.
Gavin Bryant 38:56
Yes, hey, let's get into our fast four closing questions now. These are just four fun closing questions to finish off with, Adam. Number one: what's your biggest lesson learned from experimenting?
Adam Furness 39:12
Yes, this is so ironic given the topic of our conversation is experimentation, but my biggest learning is that not everything needs to be an experiment.
Gavin Bryant 39:23
What would be an example of something that has been proposed to be an experiment that shouldn't be an experiment?
Adam Furness 39:31
Something that you have a ton of confidence in, that is just broken, that you should just bloody fix.
Gavin Bryant 39:38
Yes, so if you already know the answer to your question, it's not worth performing an experiment.
Adam Furness 39:45
I think so, most times. I'd say there's always a bit of wiggle room with ambiguity there, but yes, I think so. There are some things we see that are just plain old broken, or not as good as they should be, and we have confidence they should be fixed. So yes, that's my biggest learning.
Gavin Bryant 39:48
I think that's a pretty good rule of thumb, isn't it?
Adam Furness 39:48
Yes.
Gavin Bryant 39:49
Number two, what's a common misconception people have about experimentation and growth?
Adam Furness 40:15
Well, this is something else that I take personally, because the biggest misconception is that it's sloppy or based on hunches. Whenever I'm working with someone on the core product side, they're often surprised that we actually have a ton of insights and a ton of data to support our decisions. Sure, we can execute in a leaner way, but we are by no means sloppy. We are super close to, quote unquote, data, and very considered with how we approach things. A little while back we were having this conversation that in the past, maybe, growth and experimentation people were all cowboys, kind of gung-ho, sloppy, hacky. But I think now we've actually gone from cowboys to ninjas: we sneak behind the camp, make a few changes, sneak back and calmly observe what's going on. So yes, the misconception is that growth is sloppy and incomplete and the quality is a bit shit. I think that's not true.
Gavin Bryant 41:23
One of the things you hit on earlier, around impact and prioritization, is that growth experimentation is a strategic process. It has to be: right from our business strategies, KPIs and objectives, there's a series of links that runs all the way down to the experiments we perform, our hypotheses and everything in between. So yes, the process of growth and experimentation should always be a strategic process that's tightly interconnected, and it's definitely not randomized. To your point, it's always a series of experiments, never an isolated experiment. Number three: what's a strongly held belief that you've since changed your perspective on with experimentation?
Adam Furness 41:42
Yes, absolutely. This is probably related to the first one, which is the belief that small changes are better than larger ones. I think it depends. Obviously, the smaller the changes you make, the more easily you can connect them to the impact: if all you do is change a CTA and you notice a certain outcome, you can connect that back. But sometimes you need to reset the experience baseline, and the trade-off is that the change you make is going to be less attributable. Sometimes, if an experience is objectively ineffective, it can be better to just reset that baseline and redesign the experience. So yes, my strongly held belief that small changes are better has changed over time.
Gavin Bryant 43:11
What's the one key takeaway our listeners and our audience should take from our conversation today?
Adam Furness 43:18
I think fundamentally experimentation teams are trying to create measurable behavior change, and that might be a bit of a reframing for some, I don't know. But at the center of making that happen is understanding the people who use your product.
Gavin Bryant 43:37
Adam, where can people find you if they want to connect with you?
Adam Furness 43:42
You can find me on LinkedIn, and at furnessdesign.com.
Gavin Bryant 43:47
Awesome. If you've got some peers or colleagues, or you work in the Asia Pacific region, come along to the APAC Experimentation Summit on November 28. It's going to be a blast, it's a hoot, and you can learn more from Adam about good practices for designing growth experiments. Adam, fantastic chatting with you today. Loved the conversation, and we'll see you soon at the conference.
Adam Furness 44:15
Thanks Gavin.
“We have this position at Atlassian that fast and good can co-exist. It's not one or the other. Things that are really risky, where there are lots of implications on experience changes, need to be treated differently. The bar that we set for ourselves is designing an experiment that is good enough for the user to complete the task, will not do any harm to the experience, and is at an appropriate level of quality to test the hypothesis.”
Highlights
Growth gives you the tools, mindset and behaviours to de-risk new opportunities - 1). Identify business value 2). Declare assumptions 3). Break assumptions down and prioritise 4). Test assumptions with experiments and 5). Make decisions to drive business and customer value as fast as possible.
A common misconception about Growth is that it is sloppy, incomplete and poor quality. This is not the case - growth is highly considered and data-driven
“Big Bang” website releases - can work provided that the release is supported by a fast follow that can iterate on what you're learning in the market and what you're learning from site analytics
It’s a cross-functional responsibility of growth and experimentation teams to ensure that everyone is invested in and scrutinising what is being shipped. It’s not only the engineer’s job to think about the code they’re shipping, or the designer’s role to think about the experience
Problem identification is critical - the key priority of growth teams is to identify and validate meaningful problems to solve within business or strategic scope. Take the time to lean into understanding unmet or latent user needs
Teams don’t understand the current user experience well enough. For established products, analyse current experiences through journey mapping and heuristic analysis to understand the existing landscape better
Product Vision - your product vision is not an end-state or terminal end-point. Rather, your design vision should be like a gravitational pull, helping to understand what good may look like for the business and customer
The core premise of experimentation is to create customer behaviour change that we can measure
Experiment Hypothesis - should be the first thing that you design as a Growth team. A strong hypothesis must be underpinned by a deep understanding of the problem, a really clear scope and also a strong rationale for why you believe you'll have the predicted impact
The FIVE BIGGEST MISTAKES you can make when developing a hypothesis - 1.) Lack of critical thinking 2). Creating a hypothesis that reflects the solution you think you need to build 3). Not framing the hypothesis in terms of desired learning outcomes 4). Hypotheses are treated as “set and forget”, not being refined and adjusted as new learnings come to light 5). Not creating child hypotheses for a parent hypothesis
Atlassian Good, Better, Best Framework. GOOD - should be a satisfactory and accessible user experience for your experiment for the purposes of fast learning BETTER - the solution is iteratively improved and evolved from the initial experimentation experience BEST - the long-term vision for the experience
Experiment Design - should be consistent, coherent and executed to a high degree of finish. Users must be able to derive value from the product experience - they must be able to comprehend and complete their tasks in the product
DESIGN LIKE YOU’RE RIGHT, TEST LIKE YOU’RE WRONG - (A). We would only look to accept the Alternate Hypothesis if we gathered very strong, trustworthy evidence to suggest that we can safely reject the Null Hypothesis. You are never seeking to prove or validate a hypothesis; you only ever falsify the Null Hypothesis. (B). Our experiments should always be designed as a satisfactory customer experience that enables customers to still extract value from the product and complete their jobs. An experiment is never the cheapest, nastiest, fastest, crappiest experience
How do Designers build confidence around an opportunity? Conduct extensive Qualitative and Quantitative customer research to create a range of solutions to explore through experimentation. Confidence exists on a scale. We should be looking for ways to increase confidence over time
Setting a high bar - FAST and GOOD can co-exist - we should design an experiment that is good enough for the user to be able to complete their task and for it to be accessible. The experiment will not harm the experience and will be at an appropriate level of quality and scope so that we're able to test our hypothesis successfully
Make Prioritisation simple - Atlassian use the Desirability, Viability and Feasibility framework. Take a portfolio view and ensure there’s a good representation of Optimisation and Path Finding opportunities
In this episode we discuss:
Common misconceptions about Growth
Guiding principles for designing growth experiments
The five biggest mistakes for hypothesis development
Why you need to design like you’re right & test like you’re wrong
Atlassian’s Good, Better, Best Framework
How you should build confidence in a new opportunity
Why you need to set a high bar so Fast and Good can co-exist
The pitfalls of not understanding your current user experience
Establishing a balanced portfolio of Optimisations and Path Finding projects
Why every opportunity should not be an experiment