The Insighter's Club Podcast

Cutting Through the AI Noise

Written by Stravito | Oct 10, 2024

In this episode of the Consumer Insights Podcast, Thor is joined by Aaron Cannon, CEO & Co-Founder at Outset.

 

The secret to unlocking real value from AI in qualitative research is thinking like a user, not an investor, and being discerning about the data.

In this episode, we’re joined by Aaron Cannon, CEO & Co-Founder at Outset, as he explores how AI can eliminate the trade-offs between depth and speed in qualitative research to gather richer, more actionable insights at scale—without the traditional time and cost limitations.

We also discuss: 

  • How to avoid common pitfalls in AI adoption by focusing on user needs and real-life applications.
  • The importance of thinking iteratively and adapting as research progresses.
  • How to balance depth and scale when conducting research.
  • The power of mixed-methods research + how AI can enhance both qualitative and quantitative approaches.

You can access all episodes of the Consumer Insights Podcast on Apple, Spotify, or Spreaker. Below, you'll find a lightly edited transcript of this episode.

 

Thor:

Hello everyone and welcome to the Consumer Insights Podcast. Today, I'm excited to have an amazing insights leader joining me for what I know will be an enlightening conversation. I'm thrilled to introduce today's guest, Aaron Cannon, co-founder and CEO of Outset, an AI-moderated research platform backed by Y Combinator. Outset is a new startup looking to revolutionize the qualitative research landscape by harnessing the power of ChatGPT to conduct in-depth interviews at scale. Thank you so much for joining me, Aaron.

 

Aaron Cannon:

Thank you, it's awesome to be here.

 

Introducing Aaron

 

Thor:

Okay, well, I'm very excited to have this conversation with you today. And I would love it if you could take a couple of minutes to tell us about yourself, your role, your company, how you got to where you are today. How did it all begin?

 

Aaron Cannon:

Yeah. Well, as you mentioned, I co-founded a company called Outset, and I'll tell you more about that. My career started... I was actually kind of an accidental researcher.

I joined an innovation consulting firm called Doblin. It was the firm that wrote the 10 Types of Innovation, which some folks may have heard of, and it was later acquired by Deloitte. And in that role, I went from being somebody who knew very little about this world to somebody who was very immersed in it.

And we did deep qualitative research for new concepts, for new services, everything from ethnographic in-home to shop-alongs to large-scale, even quant. And that is really where I cut my teeth. But then I decided I wanted to go build stuff and not just be on the advisory side of things, and I've spent the last 10 years or so leading product and design teams. So that's where I've been, more in the digital space as well as some hardware, at places like Pebble, the first smartwatch company, Tesla, and then more recently several kind of enterprise startups. And through all of that, my job has been to build stuff people actually want. And that is always based in research. Through that career, I've felt both the joys of successful research and the struggles of the limitations of today's methodologies. And I think that's where the genesis for me was. It was like, hey, I want to better understand what people want so I can build the right thing, so I can build the most incredible solutions and products and services. And it felt like the toolkit of methods we had to pull from was incomplete. What I mean by that is, when we have a meaty question we need to answer, we are always left with a trade-off: do you want depth, or do you want speed and scale? Depth is doing good qualitative research, which can take weeks of time, if not months. It is deep and thoughtful, but it is also expensive and time-consuming and has a lot of limitations. Then you go to the quant world, and it's like, great, 5,000 people, no problem. You just scale up. You can get it done in a couple of weeks or even less, and it's much more cost-effective. And it just seems like, why are we putting ourselves in this trade-off constantly? I think that trade-off has been just a function of what our technology is capable of. I think about it like this: the internet was to quant as this moment of AI is to qual. Right? The internet opened up the ability to scale quant, but with AI, we can now open up the ability to scale qual. And so with that, we built the first AI-moderated research tool just over a year ago, where we are putting large language models in the position of actually conducting and moderating conversations with participants.

 

Insights definition 

 

Thor:

Wow, there are some highly quotable sentences there, so I really appreciate that. Now, as you know, we love to start with the basics, and one of those things is the definition of an insight. So how would you define an insight? And, having done what you've done and gone through the journey you've taken, how has that definition changed over the course of your career?

 

Aaron Cannon:

Yeah. I think early in my career, what I would have told you was that an insight is a key finding. It is something I have learned from some set of data or some kind of discovery. And, you know, that's not wrong, right? It's still true to some extent. But as I've progressed in my career and been in positions of more decision-making and control,

I find that an insight is actually an action. An insight is, of course, grounded in what your findings are, what you have learned, but it is really something that can or should ultimately become an action.

It should ultimately be a decision that you're making. And all too often, I think, in the world of research and insights, we say, do research and here are our findings, and we stop there. We let somebody else take that into action. And the reality is a great insight is an action.

 

Thor:

God, this is getting off to such a good start. Now, when we were preparing for this episode, you mentioned that you disagree with some of the more extreme views that people can take towards AI: that it's either going to revolutionize the industry overnight or it's worthless. Can you tell us a bit more about your thinking here and help us understand how you got to that viewpoint?

 

Aaron Cannon:

Yeah. Well, OK. When my co-founder and I started playing around, prototyping stuff, and started building on it, it was actually pre-ChatGPT when we started collaborating. And we hadn't started the company yet. But in that early stage, AI was, hey, this is a cool new thing we can use, a tool, a capability. Then ChatGPT comes. The world blows up. And there's the classic thing of all the crypto bros becoming AI bros, right? Everything in the tech world became all about AI, which created this inflated sense that the world had changed overnight. And the reality is always somewhere in between. It's not worthless. There is real capability there. I mean, Outset would not exist without this power, and our customers would not be able to do what they do without it. And at the same time, life goes on as normal, right? The weeks continue to go by, and change is often incremental in different places. So here's where this really came through for me. I was at a conference last fall, one of the traditional insights conferences, and I remember every single session was about AI. I mean, every session. Everything was about AI. And only one session out of all of them said anything practical. So why is that? Everybody thought this was the moment to talk about AI, but they had nothing to say yet. Right? Because there wasn't anything to do yet. People hadn't figured it out, the chips hadn't fallen, there wasn't clarity about what this meant for the insights community. And so people were going on about synthetic users. That might be a thing that matters at some point; it's an interesting side conversation we could have. But no customers out there are interviewing or surveying synthetic users instead of people, and certainly not last fall. So it's like, why was everybody talking about it? Because it sounded cool. It was a very investor-like pitch. And I think that frustrates me, which is funny because I'm an AI founder, right? But it frustrates me because it makes everything AI seem kind of jargony and not real.

 

Moving beyond AI jargon

 

Thor:

Let's double-click on that jargon topic, because you did also mention in that call that you think everyone needs to stop listening to AI jargon and the platitudes about the future. Could you give us some examples of such platitudes and help us understand how you believe we should think instead?

 

Aaron Cannon:

That's right. So I think we should all think like users. Think like the people who would use something in a day, on a Wednesday, and feel like, man, that was productive. That's how we should think. Most of us are not venture capitalists. If you are, great; maybe think about these massive trends over five-year time horizons and where you want to put your money. But for the rest of us, think like users.

So platitudes, at the highest level, are talking about the revolution we're undergoing. Revolution is actually a very loaded word, and maybe we shouldn't use it all the time with respect to technological improvements.

But even at a lower level than that, there's talking about the future of synthetic data. There are some interesting things happening there, but that is not something I, as a user, have a lot of reason to go adopt tomorrow. There's also a lot of just adding AI to everything. Just because you use one AI bit, because you have a word cloud now or you're doing text analysis counting with ChatGPT, it's like, cool, but you can't rebrand your entire company as AI and then expect people to take you seriously. And so I think we should be users, and we should be discerning about what we are talking about, what we're debating, what we're calling important AI progress. I really respect when people say, as in the one practical session I saw at that conference, we used ChatGPT to come up with some cool concept ideas. Great. That's honest. That's real. A user did that, and the next day, they saw value out of it.

 

Thor:

So let's kind of piggyback on that for a second. You mentioned that one cool presentation and how they suggested, you know, that you could effectively use it to come up with some concepts. But let's talk about the bigger theme. So if we continue on the theme of change, but shift the topic slightly: I know that you think traditional qualitative methods could use some updating. Tell us more about that.

 

Aaron Cannon:

So, kind of what I said towards the beginning, which is that qualitative research is deeply important and thoughtful and critical to what we build. And it has always also suffered from the limitations of technology. It has suffered from the reality of human time. We have 24 hours in the day, and we are expensive, and we have our own needs. We don't want to be up at 3 a.m. to interview somebody across the world. So those limitations are reasonable. They're fine. They're real. But we can now break down that limitation and expand what qualitative research can do. So, the updating I think qualitative research needs: I think it needs to be more iterative. I think it should be more mixed-methods. And I think we should raise the bar of our expectations for what it can produce. I'm happy to unpack those a little bit.

 

Thor:

Why don't you tell us, you know, what benefits do you see with having larger sample sizes in qualitative research? Are there any downsides to taking this approach?

 

Aaron Cannon:

Yeah. That's right. So I think the mixed-methods thought is: sometimes you want to have some very deep exploratory conversations, and then sometimes you say, you know what? I just need to understand that qualitative data. I know the questions I want to ask, but I want to get deep with a lot of people. So, to your sample size question: why is that important? Well, why do we run surveys? A sample gives you the ability to cover multiple segments, to cover across markets, to learn what you would learn in a dozen interviews but replicate that five times in five different areas that are important. And so I think scale is the ultimate partner to depth. Depth is, how do I get in your head, Thor, to understand your needs? That's depth. And then I'm like, well, I'm not building just for Thor. I'm building for Thor in five different countries, in five different segments, in five different markets. Multiply five three times over; that's a lot of people. How much can I extrapolate one conversation with Thor across all of the people I'm building for? So it's not to say that scale is always the thing you need in every moment of research, but that's the mixed-methods point. That's where I think a great qualitative project may have some deep one-on-one, you-and-I-in-person, in-your-home conversation about your life, and then we can marry that with scale, right, where we're actually going deep with dozens or hundreds or even thousands of people.

 

Thor:

I like that. I like that. And if you reflect a bit on your career, you know, things you've done in the past: if you could go back in time and give yourself those tools, what would this have enabled you to do? And ideally, can you tie it to something very specific, like a specific moment in time or a specific part of your life?

 

Aaron Cannon:

Yeah, I think there's a very quick example. When I was leading product at Triplebyte, which was a recruiting tech startup that served Fortune 500 companies and helped them find engineers, we did a project where we really, really reshaped our product. And we knew we needed to gather deeper, thoughtful insights from our users. We did, I think, about a dozen interviews with engineers, and we made a lot of decisions based on that. And those engineers gave us thoughtful answers. But what we later realized is that those engineers were not appropriately representative. When we went out and did research, we tended to target these very hireable engineers, very senior folks who were saying, hey, my challenge is I've got all these interviews and I can't juggle them, and I don't want to do all these tech screens, I want to go straight through. These are kind of champagne problems, right? We were talking to them, and we built for them too. But what we had overlooked was the fact that the vast majority of engineers struggled to get a job. It's a funny thought, because we all think of engineers as among the most hireable. But the reality of any labor market is that 80, 90% of people are looking to get ahead, looking to advance, and their challenge is not too much interest in them, right? The challenges they faced were different. And so, after having learned from that first set of interviews, we wound up having to pivot our strategy to say, you know what, we need to also build a lot more to help those who are really struggling to get those interviews. And so I think that was an interesting moment where the sample size of qualitative data, and the cost of gathering that qualitative data, meant we couldn't have kept doing more of it, and that really held us back.

 

Embracing an iterative mindset

 

Thor:

That makes a ton of sense. Now, in the very beginning of this interview, you mentioned the idea of adopting more of an iterative mindset. Now, I know you're quite a big advocate for considering more iterative approaches. Could we take some time to expand on this a bit?

 

Aaron Cannon:

Yeah, I mean, iterative is a funny word. It basically just means learning quickly. So I think there's an old world, a traditional world of research, where you have phases of a three-month project, maybe a segmentation, where you're going to go do a bit of qual, you're going to do a quant exercise, and then you're going to publish this long report, get some feedback on the slides, and then send your final output. And that's usually staged in such a way that it's like: I get this data, then I get this data, then I package it together. The problem with those approaches is that they kind of ignore the fact that you might be surprised in your third interview, but because you've already scoped out the whole project, it doesn't give you the flexibility to continue to build on that learning. And that is partially because, again, I feel like I keep coming back to this, it's partially because of the limitations of our methods so far. If you're going to do a deep qualitative study... I'm thinking back to when I was at Doblin, and we did a long, deep qualitative study with a client who was building the TurboTax for getting into college. So, very cool product, very cool idea. We were going deep with 16-year-olds and their parents and what their journey was, and we had this whole thing scoped out. A four-week project is like three weeks in the field, maybe two weeks of synthesis, and then we're going back to the client. And then there's a phase two. And what we learned, you know, I think it was the second week, we had an interview, and a lot of this was about social sharing. This was 10 years ago, social sharing of your progress. And the sixteen-year-old looks at me and goes, there's no way I would share this with my friends at school. I'd get my ass kicked. And I was like, wait, yeah. Like, some kids, you know... I think he was in kind of a lower-income part of the South Side of Chicago.
He's like, this is not for me. I don't need to socially share this. I want to do this for my family. I want to go to college. I want to have a great experience. But the problem was we'd already scoped out so much of that work that it was very hard to go back and pivot. Right? It's a bit of a longer story, but what I push for there is: how do you give yourself the tools and the openness to learn from it?

Change your plan, change your methods. And I think that in qualitative research, we just don't have enough iterative tools.

You have the time, you have the scheduling; you've scheduled it three weeks out. And I think what we found is, using AI to conduct conversations, for example, I can send 10 of them today, learn something interesting, and send 50 tomorrow. Then I can send another 10 the next week, and then I can send 500 the next day. And I think that ability to iterate and adapt is something that has been missing in qualitative practice.

 

Thor:

I fully see where you're coming from. I think that's really interesting and super powerful. Now, how would you recommend that listeners go about taking a more iterative approach in their work? What's step one? What's step two?

 

Aaron Cannon:

Yeah. So, I mean, listeners are probably either working in-house somewhere on a research team or they're working at a consulting firm, and I think the answers are different. If you're doing this in a consulting kind of approach, the question is how you package up a project that has flexibility. I used to be a consultant, and I've worked for clients, and I know that's a challenge, because they want to know everything upfront. Not to mention they may have their own hypotheses about what you're going to find. And I think it's about structuring iterative phases where: this is the general plan, but here are the checkpoints in the project where we can readjust, here's where we can revisit the plan, and here are the tools we can use to do that. So, take a scoping approach that assumes you're going to be surprised, and then build the project around that. But that's from a scoping standpoint. From a practitioner standpoint, especially in-house, I think the question is how you get your stakeholders and your own team to expect surprises and have the tools to iterate quickly. And I think AI, and I'll avoid all platitudes to be consistent with my push here, AI has the opportunity to help you iterate. That means you can do synthesis faster in a lot of ways, and you can gather qualitative data faster. And speed is the necessary foundation for iteration. So say I can, for example, run 100 AI-moderated interviews on a Monday and get the AI to synthesize it, which is obviously all stuff Outset does, but I don't want to be too self-promotional here; there are a lot of tools that can do this kind of stuff. If you can absorb that data, learn from it, and say, whoa, there is something interesting happening here, then let's go have three ethnographic-style in-depth interviews. OK, we've identified the weird tension here.
Now let's run another hundred or a couple hundred AI moderated conversations to go expand on that. And then, hey, maybe we want to run our traditional segmentation survey the next week. Now we can build in all of these qualitative learnings to that. So that's an example of how you might use iterative AI tools to actually learn faster. So that's what I would encourage.

 

Thor:

That's such good advice. And are there any obstacles that you think we should be mindful of? Are there any things we could do to preempt these obstacles?

 

Aaron Cannon:

I think there's one obstacle, which is your stakeholder. But if you are more iterative and can use tools to make you faster, then your stakeholder will ultimately be happy, right? So the obstacle is that stakeholders have expectations. They have expectations of methods and outputs, and pushing back against those is difficult. However, I think now happens to be a moment, because of all the platitudes out there, because the world of AI is so prevalent, where you can absolutely push them to think in a different way: think more iteratively, think more speed-oriented. That is an obstacle you can get over. The other big obstacle, I would say, goes back to our earlier discussion: AI is everywhere, and that means it is very hard to sift through the noise. And if there's one takeaway from this conversation, it's think like a user. Think, what can I use tomorrow in my day-to-day work that will just make my life better? If you think like that, and those are the questions you're asking any vendor who's trying to tell you about their stuff or their services or their tools, then I think you can do a great job finding the things that make you more efficient.

 

Thor:

It's very simple, but very powerful advice. Now, I think you've shared some really insightful learnings with us today, Aaron. And if you had to summarize, and you kind of did, what's the one big takeaway you want listeners to take away from this episode? Is it thinking like a user? Is there anything you want to add to that?

 

Aaron Cannon:

Yeah, I think what I'll do is summarize my thought here. I'm an AI founder, so we're talking about AI, right? AI and research is very exciting. If there's one thing to take away, it's think like a user and be discerning. Don't accept everything as, oh my God, AI has changed everything. And don't write it off as, oh, it's a hype cycle. Look at it like a user, like somebody asking, what can I do tomorrow with this new technology that will make my life better? That's probably the best advice I could give.

 

Insights lunch guest

 

Thor:

Very, very good advice, Aaron. Now we've come to the end of this recording. And as you know, there's one question I have to ask you, which is who in the world of insights would you love to have lunch with?

 

Aaron Cannon:

That's a tough question. Yeah, there are a lot of folks across different disciplines that I find very interesting. But, you know, this one will be a little personal, just to my own past.

I would love to have lunch, or to have had lunch, with Jay Doblin, who founded the firm that I was at, and who was a pioneer in a lot of that question of how you bring insights and marry them with product and design. And I love that because, you know, as I've gone through my career, as I kind of mentioned earlier, I've become more pragmatic. Insights is not an end in itself, but rather something that drives us to build better. And I would love to go back to the start of that and ask: how do we bring research, design, and product all into one conversation?

 

Thor:

Wow, this has been such an amazing conversation, Aaron. Your perspective on insights is truly noteworthy, and I think we can all learn from it. Now, before we end today's episode, I'd love to return to some of the moments of our conversation that really stuck with me. When I asked you about the definition of an insight, you told us that early in your career, you would have said an insight was something you've learned, a core understanding. Now, however, you believe an insight is actually an action; it should ultimately be a decision you're making. You reminded us that qualitative research is deeply important, but has historically suffered from the limitations of technology. We have historically not had sufficiently iterative tools, but with AI, we are now empowered to iterate and adapt at a new level. And lastly, when talking about AI, you encouraged us to think like users. Be discerning. What can I use tomorrow that would help me do my work better? Most people are not investors looking to capitalize on the macro changes AI will enable over the long term; thinking like a user will help you get a crisper and more balanced view on things. I know I've learned a lot from talking to you today, and I'm sure our audience has as well. Thank you for joining me today, Aaron.

 

Aaron Cannon:

Thank you so much. It's been a pleasure.