The work of taking on incumbents alternates between two confounding convictions: that a startup can convincingly puncture the something-for-everybody premise of industry giants, and that it can do so while juggling the detours and grinding repetitions of finding a super narrow, underserved segment that eventually offers the leverage to expand and reach, or surpass, said incumbent’s scale. Phew.
— How Butter is mounting a challenge for Zoom, Google Meet, and others
— Redrafting the standard PMF survey (and how frameworks vary in practice)
— Deploying pricing as an instant, ultimate PMF validator
— Why founder/team-market fit can prove especially decisive
This is traditional startup wisdom. If you try to do everything for everyone, you’re going to fail. With Butter, we were trying to take on Zoom. A formidable opponent given their scale and reach. So we knew that we had to focus from the beginning.
A bit of the backstory here is that before we started out with Butter, we were doing online workshops ourselves. Having been in consulting and having run a digital marketing agency, I’ve personally had to do a tonne of workshops over the course of my life.
And I could just feel that doing workshops remotely was very different from, and far inferior to, doing them in person, for two primary reasons that we’re addressing with Butter.
First, the technical overload the facilitator has to bear, which takes their focus away from facilitating the workshop. And second, the lower energy levels among participants in an online setting compared to physical sessions.
We definitely saw something there.
We knew that workshops presented a decently-sized market, but we also assumed it was limited to those in advisory services. Agencies. Consultancies. And the like.
What we realized after the first 6 months or so of doing user interviews was that there were way more people conducting workshops for many different reasons. In fact, our definition of a workshop had to be revisited.
Aside from client/consultant workshops, there were coaching sessions, internal trainings, product design sprints, and lots of other types of remote sessions. Workshops turned out to be this huge use case that had many different names.
Our research conversations also made it perfectly evident that generalist tools such as Zoom, Teams, or Meet were not fulfilling the needs of people running these workshops. That really became our original motivation behind founding Butter.
So, again, why was it important to find a beachhead market? Well, if you are trying to attack everything, then the existing, generalist players will win. But these same players won’t really care for smaller markets or niches that have very specific requirements.
That’s the beachhead.
A market you can approach first and then, more importantly, expand further from. Exactly why Butter is so focussed where we are. This is the only way we can win against incumbents.
The way that we think about workshops as a beachhead use case and a market, is that we see three types of synchronous sessions happening in the present and the near future.
First, huddles. Stuff that happens on Slack, Tandem, and similar products. The quick, ‘hey, how are you doing, let’s just hash this thing out,’ sort of conversations. Often taking place in small groups. Often ad hoc.
The second type is planned status meetings such as stand-ups or 1-on-1s.
The third are more collaborative meetings. These are very different, spanning a wide range of complexity. On the low-complexity end of the spectrum, you’ve got brief brainstorming sessions where, say, people design something together.
On the most-complex end, you’ve got workshops and deep training sessions, stuff where you are often dealing with larger groups and you’ve planned ahead extensively. These are the ones that make for our beachhead. That’s what we’re targeting first.
The idea, then, for a beachhead is to allow us the leverage to move downwards in the market of collaborative sessions. And over time become the dominant synchronous platform for all kinds of collaborative sessions globally.
If we can solve for the most complex collaborative sessions, we believe that we will be able to solve for all collaborative sessions.
On top of the clarity this gives us on how to approach GTM, the input it offers on product direction, and the speed and focus with which we can pursue it, are phenomenal.
The first step was simply doing a tremendous amount of interviews and structuring our thinking based on those interviews.
We interviewed everyone. Well, we reached out to anyone on LinkedIn who was doing anything that could be interpreted as a workshop. People who called themselves facilitators, consultants, trainers, and so on.
We spoke with them to try and identify the problems they had while running their sessions. And as this was right around the time when COVID struck, basically everyone was suddenly switching from physical workshops to remote workshops.
We aimed at identifying people who were having the biggest issues with running remote workshops on say, Zoom, and then we created different segments based on that.
Segmentation seems super simple when you’re deploying certain frameworks, but in reality, there are so many gray areas.
You can’t just say, ‘oh, these people are doing workshops, these people are doing trainings, these people are doing sprints.’ There’s always an overlap. Then there are people who’d simply describe these sessions very differently.
My point is that you can’t use pre-defined structures or frameworks to do that final leg of segmentation. Another nuance here is to focus on the similarities between the various segments you’re discovering, instead of the differences (this is especially true for building products — perhaps a little less so for GTM).
So we took the ~700 interviews we had conducted and placed them into buckets of similarities. Much easier said than done, of course.
That was the first step of segmentation. Attempting to figure out:
- ‘What similarities do we find between these various users,
- and in which buckets do users have the biggest challenges with the toolset that currently exists?’
We then aimed for the ones doing slightly more complex workshops: complex, yet often built on very specific frameworks where they’d repurpose a lot of material, and who were finding it very difficult to work with a generalist product.
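That bucketing step is simple enough to sketch in code. This is a hypothetical illustration only, not Butter’s actual process; the tags, the 1–5 challenge scale, and the sample data are all assumptions:

```python
from collections import defaultdict

# Hypothetical sketch: group hand-tagged interview notes by what the
# sessions have in common, then rank buckets by how severe the reported
# challenges with existing tools were (assumed 1-5 scale).
interviews = [
    {"tags": ["workshop", "client-facing"], "challenge": 5},
    {"tags": ["training", "internal"], "challenge": 4},
    {"tags": ["workshop", "design-sprint"], "challenge": 5},
    {"tags": ["1-on-1"], "challenge": 2},
]

# Each interview can land in several buckets -- the overlaps are real.
buckets = defaultdict(list)
for note in interviews:
    for tag in note["tags"]:
        buckets[tag].append(note["challenge"])

# Rank buckets by average reported challenge with the current toolset.
ranked = sorted(
    buckets, key=lambda t: sum(buckets[t]) / len(buckets[t]), reverse=True
)
print(ranked[0])  # the segment struggling most with generalist tools
```

The point of the sketch is the shape of the exercise, not the numbers: one interview contributes to multiple buckets, and the target segment falls out of ranking buckets by pain, not by picking a label upfront.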
We introduced the PMF survey a year or so down the line. We’d send these out once a user had had at least two Butter sessions where they’d hosted four or more people, which was the smallest gathering that could be construed as a workshop.
They’d receive the standard PMF question: ‘How disappointed would you be if you could no longer use Butter?’ Responses being: not disappointed, somewhat disappointed, and very disappointed.
The percentage of people choosing ‘very disappointed’ would make for our PMF score.
On top of this, just like Superhuman, no matter the selection, we’d ask them to explain their choice in some more detail. This was really helpful.
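Both the gating rule and the score reduce to a few lines of code. A minimal sketch, assuming survey data lives in plain dicts; the field names (`sessions`, `participants`, `role`) are illustrative, not Butter’s actual schema:

```python
def is_survey_eligible(user):
    """Send the PMF survey only after the user has hosted at least two
    sessions with four or more participants each -- the smallest
    gathering construed as a workshop."""
    qualifying = [
        s for s in user["sessions"]
        if s["role"] == "host" and s["participants"] >= 4
    ]
    return len(qualifying) >= 2

def pmf_score(responses):
    """PMF score = share of respondents answering 'very disappointed'."""
    if not responses:
        return 0.0
    very = sum(1 for r in responses if r == "very disappointed")
    return very / len(responses)

responses = [
    "very disappointed",
    "somewhat disappointed",
    "very disappointed",
    "not disappointed",
]
print(pmf_score(responses))  # 0.5
```

The common benchmark (popularized by Sean Ellis and the Superhuman team) is to treat a score of roughly 40% or more as a signal of product-market fit.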
And we learned a lot.
But again, it’s worth speaking of frameworks and how they pan out in practice. We learned that the formulation of the PMF question (‘how disappointed would you be if you could no longer use Butter?’) was fairly confusing even for native English speakers, and especially so for non-native English speakers.
Although now, the US is our biggest market, it wasn’t at the time. And when we reached out to survey respondents for the whys behind their responses, many power users wrote back saying that they had the opposite in mind.
Thus we ended up with a lot of inaccurate data. The follow-ups definitely helped us get some sense of what users really meant. Then we went ahead and tweaked the survey question itself. Leading with, ‘If you couldn’t use,’ instead of ‘How would you feel if…’
Resulting in: ‘If you couldn’t use Butter anymore, how would you feel?’
This isn’t perfect, but the responses are way more valid than with the original.
We wanted to price rather early.
But we observed that in the video conferencing space, there’s a reasonably high product hygiene threshold that we needed to hit in order to compete with the entrenched players. So we didn’t introduce Butter’s pricing until a little over a year after we launched.
We launched in June/July 2020 and the pricing came out in late October 2021. A major goal behind the pricing push was to learn which market had core users who were willing to pay for Butter. And this goal stemmed from a (soon to be validated) suspicion we had.
In that first year, we had seen big growth and adoption spikes from the education segment, in countries such as Taiwan and Brazil.
These were power users who were extremely vocal about how much they loved Butter. I should note here that we hadn’t built Butter for academic use cases. This was an emergent segment, boasting perfect PMF scores, not to mention excellent product engagement metrics.
Still, our doubts were twofold: 1) whether they’d be able to pay for the product, and 2) whether they’d continue to use Butter post-COVID (once lockdowns in their respective countries were over and they had to resume classroom sessions).
As soon as we introduced pricing, the answers to the above concerns became very clear. Most of these users weren’t willing to pay for a premium product and were planning on moving to classrooms once the COVID restrictions were lifted (and they indeed stopped using Butter).
We would have been in for a shock if we hadn’t introduced pricing. I truly believe that any PMF validation exercise is incomplete without the pricing test.
The fact that we understood the use case incredibly well was huge.
Not just the co-founders, but a bunch of folks in our product and commercial teams had had direct experience with remote workshops. There was a solid sense of team-market fit, not just founder-market fit. And I’d grade it rather high in terms of importance.
We’ve seen other competitors try to go after this market, but it seems clear at least from the way that they design and build products, that either they haven’t been going deep enough into the years of research available on this particular domain, or they simply do not have a fundamental enough understanding of what the use case is.
Because when you do have that understanding, you can go in and have conversations with potential users as equals. So you’re not starting with, ‘oh, yeah, I would like to understand this use case, could you please tell me more?’
You can instead bring anecdotes of your own experiences. And make users see that you really get their challenges. Which allows you to go much, much deeper in your research, faster.
— Crossbeam’s co-founder, Bob Moore, on the benefits and trade-offs of niching down
— Wethos’ co-founder, Rachel Renock, on upending status-quo pricing to serve expanding segments
— Pipedrive’s co-founder, Timo Rein, on founder-market fit and Pipedrive’s SMB choice