The single stream of gradual, x-percent-a-month, planned product improvements has its allure. Something early-stage startups aspire to build towards. A sign of scale.
Since folk’s inception, the team has been tackling one question: what does it take to tend to incremental changes and make new, breakthrough product bets with the same level of preparedness and consistency?
In the following exchange, Simo details a framework that has led them towards a meaningful balance:
When building a product, there are a million things you could be doing: improving the details, solving painful bugs, working on the infrastructure, and so on. The sheer number of choices can overwhelm lean, early-stage teams.
Are improvements the best way to spend scarce engineering time? Or should we build new bets that could be game-changing for the business? Because we can’t hone our product-market fit through bug fixes and small improvements alone.
So we’ve tried addressing this challenge based on the three key things that, we believe, should matter most to any startup:
- Rhythm. We think that iterating faster will help us stay ahead and potentially win the market.
- Focus. We want to be focussed on a few things at a time. Of the million things we could be doing, we want to aim at a few, do them really well, and make rapid execution a habit.
- Priority. Being focussed helps, but how do we make sure we’re focussed on the right things? That’s why user research and discovery (more on these below) are so critical to prioritizing the highest-value projects.
These beliefs have informed folk’s approach to project management, which we’ve humbly named The Cadence: a framework that allows us to regularly deliver sizable changes to the product.
For a project to fall into the cadence, it needs to be a high-impact bet. Something absolutely new and ambitious. Not a refinement to existing functionalities.
And when framing such projects, we make trade-offs based on the triangle of constraints: time, resources, and scope. To maintain a high velocity and rhythm of testing new ideas, we fix two of these constraints for every project.
We fix time (always an intense 3 weeks) and resources (often 2 teammates per project), and let the project team choose the scope based on what is pitched initially.
This translates to the following practical implications:
- We deliver at least one major increment for the user every 3 weeks
- No project runs longer than 3 weeks, which forces the tech team to make hard decisions on the MVP
- Product and design teams are in charge of centralizing all assets, so a project can start from a Notion note
- Each dev alternates one cadence project with one bugfix session
- Each project must start with a tech scoping phase: deciding what we’ll commit to and what we’ll say no to across the 1st, 2nd, and 3rd weeks. This helps align expectations, establish ownership, and clarify next steps.
- Devs should be able to start from a clean product context (analytics, clear wireframes, prototypes, user stories, etc.)
- And across the overall engineering schedule:
- 1/3rd of our time is spent on cadence projects that could change the face of the product
- 1/3rd of our time is spent on improvements projects
- 1/3rd of our time is spent on bugs, minor improvements, quality, and tooling
At the end of each cadence, we implement tracking using Segment. For each cadence project, we identify hypotheses we want to validate: how much the feature should be used, at what frequency, and so on.
Then we let our users try it out for a set number of weeks, and for the next iteration of the project, we write a quick summary of our findings.
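As a rough illustration of this kind of hypothesis check, usage events can be aggregated into a per-user weekly frequency and compared against a target. The event names, threshold, and helper below are hypothetical sketches, not folk’s actual Segment schema:

```python
from collections import defaultdict

def weekly_frequency(events, weeks):
    """Average uses per user per week, from (user_id, event_name) tuples."""
    counts = defaultdict(int)
    for user_id, event in events:
        if event == "feature_used":  # hypothetical event name
            counts[user_id] += 1
    if not counts:
        return 0.0
    return sum(counts.values()) / len(counts) / weeks

# Hypothesis to validate: active users use the new feature at least twice a week.
events = [
    ("u1", "feature_used"), ("u1", "feature_used"), ("u1", "feature_used"),
    ("u2", "feature_used"), ("u2", "feature_used"),
    ("u1", "signed_in"),
]
freq = weekly_frequency(events, weeks=1)   # 2.5 uses per user per week
validated = freq >= 2.0
```

In practice the events would come from the analytics pipeline rather than an in-memory list; the point is that each cadence project ships with a concrete, checkable number.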
The chief difficulty is switching from one project to the next. Cadence projects are deliberately structured without a cooldown period, so the transition is sometimes pretty challenging.
And what feeds these intense projects is equally intense and continuous research.
We are really serious about product discovery at folk.
And there are three key ways in which we pursue it:
First, analytically: tracking all major events in our product and knowing what works and what doesn’t. This alone doesn’t tell us what to build next, but having a sense of existing user behavior informs the direction of new projects.
The second way we do discovery is through live sessions. We observe our users going through the product during onboarding. Documenting where they’re clicking, the product paths they’re following for particular jobs, and points where they get stuck.
These calls don’t just alert us about UX issues and opportunities, but also allow us to offer a one-to-one service where users can customize their own accounts. In addition to this, we also record all user sessions through Fullstory.
The third aspect of discovery is really understanding users’ pains, the hard challenges, in their own words. For this, we do true user research: talking to current and past users about their use cases and gathering their thoughts on how well (or badly) we’re solving for them. This research fundamentally dictates what we build next.
And we measure the overall outcomes of these research insights through Superhuman’s product-market fit framework. We send out the PMF surveys — How disappointed would they be if they could no longer use folk? — exactly a month after we’ve onboarded a new user.
What matters to us is reaching product-market fit, and more specifically targeting world-class engagement (how many times a user uses the product in a month) and retention (how long they keep using the product).
So, exactly like our cadence projects, user research never stops. Basically, every new user that’s coming into the product presents an opportunity for learning something about the problem. And we’ve automated everything.
As soon as they take a certain action, we send them an email, inviting them to either an onboarding session or a user research session. I conduct a few of these every week, and since starting folk, I must have spoken with around 500 users.
Based on this research we’re continually discovering the most impactful projects we could be working on. And we prioritize these projects based on a simple framework of balancing internal effort and potential impact on the user pains we’re trying to address.
That is: how painful is a challenge for the user, or how much stickier does a particular improvement make the product? And then, how confident are we in tackling it? Are there things we need to deep-dive on? Do we understand what specifically needs to be done?
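One way this balance could be made mechanical is a simple score that weighs impact and confidence against effort. The formula, scale, and project names below are illustrative assumptions; the text doesn’t prescribe a specific formula:

```python
def priority_score(impact, confidence, effort):
    """Higher impact and confidence raise the score; higher effort lowers it.
    Inputs on a 1-5 scale (the scale and formula are illustrative)."""
    return impact * confidence / effort

# Hypothetical candidate projects, not folk's real backlog.
projects = {
    "bulk email sequences": priority_score(impact=5, confidence=3, effort=4),
    "faster contact import": priority_score(impact=4, confidence=5, effort=2),
}
ranked = sorted(projects, key=projects.get, reverse=True)
```

Here a lower-effort, high-confidence project outranks a bigger but riskier bet, which matches the spirit of balancing internal effort against potential impact.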
For each project, we consolidate data from all the research we’ve done, which is then turned into wireframes, prototypes, and early designs, so that we’re completely execution-ready and everything is clear before kicking off a new cadence. The same set of learnings helps us sequence our backlog.
We keep a manageable backlog. We don’t want it to be exhaustive. We make sure it is prioritized instead and the most important issues are on top.
To decide which improvement projects and bugs we work on, everything is placed in the backlog, arranged with a very simple prioritization framework:
- Priority 1: reliability, a user should be able to rely on folk. Anything that jeopardizes the platform should be fixed asap
- Priority 2: experience, a user should enjoy using folk. We remove all pain and friction points in the order the user faces them
- Priority 3: helpful, folk should support the user in their use cases
We review these priorities regularly: investing in planning at the beginning of each quarter, reviewing the roadmap weekly, and starting Mondays by assessing key objectives.
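Mechanically, keeping the most important issues on top can be as simple as sorting the backlog by tier. The issue titles below are made up for illustration; only the three tiers come from the framework above:

```python
# Tiers from the framework: reliability first, then experience, then helpful.
PRIORITY = {"reliability": 1, "experience": 2, "helpful": 3}

# Illustrative backlog items, not real folk issues.
backlog = [
    {"title": "Polish empty-state copy", "tier": "helpful"},
    {"title": "Sync drops contacts on retry", "tier": "reliability"},
    {"title": "Reduce clicks to tag a contact", "tier": "experience"},
]

# Sort in place so the most important issues are on top.
backlog.sort(key=lambda issue: PRIORITY[issue["tier"]])
```

Because Python’s sort is stable, issues within the same tier keep their existing order, which preserves any finer-grained ranking already in the backlog.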