Flipping opinions, naming behaviours and untangling your emotions. Most ventures start with ideas, but once you get going, adding every idea to the backlog is how you strangle your company.
The fix is to test every feature idea before you put time into building it, so we asked an expert to show us how to stop trying to build everything…
Hilde Franzsen, Brand and Marketing Director at Inkblot Design, has been working at the intersection of brand strategy, UI/UX and digital product development for years, and she’s seen SA founders make this exact mistake.
She calls the antidote a UX hypothesis…
The move: write a hypothesis before you build anything
Opinions like "Let's add an AI chatbot to our onboarding" sound great in a product meeting, but does the customer actually want it? The trick is to write it as a hypothesis that can be tested, for example:
"We believe that replacing the onboarding FAQ with a conversational AI assistant for first-time users will increase the percentage of users who complete setup within their first session."
Same idea, different footing. The hypothesis can actually be tested.
How to really turn a feature idea into a testable hypothesis
1. Start with the opinion, then flip it into a hypothesis
Every feature idea starts as an opinion. Write down exactly what the team wants to do: "let's add X" or "let's change Y". Then restructure it using one of two templates, depending on the scale of the change.
For a larger feature or flow change: "We believe doing [X] for [these users] will achieve [this outcome]."
For a smaller change, a button label, a CTA placement, a copy tweak: "By making [this change], we believe we will [increase/decrease] [this metric] because [this reason]."
The hypothesis forces things the opinion skipped: who the users are, what behavioural change you expect, why you think it might work and how you'll measure whether it does.
2. Name the behavioural change, not the feature
"We believe adding a dashboard will improve user experience" is not a hypothesis, it's a feature description with the word "believe" in front of it.
A real hypothesis names a specific, observable change in how users act. Not "improve experience", but "increase the percentage of users who complete setup within their first session." Not "users will like it", but "users will act on a specific recommendation."
Hilde's rule of thumb: If you couldn't sit five users down and watch whether the hypothesis is true or false within a single session, you haven't named the behaviour specifically enough.
3. Check your own attachment before you proceed
Features carry a psychological trap: you spend weeks building one, get emotionally attached to it, and then struggle to analyse it objectively.
With the UX hypothesis, you protect against your own bias: write down what will be tested, test it and move on.
4. Pick a cheap test before you write a line of code
Once the hypothesis is written, test it with the least effort possible:
Watch five people use it. Have them share their screen on a Zoom call and ask them to think out loud.
Change one thing and measure. Swap a CTA, adjust an onboarding step or rewrite a label. Track one metric for a week before trying anything else.
Run a Wizard of Oz test. Essential for AI features: before you write a single line of code, do what the AI would do, manually, for five real users. Pretend you are the AI and engage with them. If users don't engage when a human delivers it, they're unlikely to engage better with an AI.
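The "change one thing and measure" test boils down to comparing one metric before and after the change. A minimal sketch, assuming a hypothetical event log where each first session is recorded as a user ID plus whether setup was completed (the data and names here are illustrative, not from the masterclass):

```python
# Compare one metric (setup-completion rate) for the week before
# and the week after a single change, e.g. a swapped CTA.

def completion_rate(sessions):
    """Share of first sessions that ended with setup completed."""
    if not sessions:
        return 0.0
    return sum(1 for _, completed in sessions if completed) / len(sessions)

# Hypothetical logs: (user_id, completed_setup)
before = [("u1", True), ("u2", False), ("u3", False), ("u4", True)]
after = [("u5", True), ("u6", True), ("u7", False), ("u8", True)]

lift = completion_rate(after) - completion_rate(before)
print(f"before: {completion_rate(before):.0%}, "
      f"after: {completion_rate(after):.0%}, lift: {lift:+.0%}")
# → before: 50%, after: 75%, lift: +25%
```

The point of the hypothesis is that it names this one metric up front, so the decision after a week is a comparison, not a debate.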
5. Make the decision based on what you observed
Did the behaviour change the way you expected, measurably? Build only if your data says yes. Otherwise, don't build it and move on.
Why this works in South Africa
First, most SA product teams are small: one or two founders doing the work of five, with no dedicated UX researcher, no A/B testing infrastructure and no budget for a research lab. The hypothesis framework is designed for exactly that constraint: simple, cheap ways to test before building.
Second, the cost of a wrong build in SA is higher than it looks. When runway is measured in months, not years, and engineering time is either your own or a contractor you're paying out of pocket, a feature that ships and doesn't get used is a financial hit you can't afford.
"A beautiful product that solves the wrong problem is just expensive decoration." Hilde Franzsen
Want the full playbook?
This post covers one habit from Hilde Franzsen's masterclass, From Solutioneering to UX: How to Prioritise Designing the Right Things, available in full inside The Founder Collab. The full session goes much deeper. Here's what's inside:
How to diagnose whether you're a serial solutioneer, and what to do about it
How to write a value hypothesis that tests whether your product is worth building before you commit engineering time
How to find your Core Value Action, the single moment your product first delivers real value to a real user
The psychological traps (confirmation bias, sunk cost, endowment effect) that lead smart founders to build the wrong things
Three cheap testing methods any SA founder can run without a research lab or a big budget
The Founder Collab has 40+ masterclasses from SA's best operators across sales, UX, fundraising, paid media, automations, and more. Join The Founder Collab to access the full session.