Two of the most common UX evaluation methods are heuristic audits (an experienced practitioner walks the site and applies a recognised heuristic framework, usually Nielsen's 10) and usability tests (real users attempt real tasks while you watch). Both are valuable. They surface different things. The question we get asked most often is: which one first?
Short answer
Heuristic audit first. Usability testing second. Almost always.
Why heuristic audit first
Heuristic audits surface the issues that an experienced UX practitioner can identify on sight: pattern violations, accessibility blockers, navigation inconsistencies, form-design mistakes, content-hierarchy failures. These are the issues you'd be embarrassed to put in front of users — they're "I should have known better" issues.
If you skip the heuristic audit and run usability testing first, you spend €5,000+ on real users only to surface the same obvious issues a €1,500 expert audit would have caught in two weeks. Worse, you've burned your participant pool: those five-to-eight users have walked through your interface once, so they're no longer naive, and the second wave (after you've fixed the obvious things) has to be a different cohort.
Heuristic audit first cleans the deck. By the time real users come in, the obvious failures are gone and the testing focuses on the genuinely uncertain — the parts where an expert wouldn't have predicted what users would do.
Why usability testing second (and not just instead)
Heuristic audits have a specific, well-documented blind spot: they catch what an expert can predict, and miss what an expert can't. Usability testing surfaces the surprises:
- The button that everyone confidently designs as "Save" but every user reads as "Submit and end."
- The icon that's universally understood by designers and universally misunderstood by anyone over 50.
- The product photography that's beautiful to the design team and confusing to the customer ("which one is in the box?").
- The dropdown that no user even sees — eye-tracking shows their gaze never lands on it.
An expert auditor cannot reliably predict these. They emerge only from watching real users. So the second pass — after the obvious issues are fixed — is where the real product insight lives.
The exception: when usability testing comes first
Two scenarios where you'd reverse the order:
- You're testing a wholly new design or product with no prior user feedback. There's nothing yet to audit because there's no implementation. Lightweight usability testing of prototypes or wireframes is the right first move; heuristic audit applies once you have a real interface.
- You suspect the problem is a fundamental misunderstanding of your audience — e.g. you've launched a product to a market that doesn't actually want it, and you can tell from analytics. UX is downstream of that. Use user research (interviews and stakeholder work) first, then evaluate the design after the audience question is settled.
What each method costs
| Method | Typical cost | Duration | Findings volume |
|---|---|---|---|
| Heuristic audit | €1,200-€3,500 | 2 weeks | 30-50 prioritised findings |
| Usability testing (5 users) | €2,500-€4,500 | 4-5 weeks | 10-15 high-confidence findings |
| Both, sequenced | €4,500-€7,000 | 6-8 weeks total | 40-60 findings, mix of "obvious" and "surprising" |
What "5 users is enough" actually means
The Nielsen and Landauer research (1993) that established "5 users finds roughly 85% of problems" remains broadly true for usability testing on a single task flow with a single user segment. What it specifically does not mean:
- 5 users is enough across all your audience segments. If you have B2B and B2C, that's 5 each.
- 5 users is enough for accessibility testing with assistive-tech users. Different abilities, different patterns; you need representation across the spread.
- 5 users gives you statistical certainty. It doesn't. It gives you qualitative signal — the right participants will surface the issues, but you can't quote percentages off five.
How we sequence engagements at Usability.ie
- Brief call. 30 minutes. We sort whether you need heuristic-only, both, or something more upstream like user research.
- Heuristic audit (week 1-2). We deliver findings; client team or digitaldesign.ie remediates the highest-severity items.
- Pause for fixes (week 3-5). Sometimes 2 weeks, sometimes 6 — depends on the size of the fix sprint.
- Usability testing (week 6-9). Now run on a cleaner version. Findings are higher-quality because the noise is filtered out.
- Final remediation. Implement second-wave findings. By this point you're addressing genuine product-design questions, not basic UX hygiene.
Who skips this and pays for it
The most common pattern we see: client commissions usability testing without an audit first, gets back a report dominated by issues like "users couldn't find the contact button" (an obvious heuristic failure), then has to commission a second wave six months later after fixing the obvious things. That second wave finds the actually-interesting issues, but the client has spent twice what was needed.
The next most common: client commissions a heuristic audit, fixes everything, considers the work done, and skips usability testing. They miss the surprising user behaviours that the expert audit could never have caught. Their conversion rate stays mediocre.
How to commission
For a heuristic audit: heuristic audit service, €1,200-€3,500. For usability testing: usability testing service, €2,500-€6,000 depending on scope (the table above shows a standard 5-user study). Combined engagement (both, sequenced): we scope on the brief call.
Need this kind of work done?
For an audit, get in touch — free 30-minute brief call, written scope within a working day. For a full design + build engagement, our sister studio is digitaldesign.ie.