Voice UX
Voice UX might be my next career bet. Here's why.
Last Thursday at Canva's AI Vision, I had a moment: we're living through the same shift that let the iPhone kill BlackBerry, except this time, we're the ones clinging to screens.
We're moving from interface-first to conversation-first design.
The shift I'm seeing:
- No more "interface first" → conversation first
- UX writing becomes voice & tone design (character, pacing, how the AI speaks to you)
- Accessibility expands: we've centered vision (screen readers, contrast); as voice becomes primary, we need to equally center Deaf/HoH users.
Raw Studio calls it "minimizing effort between intent and response," and voice closes that gap. Not to replace screens, but for when our hands and eyes are busy: driving, caregiving, cooking. That's most of life.
- Cars: Tesla's iPad interface feels wrong because we're already voice-native in cars.
- Healthcare: clinicians dictate notes by voice, screens just confirm.
- Public services: need voice as the front door for millions of phone-only citizens.
What excites me? Voice-first public service journeys. Real pilots where voice does the work and screens are the receipt. Built with Deaf users from day one. Boring, safe, radically inclusive.
This means a new role is emerging: VUI (Voice User Interface) specialists, people who understand conversation design, error recovery, cross-device sync, and multimodal accessibility.
Hot take: If UX keeps optimizing screens for contexts where screens don't work, we're the BlackBerry crowd. The innovation now is in designing conversations, not layouts.
Sydney UX folks, anyone else feeling the pull toward voice? 👀

