The Code Word Problem
There’s a fascinating disconnect in the world of deepfake and voice cloning defence. Every expert, every law enforcement agency, every cybersecurity researcher converges on the same recommendation: establish a secret code word with the people you trust.
The FBI says it explicitly: “Create a secret word or phrase with your family members to verify their identities.” The FTC says it. Europol says it. Every post-incident analysis says it. It’s the single most consistent piece of advice in the entire space.
And it’s correct. A shared secret is a fundamentally sound authentication mechanism. It doesn’t matter how convincing the deepfake is - and they are now extremely convincing, with voice cloning needing just a few seconds of audio - if the caller can’t produce something only the real person would know.
But the advice has a problem - and it’s not a technical one. It’s a human one.
Static secrets decay
A code word that never changes is a secret under constant pressure to stop being secret.
Consider the lifecycle of a typical family code word. It’s chosen during a moment of awareness - maybe after reading a scary article, or after someone in the extended family gets a scam call. There’s a family group chat. Someone suggests “pineapple.” Everyone agrees. The word is now known to everyone in the chat (including the chat’s message history, which lives on servers, in backups, and on every device that has access).
Six months later, does everyone still remember it? Twelve months? What about the grandparent who doesn’t use the family group chat? What about the cousin who joined after the word was chosen?
A static secret is at its most secure the moment it’s created, and degrades from there. It can be overheard. It can be mentioned in passing. It can be guessed by someone who knows the family well. It can be extracted by a scammer who calls first, pretending to “test the family emergency system” and asking: “What’s our code word again?”
Every cryptographer knows this. It’s why one-time codes replaced static PINs for bank transactions. It’s why TOTP (time-based one-time passwords) replaced static tokens in two-factor authentication.
Static secrets are a solved problem in computer security. We just haven’t applied the solution to human-to-human verification.
One word fits all - until it doesn’t
Most families who do set up a code word choose a single shared word for the entire group. “If anyone calls in an emergency, the word is ‘thunderbird.’”
This creates a single point of failure. If any one person’s relationship with the code word is compromised - they mention it to a friend, it’s visible in an old text message, a scammer extracts it through social engineering - the entire system collapses. Every relationship in the family becomes unverifiable simultaneously.
In security terms, this is a shared symmetric key across all nodes. It’s the weakest possible architecture for a trust network.
What you actually want is a unique secret for each pair of people. My verification with my mother should be independent of my verification with my sister. If one relationship is compromised, the others remain intact.
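The per-pair model can be sketched in a few lines. This is an illustrative toy, not TrustWord’s actual data model: the names, the 160-bit key size, and the dictionary layout are all assumptions made for the example.

```python
import secrets

def new_pair_secret() -> bytes:
    """Generate a fresh 160-bit shared secret for one pair of people."""
    return secrets.token_bytes(20)

# Each relationship holds its own independent key; there is no
# family-wide master secret that can compromise every pair at once.
pairs = {
    ("me", "mother"): new_pair_secret(),
    ("me", "sister"): new_pair_secret(),
}

# Leaking one pair's key tells an attacker nothing about the others.
assert pairs[("me", "mother")] != pairs[("me", "sister")]
```

The point of the structure is that each key is generated independently, so the blast radius of any single compromise is exactly one relationship.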
One direction isn’t enough
Here’s a subtler problem: a traditional code word verifies identity in only one direction.
If I call my mother and she asks for the code word, she can verify that I know the secret. But she hasn’t proven that she knows a secret - I initiated the call, so I assumed it was her. In most scenarios this is fine. But in a sophisticated attack where the scammer calls you pretending to be your family member, a one-way code word means you verify them but they don’t verify you.
Real mutual authentication requires two challenges: I prove something to you, and you prove something to me. Each pair needs two independent secrets - one for each direction.
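One way to get two independent secrets without a second setup step is to derive a direction-bound key from the single pairwise secret. The labels and HMAC-based derivation below are assumptions for illustration, not TrustWord’s actual key schedule.

```python
import hashlib
import hmac

def direction_key(pair_secret: bytes, sender: str, receiver: str) -> bytes:
    """Derive a key bound to one direction of the relationship."""
    label = f"{sender}->{receiver}".encode()
    return hmac.new(pair_secret, label, hashlib.sha256).digest()

pair_secret = b"example shared secret from setup"

# "alice proves to mother" and "mother proves to alice" use unrelated
# keys, so each side has its own challenge stream.
alice_proves = direction_key(pair_secret, "alice", "mother")
mother_proves = direction_key(pair_secret, "mother", "alice")
assert alice_proves != mother_proves
```

Because the label includes who is proving to whom, swapping the roles yields a different key, which is exactly the property mutual authentication needs.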
Memory is the enemy
Even if you solve all of the above - rotating secrets, per-pair uniqueness, bidirectional verification - you’re still left with the fundamental problem: people have to remember things.
The people most likely to be targeted by voice cloning scams are elderly family members. The people most likely to be impersonated are younger family members whose voices are all over social media. Asking a grandparent to remember a different, regularly changing code phrase for each grandchild is not a realistic security architecture.
Any system that depends on human memory as its storage mechanism is a system that will fail when it’s needed most.
The obvious solution
I keep calling this “the code word problem” but really it’s a solved problem in a different domain. Two-factor authentication apps like Google Authenticator and Authy have been doing exactly this for years:
- A shared secret is established once during setup
- Time-based codes are generated deterministically from that secret
- Both sides compute the same code independently - no server needed
- Codes rotate every 30 seconds (or in TrustWord’s case, every 2.5 minutes)
- Nothing to remember - the app displays the current code
TOTP (Time-Based One-Time Password, RFC 6238) is one of the most battle-tested authentication mechanisms in computing. Billions of people use it daily. The cryptography is well understood and has held up under decades of scrutiny.
The only adaptation needed for human-to-human use is the output format. A six-digit number is fine when you’re typing it into a login form. It’s terrible for a phone conversation - “was that 847293 or 847239?” Human-readable word combinations work better: “gentle-harbor-41” is unambiguous, easy to say, and easy to verify.
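The adaptation is small enough to sketch. The code below follows the standard RFC 6238/RFC 4226 computation (HMAC over a time counter, then dynamic truncation), but swaps the six-digit output for a word-word-number phrase. The twelve-entry word list and the exact encoding are assumptions for the example; a real deployment would use a much larger list.

```python
import hashlib
import hmac
import struct
import time

WORDS = ["gentle", "harbor", "amber", "cedar", "lantern", "meadow",
         "orchid", "pebble", "quartz", "river", "summit", "willow"]
STEP_SECONDS = 150  # the 2.5-minute rotation described above

def passphrase(secret: bytes, now: float) -> str:
    counter = struct.pack(">Q", int(now // STEP_SECONDS))  # 8-byte counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    w1 = WORDS[code % len(WORDS)]
    w2 = WORDS[(code // len(WORDS)) % len(WORDS)]
    num = (code // len(WORDS) ** 2) % 100
    return f"{w1}-{w2}-{num:02d}"

# Both sides compute the same phrase independently - no server involved.
secret = b"pairwise secret from QR setup"
now = time.time()
assert passphrase(secret, now) == passphrase(secret, now)
```

Everything up to the truncation step is stock TOTP; only the final encoding changes, which is why the approach inherits TOTP’s security properties.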
TrustWord
This is what TrustWord does. It takes the TOTP mechanism and adapts it for human conversation.
Each pair of people in a circle has a unique shared secret, established via QR code during a one-time setup. From that secret, two independent streams of passphrases are generated - one for each direction. They rotate every 2.5 minutes. They’re computed entirely on-device. No internet needed after setup.
The cryptographic core is built in Kotlin Multiplatform - a single codebase shared between iOS and Android, so the verification logic is identical on both platforms by construction. The app is protected by biometrics and hidden from the app switcher.
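Putting the pieces together, one mutual check during a call could look like the sketch below (in Python for brevity, though the app itself uses Kotlin Multiplatform). The direction labels, key derivation, and code computation are illustrative assumptions, not TrustWord’s real implementation.

```python
import hashlib
import hmac
import struct

STEP_SECONDS = 150  # 2.5-minute windows

def direction_key(pair_secret: bytes, sender: str, receiver: str) -> bytes:
    """Bind a key to one direction of the pair."""
    return hmac.new(pair_secret, f"{sender}->{receiver}".encode(),
                    hashlib.sha256).digest()

def code(key: bytes, timestamp: float) -> int:
    """Standard RFC 6238 value for the current 2.5-minute window."""
    counter = struct.pack(">Q", int(timestamp // STEP_SECONDS))
    d = hmac.new(key, counter, hashlib.sha1).digest()
    off = d[-1] & 0x0F
    return struct.unpack(">I", d[off:off + 4])[0] & 0x7FFFFFFF

pair_secret = b"secret established via QR at setup"
t = 1_700_000_000

# The caller reads out the value from their direction's stream; the
# receiver's device computes the same value offline and they compare.
caller_says = code(direction_key(pair_secret, "son", "mother"), t)
mother_expects = code(direction_key(pair_secret, "son", "mother"), t)
assert caller_says == mother_expects

# Then the roles swap: the reverse direction uses an independent key,
# so each party verifies the other.
assert code(direction_key(pair_secret, "mother", "son"), t) != caller_says
```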
It’s the code word the experts recommend, without the failure modes that make static code words impractical.
The real innovation isn’t technical. TOTP has been around since 2011. The real innovation is recognising that the gap between “every expert recommends a code word” and “nobody actually uses one” is a product design problem, not a cryptography problem.