A user emailed us this week asking for a refund.

“Your app doesn’t understand my pronunciation. I used Rosetta Stone before and never had to repeat phrases five or six times.”

My first instinct was to investigate. Maybe our speech recognition was broken. Maybe there was a bug.

So I pulled up her session data, watched her replays, and checked every error log.

Nothing was broken. The speech recognition transcribed her correctly. She could see exactly what the system heard. She could even edit the text before submitting.

Her pronunciation scores averaged 76 out of 100. Not bad for a beginner. But not great either.

The system was working exactly as designed. And that was the problem.

The dirty secret of language learning apps

Here’s something most language learning companies won’t tell you: the speech recognition in many popular apps is deliberately lenient.

Independent reviewers have tested Rosetta Stone’s TruAccent technology by deliberately mispronouncing words, and their attempts still passed. Say the wrong thing, get a green checkmark. It feels great.

Duolingo has the same issue. “Duolingo doesn’t understand me” is such a common complaint that there are entire guides dedicated to it. But the opposite problem is more common — it understands too much and approves pronunciation that wouldn’t pass in a real conversation.

This creates a fascinating dynamic: apps that are more accurate get more complaints than apps that are less accurate.

Users don’t blame themselves for bad pronunciation. They blame the app. Every time.

The tension nobody wants to talk about

When I started building Copycat Cafe, I had to make a choice.

Option A: Be lenient. Users feel successful. Retention goes up. They keep paying. But they never actually improve their pronunciation. They show up in Paris, order a croissant, and the waiter switches to English.

Option B: Be honest. Users sometimes get frustrated. Some ask for refunds. But the ones who push through actually learn to speak French. They order that croissant and the waiter responds in French.

We chose Option B.
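If you want the concrete version of that choice, it can come down to a single number: the score a learner has to hit before the app says "good enough." Here's a minimal sketch in Python. The cutoffs, messages, and function name are made up for illustration, not our actual scoring code:

```python
# Hypothetical sketch: how one pass threshold changes what the learner sees.
# The cutoff values and messages are illustrative, not Copycat Cafe's real ones.

LENIENT_CUTOFF = 50   # Option A: almost everything passes
HONEST_CUTOFF = 80    # Option B: closer to a real-conversation standard

def feedback(score: int, cutoff: int) -> str:
    """Return the message a learner sees for a given pronunciation score."""
    if score >= cutoff:
        return "Great job!"
    return "Close. Try that phrase again."

score = 76  # the beginner average from the session data above

print(feedback(score, LENIENT_CUTOFF))  # Option A approves it
print(feedback(score, HONEST_CUTOFF))   # Option B asks for another attempt
```

Same speech recognition, same score. The only difference between "your app is magic" and "your app doesn't understand me" is where you set that cutoff.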

Not because it’s better for business. It’s clearly not — at least in the short term.

But because I’ve been teaching French for 10 years, and I’ve seen what happens when learners get false confidence. They plateau. They stop improving. And eventually, they quit anyway because they realize they can’t actually speak the language.

What the data actually shows

After analyzing our user data, I found something interesting.

Users who push through the initial frustration — who keep practicing even when the system tells them their pronunciation needs work — consistently improve their scores over time.
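"Consistently improve" is a measurable claim. One simple way to check it is to fit a least-squares slope to each user's scores across sessions and see whether it's positive. Here's a hand-rolled sketch with made-up numbers, not our actual analysis pipeline:

```python
# Hypothetical sketch of the trend check: fit a simple least-squares slope
# to one user's session scores. The data here is invented for illustration.

def score_slope(scores: list[float]) -> float:
    """Least-squares slope of scores over session index (points per session)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A user who kept practicing despite rough early sessions:
persistent = [62, 68, 71, 70, 77, 81]
print(round(score_slope(persistent), 2))  # → 3.46 points per session
```

A positive slope across sessions is what "pushing through the frustration" looks like in the data.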

The system isn’t punishing them. It’s teaching them.

But I also found something uncomfortable: only six out of hundreds of users discovered they could switch to a simpler review mode that doesn’t require speaking at all.

The feature exists. It’s in the same settings menu they use to change audio speed or voice selection. But frustrated users don’t look for settings. They write angry emails.

What I learned from this

Not everything is under my control.

I can’t control whether a user blames the app or blames their pronunciation. I can’t control whether they explore the settings or go straight to the refund button.

What I can control is whether our tool actually helps people learn French.

And sometimes that means being the app that makes you repeat a phrase five times until you get it right, instead of the app that tells you everything sounds perfect when it doesn’t.

Is it comfortable? No.

Does it work? Yes.

And I’d rather build something that works than something that just feels good.