Voice AI · Intelligence Research

Wisdom
Illuminated

Teaching AI to understand the world — one real conversation at a time.

We believe voice will be the universal interface between humans and machines. But for AI to truly master it, it must learn in the real world — not a lab. We are a team of researchers running a large-scale real-world experiment to build voice AI that genuinely understands people — starting with the most human of interactions: taking a dinner order.

Two Beliefs Driving Everything We Do

These aren't product hypotheses. They are long-term bets on how intelligence and interaction will evolve.

Belief One

Voice Is the Most Natural Interface

Humans have always communicated by speaking — long before keyboards and touchscreens, there was conversation. Voice isn't a feature; it's the most natural way for a machine to meet people where they already are. As Satya Nadella said, “Human language is the new UI layer.”

But voice is only the medium. What makes a conversation feel good — or frustrating — is the intelligence, empathy, and theory of mind behind the words. That's why voice is where we start, and the AI reasoning layer is where we spend most of our effort.

Belief Two

Intelligence Is Earned Through Experience

The best AI won't emerge from static training data alone. David Silver and Richard Sutton wrote in Welcome to the Era of Experience (2025): “A new generation of agents will acquire superhuman capabilities by learning predominantly from experience.”

We take that seriously. Our agents learn by doing — in real environments, with real customers, handling the messy and unpredictable conversations that no benchmark can simulate.

Our Strategy

The Restaurant as a Living Laboratory

Restaurant ordering is the perfect starting point: high-frequency, low-stakes, and rich with the kind of human nuance — modifications, preferences, rushed conversations — that forces AI to truly listen and adapt.

Every call our voice agent handles is a data point in a larger experiment: can AI learn to navigate a real business transaction safely, with the consistency and care of a trained employee?

In Dev

Order Accuracy

We're actively measuring and improving accuracy in noisy, real-world restaurant environments. Our target: consistently above 95% before we scale.

70%

Patient Preference

In a peer-reviewed clinical study, patients preferred our AI voice calls over standard text and email outreach — evidence that voice is the right medium.

Restaurant Voice Agent

Our voice agent handles phone orders with multilingual support and natural conversation flow. Currently in active development and real-world testing.

In Development · Multilingual

Healthcare Outreach

Clinical AI for longitudinal patient tracking and surveys. Safety and compliance techniques from this domain feed directly back into our restaurant work.

HIPAA Compliant · IEEE Published

The Safety Feedback Loop

Our team's roots are in healthcare AI — one of the most demanding environments for safety and compliance. We apply that same rigor to the restaurant context, and breakthroughs in either domain reinforce the other.

1. Test in the Field

Deploy voice agents into real restaurant environments. Learn from real customers, real noise, and real edge cases — the things you can't simulate.

2. Measure and Improve

Every interaction is logged, reviewed, and fed back into training. We treat each call like a scientific observation — hypothesis, evidence, iteration.

3. Apply Across Domains

Techniques proven in the restaurant lab reinforce high-stakes domains like healthcare. A safe AI in a low-risk setting becomes a safer AI in a high-risk one.

We're Building This in the Open

We're a small team of researchers and engineers with a long-term thesis. If you share our curiosity about voice, AI safety, and the future of human-machine interaction — we'd love to talk.