How we validate our custom AI hardware concept using iPads

Wednesday, 11 February 2026 · Adam Juhasz

The Juno prototype that sits on a family's kitchen counter right now is an iPad, a USB-C speakerphone, and a power adapter. That's the whole thing. Three items in a padded mailer. The family plugs it in, opens an app, and they have an always-listening AI assistant on their counter by dinner.

The inference? That runs on a rented A10 GPU at Lambda, hundreds of miles from their kitchen.

This will bother you if you've read anything else we've written, because Juno's entire promise is that nothing leaves your home. No cloud. Local inference. Privacy as architecture, not policy. And here I am telling you the prototype streams audio to a data center.

But the question we needed to answer first wasn't "can we run inference locally?" We have a bench-top prototype that does exactly that. The question was: will a family actually use this thing every day? Will they talk to it? Will they check it in the morning? Will it become part of the kitchen the way a coffee maker is part of the kitchen? You can't answer that question with a dev board on a lab bench. You answer it by putting something in someone's home and seeing what happens over weeks.

The other option was worse

We had two paths to get prototypes into kitchens.

Path A: ship an Nvidia Jetson AGX Orin dev kit. It's a full Linux system with a sizable GPU, and it runs our full inference stack locally. Exactly what we want for the final product. It also costs $2,000 per unit, needs an external LCD wired up to it, requires a separate microphone, and looks like a science fair project that lost. I love the Jetson hardware industrial design. It just doesn't belong on someone else's counter next to their fruit bowl.

Path B: ship an iPad. Families already know what an iPad is. Kids already know how to use it. The screen is good. It has speakers and a microphone. We found out quickly that the iPad's built-in mic was not optimized for listening to an entire room: it worked fine for people standing right in front of it but was terrible for off-axis speakers. That's why we added the external Anker speakerphone, which is designed to pick up a whole room. The kit sits on a counter without anyone asking "what is that." The total hardware cost per prototype unit is under $500 (refurbished iPad, Anker PowerConf S330 speakerphone, power adapter). I can assemble a kit in an afternoon and have it on someone's counter the next day.

The Jetson approach validates the wrong thing at the wrong time. Yes, it proves local inference works. We already know local inference works (see below). What we don't know is whether the product works. Whether people will actually change their behavior if they have an always-on ambient assistant. That's the existential question for the company, and you answer it with whatever gets Juno into homes fastest.

The Juno prototype iPad kit sitting on a kitchen counter with the Anker speakerphone
The prototype kit: an iPad, an Anker speakerphone, and a power adapter.

How the kit actually works

The iPad runs a native iOS app. Its only real job is capturing audio from the Anker speakerphone and streaming it to the cloud. We tried doing this in the browser through a progressive web app first, and it was a mess. Safari kills background audio streams. WebRTC keeps reconnecting. There's a permission popup every single time. The browser approach might work for a one-off demo, but it does not work for something that needs to run 24 hours a day without anyone touching it. The native app is tiny. It captures audio, streams it over a websocket, displays a webview, and stays alive in the foreground. That's it.
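The shape of that capture-and-stream loop is simple enough to sketch. Here it is in Python for illustration (the real app is native iOS, and the frame size, sample rate, and `mic`/`ws` interfaces below are assumptions, not our actual code):

```python
# Sketch of the capture -> stream loop. Assumed format: 16 kHz mono
# 16-bit PCM, sent to the server in fixed-duration websocket frames.
import asyncio

CHUNK_MS = 100          # assumed duration of one websocket message
SAMPLE_RATE = 16_000    # 16 kHz mono, a common ASR input rate
BYTES_PER_SAMPLE = 2    # 16-bit samples

def chunk_pcm(pcm: bytes) -> list[bytes]:
    """Split a raw PCM buffer into fixed-duration frames."""
    chunk_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * CHUNK_MS // 1000
    return [pcm[i:i + chunk_bytes] for i in range(0, len(pcm), chunk_bytes)]

async def stream_audio(ws, mic):
    """Forward speakerphone audio to the server, one frame at a time."""
    while True:
        pcm = await mic.read()      # raw bytes from the speakerphone
        for frame in chunk_pcm(pcm):
            await ws.send(frame)    # ASR runs on the far end
```

The point is how little the client does: read bytes, cut them into frames, send them. Everything interesting happens on the server.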

Everything the family sees on screen is a web app. The memory list, the shopping list, the calendar, the conversation interface. All of it loads in a web view pointed at our server. This means when we push a UI change at 11pm, every prototype in every kitchen has the new version immediately. No App Store review. No re-flashing a device. No asking a family to do anything. The feedback loop from "I wish it did this" to "it now does" is measured in hours, not weeks.

Same models, different rack

The important thing about the cloud prototype isn't that it uses cloud GPUs. It's that it runs the exact same models that will ship on the local hardware. Not a different model. Not a bigger model. Not a cloud API standing in for something smaller.

Keeping ourselves honest with a Jetson or two

Cloud inference can tell you whether a model is capable of doing what you need. Can we transcribe a noisy kitchen with an acceptable WER? Can we extract a dentist appointment from a sentence? (Word Error Rate measures what percentage of words are transcribed incorrectly; a WER of 10% means 1 in 10 words is wrong.) Can our TTS model read back a shopping list without sounding like a robot? The A10 on Lambda answers all of that. What it can't tell you is whether any of it will run fast enough on hardware that fits in a home.
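For the curious, WER is just word-level edit distance divided by the reference length. A toy implementation (not our evaluation harness):

```python
# Word Error Rate: minimum insertions + deletions + substitutions
# needed to turn the hypothesis into the reference, divided by the
# number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("book a dentist appointment for tuesday",
          "book the dentist appointment tuesday"))
# one substitution ("a" -> "the") plus one deletion ("for")
# over six reference words: 2/6 ~= 0.33
```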

That's what the Jetson is for.

The Jetson AGX Orin bench-top prototype running the full local inference stack
The Jetson bench-top prototype running the full stack locally.

We have a bench-top prototype built around the Nvidia Jetson AGX Orin running the full stack locally: ASR, LLM inference, TTS, embedding. Same models, same pipeline, no network round-trip. It is not pretty and does not travel well. But it keeps us honest about what local inference actually feels like.
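Schematically, the pipeline is the same on both the cloud box and the Jetson; only what backs each stage differs. A sketch with toy stand-ins (the real stages are the ASR, LLM, and TTS models, not lambdas):

```python
# The pipeline in schematic form: audio in, transcript, response,
# audio out. Each stage here is a placeholder callable.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pipeline:
    asr: Callable[[bytes], str]   # speech -> text
    llm: Callable[[str], str]     # text -> response text
    tts: Callable[[str], bytes]   # response text -> speech

    def run(self, audio_in: bytes) -> bytes:
        transcript = self.asr(audio_in)
        reply = self.llm(transcript)
        return self.tts(reply)

# Toy stand-ins so the end-to-end shape is visible.
demo = Pipeline(
    asr=lambda audio: "add milk to the shopping list",
    llm=lambda text: f"Done: {text}",
    tts=lambda text: text.encode(),
)
print(demo.run(b"...pcm audio...").decode())
# -> Done: add milk to the shopping list
```

Swapping the cloud backend for the local one means swapping what sits behind those three callables; the pipeline itself does not change. That's what lets the iPad kit and the Jetson bench-top stay honest proxies for each other.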

The two prototypes test different things. The iPad prototype answers "do families want this?" The Jetson bench-top answers "can we actually build it?" You need both. A product that people love but can't run locally is a cloud product with a privacy problem. A product that runs locally but nobody uses is an engineering exercise. We're trying to avoid both failure modes at the same time.

What we're learning

A fun discovery is that placement matters more than we expected. We assumed "kitchen counter" and left it at that. But which part of the counter? One family put it next to the stove, and the exhaust fan drowned out half their conversations. Another put it on the breakfast bar, and it picked up TV audio from the living room constantly. Today we ask users to reposition the device, but we also now know we'll need to solve voice isolation before shipping.

The privacy asterisk

We should be direct about this. Juno advertises that your data never leaves your home. The prototypes send audio to a cloud GPU. Those two statements are both true, and they are not in conflict, because the prototypes are not the product.

Every family running a cloud prototype knows how it works. We sit down with them before deployment and explain the architecture: the iPad streams audio to our server, the server runs Juno, audio comes back. We're honest about what data we have and how we control access to it.