I guess by now (Aug '24) some or even most of the hype around dedicated AI hardware has fizzled out. But it's not over.
The two big devices from earlier in the summer were Rabbit and Humane. Both had big launches and attracted lots of buzz. Rabbit took a lot of shots at Humane for not having a screen. But fast forward and nobody has either device.
The word this week was that people are returning their Humane devices faster than the company is selling them (oh, and returned units can't currently be refurbished).
I have no idea what is happening with Rabbit. I'm not sure I ever did.
Last week though, we got Friend, which I wrote about quickly. And this weekend I spent some time talking about two parts of this whole hardware x AI space. I've been trying to figure out what the idea is behind these dedicated hardware plays: what common ideas, if any, do they share? What is their theory of change, so to speak?
The best I can come up with is something like this...
Old world: software is made once, it's updated slowly over time with new features, 'learns' little about you
New world: software is made once, has the ability to 'learn' in real time and can be updated every minute
We've built 'AI' software that is now in...everything. And it can, in theory, learn about you and your behaviour. What you search, type, say, look at, etc. All kinds of inputs. And with the models and chips, this can all be very quick and very accurate. In a way that hasn't been possible before.
So the challenge for new companies and products is, how do you get a steady flow of inputs into your product?
And I wonder if the desire for a stream of inputs is what is driving the hardware stuff. Because if your product is "just an app", you can't have it listening to you all day, or filming what you see, etc. The battery will die. You use your phone for other things. So it's a non-starter.
But if I remove the screen, put in a cheap mic or a tiny camera, and convince you (importantly) not to keep it in your pocket, then I can pick up all kinds of data to train my model on that the phone and its apps can't capture.
The trick is, I have to convince you to carry or wear this thing.
Is that the deal? Give us data, we give you really useful products?
Hasn't that always been the deal? Browser history, likes, follows, etc. in exchange for free software? Why does this feel different?
There is something about the stated goals this time that feels weird.
If I am exchanging my app behaviour for targeted ads and a helpful app, that is one thing.
Listening to me so you can invent a 'friend'? That feels different.
There is a lot more to say about the pricing, business models, ux, etc. of these hardware plays, but for now, I remain super skeptical that they will get the customer adoption required to be a thing.
With one exception: I tried on the Meta Rayban glasses in April and thought they were pretty amazing. The audio without earphones, plus the ux of capturing (great quality) video, was awesome. They feel different to the products mentioned above: more utilitarian, and less surveillance-y.
I'm really interested to see what happens in that space. Might be lots of acqui-hires coming.