March 6, 2024

The Rabbit R1 chronicles.

Chapter 2: Cheating, stinking, lying LLMs.

Before we talk about the innovative LAM (Large Action Model) on Rabbit R1, we need to understand where AI stands today and how Rabbit fits into that.

What’s worried me most about AI in general is that the Large Language Model—which most AI to date has been built on—can be ridiculously flawed. We’ve already seen the horrible mistakes it can make in giving false information, offering bad advice or interacting with humans. Like a chatbot that swears at customers. Or ChatGPT medical advice that’s only accurate 17% of the time. Or Copilot telling someone it’s okay to go ahead and commit suicide. The problem is that AI is advancing stunningly fast, and companies never anticipated how difficult putting guardrails in place would be. So far AI-policing-AI seems to be less accurate than the AI itself.

When she was told something that smelled fishy, my grandmother always said: Consider the source. And LLMs have yet to prove themselves as accurate sources. Even if one is right 99% of the time, that 1% error rate could have serious consequences. AI companies seem to have a surprisingly small human component in AI learning and truth-monitoring, so LLMs can quickly turn into rumor mills. In the mind of the public, a lie becomes true the more often it’s repeated. And since AI now seems to be acquiring its facts from other AI, the dark side is that we’ve just automated the rumor mill.

In order for AI to evolve into a good robot, the human element must always be part of that equation. If we-the-people aren’t fact-checking, evaluating ethics and tempering it all with common sense, we can’t expect AI to ever learn human decency. If we humans don’t care about Asimov’s Three Laws of Robotics, we can’t expect the technology to ever obey them on its own.

It remains to be seen how Rabbit R1 will play with the problem children and bad seeds of AI. Indications are that Rabbit will be more of a messenger for existing AI than a source of its own knowledge. It can work with many LLMs, and its accuracy will only be as good as the LLM itself. So if you use Gemini, Grok or ChatGPT through Rabbit and they tell you something you know isn’t true, Rabbit is likely to just relay that false information to you (but in a cute little bunny voice).

But there’s a huge opportunity here for Rabbit to become an AI voice-of-reason. Maybe it’s as simple as working with multiple AIs, having Rabbit quote them all and then letting the user decide what’s valid. (The problem there is that we’re using AI to make things more convenient in the first place, and having to wait for more answers and filter them ourselves would just not be appealing.)

Or maybe (since we can train our Rabbit) we can train it to fact-check the answers it presents to us as well. How? That remains to be seen once it’s in our hands. Fact-checking would involve at least two actions for Rabbit: getting an answer from an LLM, then verifying it through some other service, say snopes.com. Some have commented that the existing Rabbit demos seem to focus on quickly carrying out just one action, so it’s not known how well Rabbit can chain two or more actions from a single command. And even if Rabbit is up to the task of policing AI, when something smells fishy, you-the-human must be the last line of common sense and common decency.
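Just to make the idea concrete, here’s a minimal sketch of what that “answer, then verify” chain could look like. This is purely illustrative: Rabbit hasn’t published an API for this, and both functions below are hypothetical placeholders, not real Rabbit, LLM or snopes.com calls.

```python
# Hypothetical sketch of a two-action "answer, then verify" flow.
# Neither function below is a real API; both are placeholders.

def ask_llm(question: str) -> str:
    """Action 1: get an answer from whichever LLM the device is routed to."""
    # Placeholder: a real implementation would call the chosen LLM here.
    return "The Great Wall of China is visible from the Moon."

def verify_claim(claim: str) -> bool:
    """Action 2: check the claim against an independent source."""
    # Placeholder: a real implementation would query a second,
    # independent fact-checking service here.
    known_false = {"The Great Wall of China is visible from the Moon."}
    return claim not in known_false

def answer_with_check(question: str) -> str:
    answer = ask_llm(question)
    if verify_claim(answer):
        return answer
    return f"{answer} (Heads up: I couldn't verify this one.)"

print(answer_with_check("Can you see the Great Wall of China from the Moon?"))
```

The point isn’t the code itself, it’s that the second action only works if it draws on a source independent of the first. If both steps lean on the same pool of AI-generated “facts,” we’re right back at the automated rumor mill.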

In a 1958 interview with George Plimpton, Hemingway made a statement about good writing that applies to AI today, especially since we’ve already handed much non-fiction writing over to it: “The essential gift for a good writer is a built-in, shockproof, shit detector.” Now if Rabbit R1 had one of those…

Read Chapter 3: Virtual Assistance, The Next Generation.

Check out the entire Chronicles.


I have no affiliation with Rabbit Inc. I’m just an early adopter. If you want to support this journey into the Rabbit, buying me a coffee below helps keep the articles coming.


Why buy me a coffee? No third-party ads, no affiliate links, no tracking cookies. Just honest content. Thanks.




All content ©J. Kevin Wolfe