Jeff Bezos on Generative AI: “They’re not inventions. They’re discoveries.”
What generative AI tells us about human consciousness
A new kind of technology is entering public consciousness, and it has some truly unique properties. It’s perhaps the first major technological revolution that won’t require a hardware change.
Its power will be felt through old devices. It’ll change phone calls as much as it will software programming. It will reach us by screen and through mail, in information queries and customer service complaints.
Its infrastructure is already laid.
Lex Fridman asked Jeff Bezos what he thought of generative AI, the large language models behind booming platforms like ChatGPT, Dall-E, Grok, and soon, many others.
Bezos: “The telescope was an invention. But looking through it at Jupiter, knowing it had moons, was a discovery. Large language models are much more like discoveries. We’re constantly getting surprised by their capabilities.”
It’s a fascinating assertion, and a novel answer to a real puzzle: why are we surprised by a technology we created?
Bezos: “It’s not an engineered object.”
What, exactly, are we discovering?
According to doomsayer technopriests and at least one former Google engineer, this is a step on the path to a new kind of consciousness: “general AI.” But when asked whether his platform was conscious, or close to it, OpenAI CEO Sam Altman, whose company built ChatGPT, said, “No, I don’t think so.”

Bezos: “We do know that humans are doing something different in part because we’re so power efficient. The human brain does remarkable things and it does it on about 20 watts of power… The AI techniques we use today use many kilowatts of power to do equivalent tasks.”
He uses the example of driving to illustrate the point. To learn the rules of the road, self-driving cars require billions of miles of driving experience. An average 16-year-old learns to drive in about 50 hours.
Despite generative AI’s incredible feats of passing both the Bar exam and the US Medical Licensing exam, this technology seems to learn and think very differently than the fat computers in our skulls do.
It also tends to lie.
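Taken at face value, the power gap Bezos describes is easy to quantify. The sketch below uses his 20-watt brain figure; the 5-kilowatt figure is an illustrative assumption standing in for “many kilowatts,” not a number from the interview.

```python
# Back-of-envelope comparison of the power figures from the interview.
# BRAIN_WATTS comes from Bezos's quote; AI_WATTS is an assumed
# stand-in for "many kilowatts" of accelerator hardware.
BRAIN_WATTS = 20
AI_WATTS = 5_000  # assumption: a modest bank of GPUs

ratio = AI_WATTS / BRAIN_WATTS
print(f"Assumed AI draw is {ratio:.0f}x the brain's 20 W budget")  # 250x
```

Even under conservative assumptions, the gap spans two or three orders of magnitude, which is exactly the point Bezos is making.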
Generative AI tends to lie.
A Duke University researcher found that ChatGPT would fabricate sources that sound scholarly but aren’t real, which set off a cascade of follow-on pieces about why ChatGPT “lies.”
But to say that ChatGPT “lies” is to ascribe conscious agency to an unconscious algorithm. We use the metaphor of consciousness to describe our interactions with generative AI because it’s useful shorthand, but there’s an important distinction to be made here.
In his conversation with Bezos, Fridman observes, “It seems large language models are very good at sounding like they’re saying a true thing, but they don’t require or often have a grounding in a mathematical truth. Basically, it’s a very good bullshitter.”
Bezos responds, “They need to be taught to say, ‘I don’t know,’ more often.”
What we’re discovering through the telescope of generative AI isn’t a new kind of consciousness. First and foremost, we’re discovering the immense power bound up in the huge amount of information we’ve amassed in the building of the Internet.
Discovery 1: Scaling up access to information allows technology to produce strikingly convincing, uniquely creative, mostly-accurate results.
What we’re discovering through the advent of large language models is that simply by scaling up the available information, technology can produce astounding results.
The creativity that many hoped would be the final bastion of human competence seems not to live in the brush of the painter or the keystrokes of the writer. If you’ve enjoyed the images in this article so far, they were created with Midjourney by prompting Quentin Blake-inspired illustrations.
Rather, what we’re finding is that as we create more and more complex tools, it’s our unique ability to find applications and leverage points for those tools that sets human consciousness apart.
It’s our ability to discern signal in an infinitely noisy universe.
Discovery 2: Our consciousness is unique in its ability to detect signal in an infinitely noisy universe.
When Dall-E fails to capture the subtle anatomy of a hand, it’s immediately obvious to our eye. Something feels off.
A subtle mistake made by an exceedingly convincing, authoritative-sounding tool with a 99% accuracy rate and access to nearly every piece of available data is detected by a 20-watt brain that’s powered by ham sandwiches.
Even the simple process of developing AI illustrations is one of discerning signal among noise. There’s a natural feedback loop in using a language-to-image tool, and it barely needs to be learned. We input a prompt, we get a result, and we adjust our prompt.
We’re discerning where there is meaning, or something close to it. We’re adjusting the framework by offering new prompts. And from what’s created, we discern which results are worth keeping.
As new generations of tools are placed in our hands, it’s our role as curators and strategists that will persist. While there are some strong emotions around technology with the potential to displace creative jobs, these tools are so immediately useful that only forcible intervention will slow them down. It’s hard to stop humans from building on the work of the past.
In a Discord conversation with the Midjourney community, its creator David Holz said that he wanted the tool to “be like a paintbrush. I want everyone to have it.”
If Michelangelo painted the Sistine Chapel with brushes, what will we create with generative AI?
Fascinating article, with lots to discuss, but I want to focus on a common misconception:
"An average 16-year-old learns to drive in about 50 hours."
This is a massive misunderstanding.
A human spends 16 years learning to navigate the world, then applies driving as a layer on top of that, adapting experience from walking, biking, and riding in vehicles to the new context. You can't put a baby in a coma for sixteen years, wake them up, and expect them to do anything, let alone learn to drive.
In that context, 50 hours is just long enough to learn the novel parts of driving, like how the pedals feel and when to check the mirrors.
With that in mind, the comparison overstates human learning efficiency by quite a bit.