
The AI Doc Is Not the Story. The Empty Theater Is
This article examines what the release of The AI Doc: Or How I Became an Apocaloptimist reveals, not just about artificial intelligence, but about public awareness, leadership readiness, and the widening gap between urgency and attention.
In my previous articles, I made a clear argument.
First, that AI ethics is not about the machines, it is about us.
Then, that it was time to move that conversation out of theory and into real-world experience.
This is what happened when I did.
The First Signal Was Not the Film
I attended a Saturday matinee on March 28, the day after its release.
The theater was almost empty.
Two other people.
That was it.
At first, I considered the usual explanations. Timing. Location. Competing priorities.
But the broader data tells a more concerning story. Opening weekend figures indicate approximately 40,000 tickets sold across 786 theaters, averaging only a handful of viewers per showing¹.
As someone who has spent four decades studying technology adoption and its societal impact, my reaction was immediate and visceral.
What is going on?
Is it overconfidence?
Do people believe they already understand AI?
Or is it something more dangerous, a level of disengagement that suggests we are not even paying attention to one of the most consequential shifts of our time?
Because the most important signal that day was not on the screen.
It was who was not in the room.
I Went Looking for Insight. I Found Silence
I stayed after the film intentionally.
My goal was simple. Talk to other viewers. Understand how the film landed.
One woman had brought her elderly mother, who used a wheelchair. I approached her and asked what she thought.
Her response was brief.
“I don’t know.”
And she left.
No reflection. No curiosity. No engagement.
That moment stayed with me.
Because it reinforced something I have observed repeatedly across decades of technological change.
Exposure does not equal understanding.
And understanding does not guarantee action.
What the Film Gets Right
To its credit, the film brings together a wide range of influential voices across the AI landscape.
From Eliezer Yudkowsky to Yoshua Bengio to Sam Altman and Dario Amodei, it presents sharply contrasting perspectives on risk, acceleration, and the future of intelligent systems.
I appreciated that.
It reflects the reality of the field.
There is no consensus.
Only competing models, incentives, and interpretations.
For viewers new to the conversation, the film serves as a useful introduction to the current landscape of AI discourse².
Where the Film Holds Back
This is where I struggled.
Not because I expected definitive answers.
But because I expected deeper clarity.
There was no meaningful synthesis.
No moment where the conversation advanced beyond competing viewpoints.
The narrative device, built around director Daniel Roher as a concerned, soon-to-be father trying to determine whether it is safe to bring a child into this world, constrained the depth of the dialogue.
It kept returning to a familiar question:
Are we going to be okay?
But it did not push far enough into the more important one:
What must we do differently?
When Intelligence Becomes Its Own Blind Spot
One pattern stood out clearly.
Highly intelligent individuals, many of them leaders in their field, were deeply engaged in theoretical constructs, often at the expense of practical clarity.
The conversations were sophisticated.
But at times, they felt disconnected from real-world decision-making.
This is not a criticism of intelligence.
It is a recognition of a common failure mode.
When complexity increases, people tend to retreat into models.
And in doing so, they can lose sight of what actually matters.
Judgment.
Responsibility.
Action.
The Real Risk Has Not Changed
If anything, the film reinforced what I already believe.
The primary risk is not AI acting independently.
It is humans misusing it, misunderstanding it, or deploying it without sufficient foresight³.
That is where history consistently points.
Technology does not fail on its own.
It fails through human decisions.
And those decisions are being made faster than most leaders are prepared to handle.
Why This Was Harder to Write Than Expected
I expected to walk out of this film with a clear position.
I did not.
And that is why this article took longer to write.
Because the most important takeaway was not cinematic.
It was behavioral.
The film raises important questions.
But the real story is whether people are willing to engage with those questions at all.
Right now, the answer appears to be:
Not at scale.
This Is the Conversation That Matters
We are not lacking information about AI.
We are lacking engagement.
We are lacking disciplined thinking.
We are lacking leadership readiness.
That is the gap that matters.
Because the future of AI will not be determined by what is possible.
It will be determined by how humans choose to respond.
And right now, too many are choosing not to.
FAQ
What is The AI Doc: Or How I Became an Apocaloptimist about?
The film presents a range of perspectives from leading AI thinkers on the risks and potential of artificial intelligence, focusing on long-term societal impact and human decision-making.
Is The AI Doc worth watching?
The film is valuable as an introduction to current AI debates, especially for those new to the topic. However, it does not provide definitive answers or a clear path forward, making it more of a conversation starter than a conclusion.
What is the main takeaway from The AI Doc?
The film reinforces that the greatest risks associated with AI are not purely technical, but human, including how AI is developed, deployed, and governed.
Why are so few people watching The AI Doc?
Early box-office data and firsthand observations suggest limited public engagement. This may reflect overconfidence, lack of awareness, or a broader disconnect between the importance of AI and public attention.
Who is the director of The AI Doc?
The film is directed by Daniel Roher, who also appears in the film as a central narrative voice exploring whether it is safe to bring a child into a future shaped by AI.
Why does leadership readiness matter in AI adoption?
AI introduces complex, high-stakes decisions that require clarity, judgment, and responsibility. Without leadership readiness, the risks of misusing or misunderstanding AI increase significantly.
References
1. Box Office Mojo. Domestic Weekend 13, March 27–29, 2026 Box Office Results. https://www.boxofficemojo.com/weekend/2026W13/
2. Center for Humane Technology. The AI Doc: Or How I Became an Apocaloptimist. https://centerforhumanetechnology.substack.com/p/the-ai-doc-or-how-i-became-an-apocaloptimist
3. Adriana Vela. Brain Science For The Soul. https://author.adrianavela.expert/brain-science-for-the-soul
4. Adriana Vela. AI Ethics Is Not About the Machines, It Is About Us. https://markettecnexus.com/post/ai-ethics-is-not-about-the-machines-it-is-about-us
5. Adriana Vela. I Wrote About AI Ethics. Now It Is Time to Experience It. https://markettecnexus.com/post/i-wrote-about-ai-ethics-now-it-is-time-to-experience-it
