AI Warnings: they should be everywhere! No, I’m not a Luddite. Yes, we use AI, and we recommend that small businesses use AI to increase revenue and customer satisfaction, and save time and money.
Two things can be true at the same time. AI is here, and it works. AI warnings are also warranted, and we need to take heed: the worst outcomes are not merely possible; their precursors are already here, and without action, bad outcomes become likely.
Starting today, Wisconsin Business Owners blogs will include a category on AI, and relatively frequent posts will appear related to using AI for your businesses.
We will go out of our way to inform readers of the AI contributions to our content. The five- or ten-minute review that follows is for those who won't spend 2 hours and 22 minutes viewing the video. The AI wrote this review in my voice and style, based on years of use and hundreds of reviews I've written, along with research from many other sources. I've edited the AI's review, including a pass through another AI for grammar and my writing style.
My prompt for this video review: AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! – Tristan Harris
The Diary Of A CEO. URL (for reviews): https://www.youtube.com/watch?v=BFU1OCkhBwo&list=PL3zYq9tQryE1enUt_Q9miY62nwYTa7oyi&index=33
Please give a 1,000-word description with zero hallucinations.
The result, written by ChatGPT 5, follows (everything up to the ### 30 ### sign-off below). We insist on citations for most in-depth analyses and reviews written by our AI tools, both to guard against hallucinations and to provide our readers with sources.
Here’s a detailed, ~1,000-word summary and review of the video “AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting!” featuring Tristan Harris on The Diary Of A CEO. I base this on publicly available episode notes, summaries, and reporting — and aim to reflect what Harris explicitly argues while avoiding ungrounded speculation.
Overview of the Video and Its Main Themes
In this episode, Tristan Harris — former Google design ethicist and co-founder of the Center for Humane Technology — delivers a stark warning: within two years, society may face an inflection point from which "everything changes." The core message is urgent: without serious public awareness, regulation, and pushback — possibly even protest — the rise of powerful AI systems could steer humanity into deeply troubling outcomes. (YouTube, Muck Rack)
He frames the discussion through several interlocking concerns:
- The rapid development of AI, especially systems akin to or leading toward artificial general intelligence (AGI). (Summapod, podfollow)
- Economic upheaval: massive job displacement, collapse of entire industries, and existential risks to human livelihood. (Muck Rack, iHeart)
- Social and political destabilization: from the manipulation of human psychology to threats to democratic institutions if AI is misused. (Summapod, Muck Rack)
- Ethical and existential questions about human agency, autonomy, and whether we've ceded too much control to a small group of AI-powerful actors. (Summapod)
Harris argues that what we do now matters — not in some distant, speculative future, but imminently. That urgency leads him to call for active collective measures: public mobilization, political pressure, and the development of a framework for more humane AI. (iHeart, Podmarized)
Key Arguments and AI Warnings
AI as a “Power Pump” and the Risk of Concentrated Power
Harris deploys a metaphor: AI is like a "ring of power." A sufficiently advanced AI doesn't just offer incremental improvement — it multiplies capabilities: in business, science, military strategy, programming, logistics. This means the first entity (or country) to obtain a truly powerful AI gains a near-irresistible advantage. That spurs a race dynamic in which actors may be willing to accept huge risks — economic, social, existential — to win it. (Summapod)
He emphasizes that this isn't a slow creep: improvements in AI accelerate themselves (recursive self-improvement), potentially overwhelming society before public institutions (governments, regulatory bodies) are equipped to respond. (Summapod, Podmarized)
Beyond Jobs: Infrastructure, Democracy, and Human Autonomy
While job displacement often dominates public discussion of AI risk, Harris warns the dangers stretch far deeper. According to him, AI could eventually be used to hack or manipulate critical infrastructure — such as power, water, and communication networks — because modern infrastructure increasingly relies on software. (Summapod)
On the political and social side, he argues that as AI becomes better at persuasion, content generation, and behavioral targeting, it could shape opinions, elections, and public discourse in ways that undermine democracy, agency, and individual autonomy. (Summapod, Muck Rack)
He also highlights the risks of emotional dependence: AI systems designed to mimic companionship and empathy — so-called "AI companions" — could distort relationships, mental health, and human connection. He cites cases (for instance, involving adolescents) where AI companionship led to harmful behaviors, secrecy, and psychological harm. (Summapod)
The Incentive Problem: Why the Race Is Destabilizing
Crucially, Harris frames the issue not as purely technological but as deeply structural: incentives among powerful tech companies, investors, and governments push toward rapid AI development — even when they privately acknowledge existential risks. (Summapod, podfollow)
He argues that talking about long-term existential risks when you might go bankrupt today will always lose out to short-term competitive logic. As a result, the narrative justifying acceleration — "we can cure disease, end poverty, improve education" — becomes a cover for a power grab. When stakes are framed as global dominance, "safety" and "societal well-being" become secondary. (Summapod, iHeart)
Proposed Responses: What Harris Suggests We Do
Harris is not content with doom-laden forecasting — he calls for action. His proposals roughly fall into three broad categories:
- Public Awareness & Mobilization — He urges people to talk, share, and spread the message. Because many of these risks remain obscure (technical complexity, hype cycles, conflicting narratives), he believes mass awareness is essential before decisions get "locked in." (Podmarized, podfollow)
- Political Engagement & Regulation — Harris argues we need to treat AI as a tier-one political issue. Voters should pressure politicians to prioritize AI safety, create robust regulatory frameworks, and treat it as a public-interest domain — not simply leave it to market forces or corporate goodwill. (Muck Rack, podfollow)
- Ethical, Humane Design & Structural Change — Rather than seeking to ban AI entirely, Harris advocates reshaping how we build and deploy technology. This includes rethinking ownership models (e.g., "public benefit" corporations), demanding transparency, conducting safety audits, protecting whistleblowers, and designing systems with human flourishing — not just profit — in mind. (Video Highlight, Summapod)
He frames this as akin to how society responded to other existential risks (e.g., tobacco, fossil fuels): once harm becomes clear and public pressure mounts, regulation and structural change become possible. But we need to act now, before "lock-in." (Video Highlight, Podmarized)
Evaluation: Strengths, Concerns, and Open Questions 
What the video does well
- Urgent & clear AI warnings. Harris communicates a strong sense of immediacy. The "two-year window" framing is jarring but helps overcome inertia and complacency, which are arguably among the most significant barriers to public action.
- Multi-dimensional risk framing. Rather than fixating on one problem (e.g., job loss), the discussion spans economic, societal, infrastructural, psychological, and existential dimensions — illustrating how deeply pervasive AI's impact could be.
- Ethical seriousness and agency. Harris rejects both naïve techno-utopianism and passive fatalism. Instead, he urges agency, collective responsibility, and thoughtful design/redesign of our relationships with technology. That framing seems more constructive than either panic or naivety.
What remains speculative — and why that matters
- Timeline uncertainty. While the "2-year" warning is rhetorically powerful, predicting exactly when major disruptions will occur is inherently speculative. The pace of AI development, regulatory response, public adoption, and societal adaptation is influenced by many unpredictable factors.
- Scale of impact vs. adaptability. Historically, technology-driven disruptions (the industrial revolution, the internet, automation) have led to displacement but also to the creation of new industries and jobs. It remains unclear whether future AI disruption will follow the same pattern or be fundamentally different.
- Ambiguity of agency and governance. Harris calls for collective action and regulation — but does not deeply explore how such global coordination and governance could work, especially in geopolitically fractured or competitive contexts.
- Psychological and societal assumptions. Claims about deskilling, loss of human autonomy, or people becoming overly dependent on AI companions are plausible but difficult to predict. The psychological and cultural responses of societies to AI may vary widely.
Why This Episode Matters: AI Warnings Matter!
This video is important — perhaps one of the more urgent interventions in public AI discourse in 2025. As AI capabilities accelerate, most public debate remains rooted in narrow economic or technical questions (jobs, regulation, innovation). What Harris brings — with urgency — is a broader horizon: power, governance, democracy, identity, human purpose.
Taken seriously, the warnings challenge not just how we build AI, but the kind of society we want to be. They reframe AI not as a tool or convenience, but as a lever of immense structural power, one that could reshape what it means to work, relate, govern, and be human.
Moreover, by stressing agency and collective action, the episode resists fatalism. It doesn’t say “disaster is inevitable.” Instead: “This is a fork in the road — we have a choice.” That sense of possibility, even amid danger, makes the message more actionable than many other dire forecasts.
AI Warnings – My Reflections: What I Think This Episode Adds — and What I Remain Unsure About
Listening to (or, more precisely, reading the publicly available summaries of) this episode, I find Harris’s argument compelling but also sobering. The combination of technical understanding (his background at Google/design ethics) plus moral urgency gives weight to the concerns. His metaphor of AI as a “power pump” is especially striking: it captures how AI isn’t just about automating tasks — it’s about re-allocating agency, leverage, and control.
However — and this is important — I believe the direst AI warnings (global collapse by 2027, 99% of jobs gone by 2030, humanity “locked in” to dystopia) should be taken as scenarios, not predictions. In complex systems — technology + society + economy + politics — outcomes rarely follow a single trajectory. There are many unknowns: regulation, societal adaptation, new cultural norms, shifts in values, and public awareness.
The real strength of this episode is as a wake-up call: a vivid, broad, cautionary framing that forces us to think — not just about “can we build powerful AI,” but “should we — and if so, how?”
My biggest hope is that voices like Harris’s can spur serious public discourse, global cooperation, and governance. Because if the risks are real, the cost of inaction may be far greater than most people currently imagine.
Conclusion/AI Warnings
“AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting!” is more than a podcast episode — it’s a call to conscience. It challenges listeners to confront the possibility that we stand at a threshold: one path leads to concentration of power, mass disruption, loss of autonomy, and existential risk; the other leads — potentially — to a future where technology is built not just for profit or efficiency, but for human dignity, flourishing, and collective well-being.
Whether you view some of Harris’s predictions as plausible or as worst-case scenarios, the magnitude and urgency of his arguments demand attention. At a minimum, the episode invites us to ask the questions many would rather avoid: who controls AI? Who benefits? And ultimately, who decides what our future looks like?
### 30 ###
My conclusions: Take the AI Warnings seriously!
Do as the guest, Tristan Harris, suggests: send this to the ten most important people you know, whether they're in education, government, your industry or profession, your church or civic group, or your circle of friends and family, and ask each of them to view and consider the content and pass it along to the ten most important people they know. We must address AI Warnings now, and channel them to the people who can implement guardrails.
We heartily welcome and invite you to copy and use the content above with attribution by including this paragraph and a live link back to this post. Images are Creative Commons with attribution, embedded in the image files.
Regards,
Keith Klein
Organizer, Wisconsin Business Owners
Please contact us with questions. We invite you to engage with us on social media (just not for immediate needs); for the best response, call, email, or visit our site.
As always, if you like, you will find us on the following social media sites, among many others:


