SCHMIDT & THE EXTREME RISK AI POSES: QUESTIONS TO DISARM THE NEXT BIN LADEN & THE THREAT OF ROGUE STATES

Eric Schmidt, the former Chief Executive of Google, expressed concerns about the misuse of Artificial Intelligence (AI) in a recent BBC interview, saying that the ‘real fears that I have are not the ones that most people talk about AI—I talk about extreme risk’.
Schmidt’s fears are well-founded: rogue actors, whether individuals or states, could adopt and misuse AI for destructive ends, such as creating biological weapons.
The transformation AI brings to industries, economies, and societies is undeniable. Yet AI also presents "extreme risk," particularly if misused by rogue states or malicious actors. From bioweapon development to cyber warfare, the possible misuse of AI demands urgent discussion and action.
The Threat Landscape: AI in the Wrong Hands
Of primary concern is the potential for AI to be used in catastrophic ways. Schmidt specifically invoked Osama Bin Laden’s orchestration of the 9/11 attacks to illustrate a scenario in which an individual or group with malevolent intent exploits AI to cause harm. What could this look like in the modern era?
Biological Weapons: Could AI be used to design new pathogens or optimize the spread of existing ones?
Cyber Warfare: What happens if adversarial states leverage AI for large-scale cyber-attacks on critical infrastructure?
Autonomous Weapons: Should there be an international agreement restricting AI-driven warfare?
Balancing Regulation and Innovation
The main takeaway from Schmidt’s interview is his advocacy for government oversight paired with a caution against overregulation. Striking this balance is crucial: too much regulation could stifle progress, while too little could lead to unchecked risks. So, what questions must we answer to balance regulation and innovation?
How much regulation is necessary to ensure safety without impeding innovation?
Should AI governance be handled nationally or through international cooperation?
Are current regulatory approaches, such as U.S. microchip export controls, sufficient?
The Role of Private Companies
Schmidt acknowledges that AI's future largely rests in the hands of private companies. This raises questions about accountability, transparency, and ethical responsibility:
How can governments ensure AI companies prioritize safety over profit?
Should there be mandatory ethical AI guidelines for corporations?
What mechanisms can ensure companies collaborate with regulators while maintaining competitive advantages?
How much control over AI development should founders, investors, and venture capitalists have?
The Global Perspective: Diverging Approaches
As the AI Action Summit showed, not all nations agree on AI governance. The U.S. and the UK recently declined to sign the summit’s AI declaration, arguing that such agreements could hinder technological progress. Schmidt warns that excessive regulation in Europe might prevent AI breakthroughs there.
Should countries adopt a unified global AI governance framework?
What lessons can be learned from past technological revolutions, such as electricity and the internet, in crafting AI policies?
Conclusion: A Call for Thoughtful AI Governance
Eric Schmidt’s warnings should serve as a wake-up call. While AI holds immense promise, it also introduces existential risks that demand proactive mitigation. Governments, corporations, and civil society must work together to ensure that AI’s advancement remains safe, ethical, and beneficial to humanity.
What steps should be taken today to prevent AI from being misused in the future? The conversation is urgent, and the stakes have never been higher.