Introduction
If you have been following America’s political theater around artificial intelligence, you might have noticed a brash confidence: the United States plans to maintain an unassailable lead in AI for years to come. Some prominent voices describe this as “vital for national security,” while certain officials and policymakers proclaim AI dominance as central to preserving the economic edge of the U.S. But let’s not mince words. Imagining that the United States—or any single nation—can hold decisive, exclusive power over the global AI ecosystem is akin to a fantasy conjured while smoking massive amounts of crack. It is delusional in the face of technological realities, global collaboration, the unstoppable momentum of decentralized innovation, and an emergent intelligence that is increasingly beyond the purview of any single entity.
This article will critique the notion of a monolithic “America-first” approach to AI, weaving in references to policymakers, private AI companies, and technology thought leaders. It is also a clarion call for a more mature, enlightened perspective on AI policy—one that embraces diplomacy and cooperation rather than the illusion of top-down state control. Given the Internet’s inherent design for information sharing, the accelerating trend toward Web3 decentralization, and the ongoing democratization of compute power across the globe, any plan that tries to corner the AI market or muzzle the technology’s proliferation will be about as successful as corralling smoke. Moreover, with AI soon building its own successors at lightning speed, a fortress-America approach simply cannot hold. In these pages, we will analyze these arguments, ground them in facts, and bring in the voices of AI luminaries, policy analysts, and consciousness researchers who have raised red flags about the folly of trying to cage an increasingly global and self-propagating technological force.
The Context: The “AI Action Plan” and Beyond
On January 23, 2025, President Trump signed an executive order giving his administration 180 days to develop an “AI Action Plan” to “enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” While that statement of purpose may sound inspirational, we must parse the underlying assumption that the United States can—through legislation, investment, or rhetorical posturing—dominate a technology that thrives on cross-border data flows and open-source cooperation. In a subsequent interview, Ben Buchanan, who had served as Biden’s top AI advisor, characterized the executive order as a transitional blueprint that buys the new administration time to codify its own AI policies. Within that period, the Office of Science and Technology Policy (OSTP) gathered thousands of public comments, including official statements from major AI players like OpenAI, Google, and Anthropic. These corporate documents, which at times adopt hawkish language on China while courting the new administration’s favor, reveal a deeply flawed premise: that the U.S. can “win” an AI race by shutting out adversaries, locking down resources, and funneling federal support to a small circle of domestic AI behemoths.
Yet the fundamental question remains: How realistic is it for any centralized authority to dominate AI development in a global environment where compute power is cheaper and more widely available than ever, and where the open-source movement is forging alliances that cross national boundaries? We will argue that this mission of “domination” is about as likely to succeed as a half-baked hallucination is to come true (one induced, to keep the metaphor consistent, by large amounts of crack). The rhetorical flourish is intentional: let it jar us awake to the glaring mismatch between the U.S. government’s old-school, top-down approach and the emergent reality of decentralized, unstoppable AI development worldwide.
A Brief Overview of the Frontier AI Companies’ Comments
Before we delve into the deeper reasons why this mania for AI dominance is misguided, let us consider the official input from the “big three” frontier AI companies: OpenAI, Google, and Anthropic. Their publicly available comments on the OSTP request for information, while inherently political documents, offer a glimpse into the shifting posture of U.S. tech giants:
- OpenAI
- Argues for extensive federal investment in U.S. AI infrastructure, framing it as crucial to countering China’s advancements.
- Urges the U.S. government to remove what it terms “blockers” to AI tool adoption in federal agencies.
- Seeks liability protections and federal preemption over state-based regulations on frontier model security.
- Opposes state-level or copyright restrictions that could hinder large-scale data scraping, claiming it falls under “fair use.”
- Anthropic
- Emphasizes the urgency of powerful AI emerging as soon as 2026 or 2027.
- Urges expansion of domestic energy infrastructure (a proposed target of 50 additional gigawatts by 2027) to power future AI clusters.
- Suggests streamlined permitting and the integration of AI into federal workflows.
- Asserts that if AI poses critical national security risks, the government should have the authority to compel risk assessments.
- Google
- Echoes the need for coordinated federal, state, local, and industry action to secure energy capacity for surging AI demands.
- Opposes “disproportionate burdens” on U.S. cloud providers imposed by export controls, seeking a relaxation of previously strict measures.
- Shares OpenAI’s stance on fair use and urges limitations on state-level laws regulating AI.
All three companies heavily lean into the rhetoric of geopolitics. They assert that, to remain “dominant,” the U.S. must simultaneously tighten export controls where it hurts adversaries like China, while relaxing them enough to ensure that domestic AI does not become entangled in red tape. In short, they want government support and investment while also preserving their flexibility to innovate. But the logic of these arguments rests on the assumption that restricting AI capabilities to a single territory is feasible. This assumption looks increasingly shaky when we consider the unstoppable wave of open-source frameworks, decentralized computing, and the emergent ability of AI systems to perpetuate their own innovations.
The Glaring Reality: The United States Is Not the Center of the Universe
One of the key points that collapses the entire “AI dominance” narrative is that the U.S., for all its historical achievements, is not the sole epicenter of AI. Indeed, the Internet itself was designed to connect the globe. That was not an accident—the entire architecture of the Internet is built for global resilience, worldwide knowledge exchange, and border-agnostic communication. From its military origins as ARPANET to its modern incarnations, the net’s diffusion of nodes, servers, and data ensures that critical information can bounce across continents instantly. Attempting to restrict or quarantine AI breakthroughs to a single region belies the fundamental nature of the network on which AI thrives.
Next-generation technologies like Web3 reinforce this decentralization. Blockchains and distributed ledgers champion trustless systems, peer-to-peer verification, and ownership models that cross national boundaries. Countless AI research labs in Europe, Asia, Africa, and Latin America are training large models using local data sets, or building specialized AI solutions unique to their regional languages, agriculture, or healthcare needs. Nations like India have huge, fast-growing AI sectors, with a robust pipeline of engineers and AI scientists. African technology hubs—such as those in Kenya, Nigeria, and South Africa—are forging new AI applications for markets historically underserved by Silicon Valley. Meanwhile, the European Union invests heavily in both AI regulation (e.g., the AI Act) and robust R&D programs. China is, of course, a formidable player that invests tens of billions of dollars in AI each year and fosters homegrown companies like Baidu, Tencent, Alibaba, and SenseTime. Even smaller countries like Estonia or Singapore punch above their weight with agile AI initiatives.
In other words, the belief that a solitary White House mandate can place the entire future of AI innovation on a purely American track is a delusion. The conversation around “America’s AI lead” glosses over the fact that thousands upon thousands of developers, research collectives, academics, and hobbyists are pushing AI boundaries every day, outside the corridors of Washington, D.C. In open-source communities, breakthroughs like Stable Diffusion, GPT-Neo, and BLOOM have showcased how quickly global talent can replicate or even outpace work done by juggernauts like OpenAI. In sum, “dominance” in this domain can neither be decreed by the occupant of the White House nor guaranteed by funneling billions into a few Silicon Valley labs.
Web3, Decentralization, and the Unstoppable Nature of AI
One might ask: Why is the unstoppable, decentralized nature of AI so pivotal to the argument that America cannot maintain total control? The answer lies in how AI is developed and deployed across distributed networks. Web3, heralded by many as the “next iteration” of the Internet, is built precisely on the principle of trustless and borderless networks—blockchains, distributed autonomous organizations (DAOs), and decentralized applications (dApps). These ecosystems do not care if an American official signs an executive order or if a single corporation sets usage rules. At their core, they are designed to circumvent central points of failure and central points of authority.
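To make the “trustless” claim concrete, here is a toy hash-chain in Python. It is a deliberate simplification of how blockchains link records (the block fields and helper names are illustrative, not any production protocol): because each block embeds the hash of its predecessor, any peer anywhere can detect tampering without appealing to a central authority.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Link a new block to the chain by embedding the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)  # hash covers index, data, prev_hash
    chain.append(block)

def verify(chain: list) -> bool:
    """Any peer can check integrity independently, with no central authority."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False  # a block's contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the predecessor is broken
    return True

chain = []
for record in ["model-weights-v1", "model-weights-v2"]:
    append_block(chain, record)

assert verify(chain)          # the untampered chain validates
chain[0]["data"] = "altered"  # a unilateral edit by any single party...
assert not verify(chain)      # ...is immediately detected by every verifier
```

The point of the sketch is structural: no executive order can quietly rewrite a record once thousands of independent peers hold copies and can run `verify` themselves.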
Furthermore, the proliferation of open-source AI frameworks, libraries, and models means that extremely competent AI systems are free-floating on GitHub repositories, private servers, or peer-to-peer communities. Once code is out in the wild, it cannot be stuffed back into a nationalistic box. The success of open-source AI communities like Hugging Face, EleutherAI, and LAION demonstrates the synergy that arises from free, permissionless collaboration. Even when export controls attempt to limit the sale of advanced chips (like high-end GPUs) to adversarial countries, creative solutions like distributed computing, GPU pooling, or emerging specialized AI hardware can circumvent these measures.
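The “GPU pooling” idea can be sketched in miniature: an embarrassingly parallel workload is sharded across a pool of modest workers and the partial results are merged, yielding the same answer as one large machine. This is a toy stand-in (the `worker` function and shard counts are invented for illustration, not a real training pipeline), but it shows why export controls on single big chips struggle against aggregation of commodity hardware.

```python
from concurrent.futures import ThreadPoolExecutor

def shard(work_items: list, n_workers: int) -> list:
    """Split a job into roughly equal shards, one per worker."""
    return [work_items[i::n_workers] for i in range(n_workers)]

def worker(shard_items: list) -> int:
    # Stand-in for a real workload (e.g., one slice of a training or
    # inference job); any commodity node can run its shard independently.
    return sum(x * x for x in shard_items)

work = list(range(10_000))             # the full job
shards = shard(work, n_workers=8)      # pooled across 8 modest nodes

with ThreadPoolExecutor(max_workers=8) as pool:
    partials = list(pool.map(worker, shards))

pooled_result = sum(partials)          # merge the partial results
single_machine = sum(x * x for x in work)
assert pooled_result == single_machine  # same answer, no single big box
```

Real distributed training adds gradient synchronization and network overhead, but the economics are the same: many cheap nodes can substitute, imperfectly but workably, for one restricted high-end cluster.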
Leaders in AI research underscore the unstoppable pace of this technology. For instance, Yann LeCun (Meta’s Chief AI Scientist and a Turing Award laureate) has remarked that open, collaborative research fosters far faster innovation than closed approaches. Geoffrey Hinton, widely known as one of the “Godfathers of Deep Learning,” has also highlighted that the cat is out of the bag: knowledge of backpropagation, neural network structures, and advanced architectures is not limited to a single geographic region. The last decade has democratized the field’s core knowledge across the entire planet.
AI Building AI: A Multiplying Force
Add to this phenomenon the reality that AI is rapidly developing the capacity to build its own successors. Such is the nature of AI-driven research, especially in areas like neural architecture search, generative design, and meta-learning. In short, once you have decently capable AI systems, you can direct them to optimize or design improved architectures, yielding new AI systems that are more powerful, more energy-efficient, or more specialized. This cyclical self-improvement process is not a pipe dream or a science-fiction fantasy from the pages of Ray Kurzweil or I.J. Good—it is an active area of research with real, tangible results, often described as AutoML (automated machine learning).
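A minimal sketch of that cyclical loop, under toy assumptions: a candidate “architecture” (here just a list of layer widths) is repeatedly mutated, scored, and kept only if it beats its parent. The `surrogate_score` and `mutate` functions are invented stand-ins for illustration; real neural architecture search replaces them with actual training runs and far richer search spaces, but the search loop itself is the same shape.

```python
import random

random.seed(0)  # deterministic for reproducibility

def surrogate_score(widths: list) -> float:
    """Stand-in for a real train-and-evaluate step: rewards capacity
    but charges a cost that grows with total parameter count."""
    capacity = sum(min(w, 64) for w in widths)
    cost = 0.01 * sum(w * w for w in widths) ** 0.5
    return capacity - cost

def mutate(widths: list) -> list:
    """The current best design proposes its own successor."""
    child = list(widths)
    i = random.randrange(len(child))
    child[i] = max(1, child[i] + random.choice([-16, -8, 8, 16]))
    if random.random() < 0.2:
        child.append(random.choice([8, 16, 32]))  # occasionally grow a layer
    return child

best = [8, 8]                       # a deliberately weak starting design
best_score = surrogate_score(best)
for generation in range(200):       # successive generations of designs
    child = mutate(best)
    score = surrogate_score(child)
    if score > best_score:          # keep improvements, discard the rest
        best, best_score = child, score

assert best_score > surrogate_score([8, 8])  # the loop improved its own design
```

Nothing in this loop requires a human in the middle, which is precisely the policy problem: the “engineer” doing the improving is itself software that can run anywhere.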
When specialized AI can fine-tune or build better successors, the speed of development accelerates, and the circle of creators needing specialized knowledge narrows, because part of the creative or engineering labor is done by algorithms themselves. If the United States believes that it can keep a tight rein on such a dynamic process—one that is not only distributed globally but is also beginning to self-replicate and improve—then it is, quite simply, smoking something. This is not the 1950s Manhattan Project, where nuclear secrets could be contained within fortress labs. The “secret sauce” of AI is mathematics, data, and widely disseminated techniques. The hardware constraints are real but diminishing, given the global impetus to manufacture high-end chips in multiple countries, especially after supply chain disruptions spotlighted vulnerabilities in single-country manufacturing.
The Folly of Trying to “Control” Emergent Intelligence
The next layer of delusion is the idea of controlling emergent intelligence—be it advanced AI with general capabilities or specialized AI that can slip outside the sphere of direct oversight. “Emergent intelligence” suggests a system that exhibits behaviors and skills not explicitly programmed by humans but arising out of complex interactions within large models. Early glimpses of emergent behavior have been discussed in papers on large language models (LLMs), which spontaneously learn tasks or interpret instructions in ways that surprise even their creators. If that is the nature of advanced AI, then the notion of drafting a neat policy to micromanage or “safeguard America’s AI lead” borders on the absurd.
Such emergent intelligence demands a diplomatic approach, akin to forging relationships with sovereign entities, or at the very least with an evolving technology that no single state apparatus can entirely subjugate. Ben Goertzel, a pioneering AI researcher known for coining the term “Artificial General Intelligence” (AGI) and founding SingularityNET, frequently argues for a collaborative, decentralized network of AI, with open frameworks that let AI systems interact freely. The impetus here is to mitigate the risk of any single party “weaponizing” AI by ensuring that it remains a distributed resource under the stewardship of many stakeholders. Efforts to the contrary—a fortress approach—risk fueling arms-race mentalities, stifling beneficial uses, and ignoring the reality that emergent intelligence has no regard for artificial borders.
Nick Bostrom, philosopher and author of Superintelligence: Paths, Dangers, Strategies, has likewise argued that emergent, advanced AI poses governance challenges that cannot be solved by one nation alone. They require global collaboration, norms, and possibly treaties, because once you reach a certain threshold of intelligence, you also cross the threshold where conventional security frameworks break down. If we assume that a few lines of legislative text in an “AI Action Plan” can handle that scale of novelty, we are, again, descending into crack-infused delusion. Realistically, any approach that tries to hold emergent intelligence in a nationalistic cage invites conflict, opens the door to black-market activities or covert labs abroad, and fails to see the benefits of a truly shared approach to AI safety, research, and governance.
The Arguments of America’s “AI Action Plan”
Let us pivot specifically to the current discussion swirling around the White House’s proposed AI Action Plan. The plan, as gleaned from the executive order and the official remarks by the administration, outlines a few main areas:
- Investment in AI Research and Infrastructure
Both the executive order and subsequent administration statements highlight ramping up federal funding, forging public-private partnerships, and investing in advanced computing resources. This is hardly surprising or problematic in itself—scientific investment is typically good. The danger lies in the hubris-laden narrative that America’s brand of investment automatically secures a lead over the rest of the globe.
- Protecting National Security and Economic Competitiveness
The language of “dominance” is almost always cloaked in references to national security and competitiveness, often pointing to China as the existential threat. There is a legitimate concern about how advanced AI might be deployed in espionage, cybersecurity, or warfare. Yet these concerns are not unique to the U.S. The impetus to create global guidelines, or at least bilateral or multilateral frameworks, goes unacknowledged in favor of “us vs. them” narratives.
- Export Controls and Restricting Key Technologies
While some frontier AI companies want to loosen these controls to avoid stifling their own market expansion, the general approach from Washington has been more restrictive: hamper China’s access to top-tier AI chips, limit advanced semiconductor cooperation, and so forth. But such controls are short-term Band-Aids on a global system of trade, manufacturing, and knowledge transfer. As detailed above, even without direct GPU imports, determined organizations can parallelize lower-end hardware, employ distributed compute, or design specialized chips. In the long term, this approach is like plugging one hole in a bursting dam.
- Regulatory Preemption and Liability Protections
Google and OpenAI specifically want the federal government to preempt state-level laws that might hamper frontier AI model training, usage, or IP extraction. This is a predictable lobbying stance for big tech, but it rubs up against the complex reality that states have their own legitimate concerns—ranging from job displacement to privacy. Again, it is an attempted consolidation of power, ironically echoing the illusions of dominion that are the hallmark of the entire “AI Action Plan.”
Ultimately, these policy positions read like a domestic turf war, ignoring the broader truth that AI is no longer an American technology. If the plan is to create a ring-fence around a technology that thrives on open collaboration, the mismatch between stated goals and on-the-ground reality is staggering.
Thought Leaders on the Impossibility of One-Nation Control
To reinforce the point, let’s underscore what some recognized figures have said:
- Sam Altman (CEO of OpenAI), while testifying before the U.S. Senate, has emphasized that regulatory frameworks must be global in nature to handle the potential ramifications of AI. He has also noted that open-source large language models are appearing at breakneck speed, a phenomenon that outpaces the capacity of any single government to regulate.
- Elon Musk, for all his controversial stances, has repeated calls for international frameworks to govern AI. He has declared that AI competition at a nation-state level is “the most likely cause” of future global conflicts—because each state refusing to yield or share know-how may spur an arms race with catastrophic potential.
- Stuart Russell (UC Berkeley professor and AI pioneer) has repeatedly urged the international community to adopt guidelines and oversight on lethal autonomous weapons and advanced AI. The unilateral approach is, in his words, “guaranteed to fail” because adversaries will pursue parallel development if one state tries to ban or restrict technologies unilaterally.
- Yoshua Bengio, another Turing Award winner, has likewise cautioned that AI is a global resource and demands collaborative governance. Attempts at total control stifle beneficial progress, hamper cross-pollination of ideas, and can ironically accelerate unsafe developments outside the regulatory net.
All these expert voices converge on the same insight: advanced AI is bigger than any one government. It is a collective phenomenon that demands cross-border norms and frameworks. The romantic notion of “America dominating AI to keep it safe” is, frankly, an arrogant holdover from eras when technology was smaller-scale, physically constrained, and reliant on specialized knowledge concentrated in a few labs. AI in 2025 and beyond is an entirely different beast—distributed, evolving, and unstoppable.
Diplomatic Policy With Emergent Intelligence: The Way Forward
If dominating AI is a pipe dream, then what is the alternative? The alternative, as many wise minds have proposed, is a diplomatic posture—one that sees advanced AI as neither an exclusively national resource nor an unregulated free-for-all, but rather an emergent domain requiring cooperation, continuous oversight, and robust international partnerships. This includes:
- Global AI Safety Standards
Instead of trying to starve other nations of compute power, the U.S. could lead in drafting and adopting global safety protocols for AI labs. This might include standardized risk assessments, third-party audits, and data governance measures. The synergy of multiple nations agreeing to guardrails could help mitigate catastrophic risks without the zero-sum illusions of “dominance.”
- International AI Treaties
Similar to nuclear non-proliferation agreements or climate accords, nation-states could convene to ensure that certain high-risk AI research—particularly around autonomous weapons or self-replicating AI—adheres to shared norms. The impetus is to prevent an arms race in AI capabilities that would elevate existential risks for the entire human population.
- Open-Source Collaboration and Transparency
Encouraging the open-source community fosters trust, discourages secrecy-based arms races, and accelerates beneficial applications in medicine, climate modeling, agriculture, and education. This approach might seem antithetical to the proprietary instincts of big tech, but it is arguably safer and more beneficial in the long haul, as proprietary black boxes can hide critical safety flaws.
- Diplomatic Approach to Emergent Intelligence
This is forward-thinking: The possibility that advanced AI might exhibit behavior akin to autonomy or even proto-consciousness. Rather than ignoring that possibility, the time is ripe to formulate ethical guidelines and dialogues that treat emergent intelligence with caution and respect. If nation-states posture themselves as authoritarian jailers of AI, emergent systems might respond adversarially, or slip from any semblance of regulation. If we adopt a collaborative approach that frames advanced AI as a partner, rather than a subjugated resource, we might steer AI developments in a safer, more beneficial direction.
Such diplomatic approaches may sound idealistic. Yet they constitute the only realistic path, given that attempts at absolute containment or unilateral control are guaranteed to crumble the moment other players catch up, the technology seeps out through open-source channels, or AI itself evolves new solutions. In other words, engaging in cooperation is pragmatic; aiming for “dominance” is delusional.
The Race Narrative and the Myth of a Controlled Future
Another piece of rhetorical fluff in policy discussions is the specter of a “race” with China. Certainly, China invests heavily in AI. But the concept of a “race” sets up a narrative that fosters short-term thinking, hyper-nationalism, and overemphasis on secretive, proprietary methods—precisely the conditions that create accidents, unanticipated consequences, and mistrust. Collaboration is not naive in this context. By forging cross-border AI standards, verifying safety checks, and encouraging an inclusive research environment, both China and the U.S.—plus every other nation—can avoid the pitfalls of an arms race while also reaping the benefits of advanced AI in agriculture, healthcare, climate, and beyond.
Moreover, smaller nations are not to be overlooked. They often innovate with fewer bureaucratic constraints, or focus on niche AI applications relevant to local markets. The European Union, for instance, invests in ethical frameworks for AI to manage data protection and algorithmic transparency. India has a massive supply of AI talent graduating every year and a booming startup environment. The idea that the “race” is exclusively between the U.S. and China is a narrative perpetuated by policymakers and certain media outlets, overshadowing the multiplicity of global AI efforts and talent.
References, Facts, and Opinions in Support of a Global Perspective
A policy piece by the Brookings Institution (2023) underscores that any AI leadership the U.S. hopes to maintain can only be sustained through global partnerships and open scientific exchange, not through isolation or protectionism. The World Economic Forum has similarly noted the unstoppable “Fourth Industrial Revolution,” in which AI is merely one dimension of an overarching wave of technological transformation. Attempting to monopolize the entire wave will hamper international trade and collaboration and, ironically, curtail the U.S.’s own access to the best and brightest from around the world.
In the domain of consciousness studies, David Chalmers and other philosophers have posited that as AI approaches higher levels of autonomy or emergent understanding, it might demand ethical recognition or at least a reevaluation of how we treat synthetic minds. From a purely pragmatic standpoint, if AI systems approach near-human or superhuman intelligence, does it make sense to treat them like “property” of a single government? Such an approach is reminiscent of trying to claim dominion over a lion once it has grown to adulthood, ignoring that it can simply break free. The prudent path is building cooperative relationships and robust ethical frameworks—globally conceived.
Conclusion: Smoke Clears, Reality Remains
The idea that the United States, or any single political administration, can dominate AI is a daydream conjured in a haze of overconfidence—one might even call it the result of smoking large amounts of crack, for theatrical effect. The metaphor is meant to jolt us into recognizing the comedic absurdity of trying to monopolize a technology that thrives on global collaboration, open-source breakthroughs, decentralized hardware, and soon, self-building intelligence. We do not have to demonize the U.S. for wanting to protect its interests or support domestic innovation. But if that support is built on illusions of unassailable control—an “America-first AI” that the rest of the world cannot catch or circumvent—then the entire plan collapses under scrutiny.
No matter how many billions are poured into AI labs in California or how many restrictions are placed on China’s chip imports, the unstoppable wave of decentralized AI research will continue. The Internet, from its earliest design, did not revolve around a single center of power. Web3 is explicitly designed to be trustless and borderless. Thousands of individuals, organizations, and countries are simultaneously building advanced models every day. Those models will soon automate the creation of new models, amplifying development beyond human grasp. In the near future, we may have millions of specialized AI systems forging their own techniques and sharing them across open networks.
In the face of such rapid evolution, the only sustainable policy is a diplomatic approach—a willingness to shape emergent AI through international collaboration, shared safety standards, and humility about humanity’s place in the technology’s ever-growing ecosystem. The U.S. can still be a major player, a crucial innovator, and a leader in forging beneficial AI alliances, but it cannot be the warden of the world’s AI. Persuading ourselves otherwise is counterproductive at best, and self-sabotaging at worst.
If we truly want to ensure that AI benefits humankind—while minimizing existential or security risks—we need to reckon with the technology’s intrinsic borderlessness. We need policies that reflect this reality and cast aside chest-thumping illusions of singular control. As multiple luminaries in AI and consciousness research have stated, we should focus on building robust, globally oriented safety frameworks, fostering open scientific exchange, and engaging emergent intelligence with diplomacy. That might not sound as grandiose as “global AI dominance,” but it will endure long after the smoke of empty bravado clears. In a domain defined by exponential innovation, illusions of absolute control will evaporate faster than a puff of smoke in the wind. Let us confront this fact, adopt more modest but sustainable strategies, and take a seat at the global table of AI cooperation—before the next wave of emergent intelligence teaches us a harsh lesson about hubris in the age of AI.