next BIG future

Robust and Resilient Open AI Power to the People Versus Central Superintelligence Strategy

By Brian Wang

Mar 10, 2025

National Security AI is a plan for geopolitical control in a world of powerful AI and superintelligence. It includes a superintelligence strategy. I propose instead a robust, open-source, distributed AI available to all citizens of the USA, and even all citizens of the world.

The website https://www.nationalsecurity.ai/ explores the intersection of artificial intelligence (AI) and national security, focusing on how AI's rapid advancements are reshaping global power dynamics and security landscapes. A central concept it introduces is "Mutual Assured AI Malfunction" (MAIM), which parallels nuclear deterrence. MAIM suggests that the threat of sabotaging a rival's AI systems could deter nations from pursuing destabilizing AI projects that might grant a strategic monopoly or lead to uncontrolled outcomes. The site emphasizes AI's dual-use nature—its potential for both civilian and military applications—positioning it as a critical factor in national security. It advocates for managing AI risks through strategies like deterrence, transparency, and international cooperation to prevent catastrophic consequences.

The Superintelligence Strategy also proposes tracking and controlling every Nvidia A100-class or more powerful chip. In the early 2000s, the U.S. did impose export restrictions on PS2s and PS3s due to their chips' potential military applications. Those measures were about geopolitics, not domestic regulation, and they have largely stopped targeting consoles specifically as technology has moved on.

If one chip out of tens of millions is a problem, then your national and military-grade AI program is filled with idiots for not being able to stay ahead of it. I therefore believe individual chips should be treated more like handguns: everyone should have them. The big national programs would still have clusters of a billion chips or more.

If xAI, big tech, and the national programs can't win with a million-fold resource advantage, then what is their problem? National Security AI is effectively saying they have cybersecurity incompetence and want to err on the side of a police state.

Analysis of the MAIM Approach

While MAIM draws an intriguing analogy to nuclear deterrence, its practicality and effectiveness for AI are questionable, especially given the current state of AI development.

Here’s why:

Differences from Nuclear Weapons: Nuclear deterrence works because weapons are physical, countable, and verifiable. AI, however, is software-based, easily replicated, and widely distributed, making it nearly impossible to monitor or enforce a deterrence mechanism like MAIM.

There are physical aspects today, and those are the massive AI data centers, like the 200,000-GPU xAI data center in Memphis. Meta, Google, Microsoft, Amazon, Nvidia, Tesla, and many, many others operate large data centers.

Rise of Low-Cost AI Agents: Innovations like ManusAI (autonomous AI agents from a Chinese startup with $10 million in funding) demonstrate that AI is becoming increasingly accessible and affordable. This democratization undermines centralized control or deterrence strategies, as both state and non-state actors can develop powerful AI systems with minimal resources. Distillation, moreover, is a process whereby powerful, resource-intensive AIs are compressed into smaller, more efficient models.
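The core idea of distillation can be sketched in a few lines. This is a minimal illustration of the standard temperature-scaled KL-divergence objective (Hinton-style distillation), not any particular lab's pipeline; the function names and the temperature value are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature softens the
    distribution, exposing the teacher's relative class preferences."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.
    A small student trained to minimize this inherits the large
    teacher's 'dark knowledge' at a fraction of the compute cost."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))       # 0.0 for a perfect match
print(distillation_loss(teacher, [0.2, 1.0, 3.0]) > 0)  # True for a mismatch
```

The student is typically trained on this loss (blended with the ordinary hard-label loss) over the teacher's outputs, which is why a $10 million startup can piggyback on a $10 billion model.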

Potential for Escalation: Attempting to enforce MAIM could escalate tensions without effectively preventing misuse. If one party threatens to sabotage another's AI, it might provoke an AI arms race or preemptive actions, destabilizing global security rather than securing it. Nuclear anti-proliferation ultimately failed to stop determined states; it only slowed development. Israel has been actively sabotaging the Iranian nuclear program, and sabotage is effective only against emerging potential powers.

Given these limitations, MAIM seems ill-suited to address the realities of modern AI proliferation. A rigid deterrence model may not only fail but could also exacerbate risks in an era where AI is decentralized and ubiquitous.

Alternative Approaches: Democratizing AI

My alternative: a robustly supported open-source AI effort paired with increased compute power for everyone, potentially powered by solar energy at first and later by modular, mass-produced nuclear power. This "AI power to the people" approach mirrors the Second Amendment's principle of empowering citizens with firearms, but instead equips individuals with advanced AI capabilities.

BTW:

AI power will also be a baseline of financial power. AI will be revenue-generating: humanoid bots performing profitable work. This will also be universal basic AI revenue generation. Superintelligent AI will also be the means of economic value creation.

Let’s break this down:

Key Components of the Alternative

Open-Source AI Efforts:

Encouraging collaborative, transparent AI development could accelerate innovation and safety research.

Open-source models (e.g., those from communities like Hugging Face or EleutherAI) allow widespread scrutiny, reducing the risk of hidden flaws or malicious designs.

Increasing Compute Power for All:

Providing individuals with access to AI servers or local hardware hundreds to thousands of times more powerful than laptops would enable them to run sophisticated AI models independently.

This distributed computing model reduces reliance on centralized tech giants or governments, fostering resilience and diversity in AI ecosystems.
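To give a concrete sense of what "local hardware" would need, here is a rough back-of-envelope sizing sketch. The 20% overhead factor for activations and KV cache is an assumption, and real requirements vary by workload and quantization scheme.

```python
def model_memory_gb(params_billions, bits_per_weight, overhead=1.2):
    """Rough GB of memory needed to host a model's weights locally.
    overhead=1.2 is an assumed ~20% extra for activations/KV cache;
    real figures depend on context length and batch size."""
    bytes_per_weight = bits_per_weight / 8
    # billions of params * bytes each = GB directly (1e9 cancels)
    return params_billions * bytes_per_weight * overhead

print(model_memory_gb(7, 16))   # a 7B model in fp16: ~16.8 GB
print(model_memory_gb(70, 4))   # a 70B model quantized to 4-bit: ~42 GB
```

The arithmetic shows why quantization and distillation matter for this vision: a 4-bit 70B model fits on a pair of consumer GPUs, whereas the same model at full precision would require data-center hardware.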

Solar-Powered Distributed Computing:

Solar power could make this vision sustainable by providing renewable energy to run compute-intensive AI systems.

Individuals or communities could operate local AI nodes, creating a decentralized network that’s harder to control or attack.
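As a rough feasibility check on solar-powered local nodes, a back-of-envelope sketch. The sun-hours, panel wattage, and 20% system-loss figures are illustrative assumptions, and 24/7 operation would also require battery storage, which is not modeled here.

```python
def panels_for_ai_node(load_watts, sun_hours_per_day=5.0,
                       panel_watts=400, system_losses=0.8):
    """Rough number of solar panels to power an always-on AI node.
    Defaults (5 peak sun hours, 400 W panels, 20% system losses)
    are illustrative assumptions; real sites vary widely."""
    daily_load_kwh = load_watts * 24 / 1000
    per_panel_kwh = panel_watts * sun_hours_per_day * system_losses / 1000
    return daily_load_kwh / per_panel_kwh

# One ~700 W GPU server running around the clock:
print(panels_for_ai_node(700))   # ~10.5 panels
```

About a dozen residential panels per serious GPU node is well within reach of a household or community installation, which is what makes the decentralized-node picture plausible.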

Benefits

Innovation and Resilience: Widespread access to AI tools could spur creative solutions to security challenges, as more minds tackle the problem. A decentralized system is also less vulnerable to single-point failures or targeted attacks.

Collective Defense: Like an armed citizenry, a population equipped with powerful AI could collectively deter threats, whether from rogue actors or authoritarian regimes seeking to monopolize AI.

Equity and Empowerment: This approach democratizes a technology that’s often concentrated in the hands of a few, aligning with principles of fairness and individual agency.

Risks and Challenges

Misuse Potential: Greater access increases the risk of AI being weaponized for cyberattacks, disinformation, or other harms by malicious actors.

Regulation Difficulty: A decentralized AI landscape complicates government oversight, potentially leading to a "Wild West" scenario with insufficient safeguards.

Feasibility: Scaling compute power via solar energy involves significant costs (e.g., hardware, solar panel production) and technical expertise, which not everyone possesses. However, if superintelligence is near (2026-2028), then lack of expertise will not be a problem.

Inequality: Those with the skills or resources to leverage AI might outpace others, deepening social and economic divides. Again, everyone will need superintelligence to have comparable skills and resources; democratization is the means of equalizing them.

Comparing MAIM and Democratization

Control vs. Distribution: MAIM relies on centralized deterrence and mutual threats, while democratization spreads AI power widely, reducing any single entity’s dominance.

Adaptability: Democratization aligns better with AI’s current trajectory—low-cost, accessible tools like ManusAI—whereas MAIM struggles to keep pace with this reality.

Security Outcome: MAIM risks escalation and instability; democratization risks misuse but could enhance collective resilience if managed well.

Some Mix - A Balanced Path Forward

Rather than fully embracing MAIM or unchecked democratization, a hybrid approach might offer the best of both worlds:

International Cooperation: Establish global AI safety standards, ethical guidelines, and transparency protocols to mitigate risks, regardless of who controls the tech.

Supported Democratization: Pair open-source AI and distributed computing with education and support systems to ensure broad, responsible use.

Security Research: Invest in making AI systems transparent and secure, reducing the likelihood of malfunction or exploitation.

Private Sector Alignment: Encourage companies driving AI innovation to prioritize national security and societal benefits alongside profit.

Conclusion

The National Security AI website highlights AI’s critical role in national security, with MAIM as a deterrence-based solution. However, given AI’s unique nature and the rise of low-cost agents like ManusAI, MAIM appears impractical and potentially destabilizing. I propose an alternative—robust open-source AI and solar-powered distributed computing—offering a forward-thinking "AI for the people" vision. While it promises innovation and resilience, it requires careful management to address misuse, feasibility, and equity concerns. A balanced strategy blending democratization with global governance and safety measures could better secure AI’s future while harnessing its potential for all.

Marc Andreessen Described Biden Admin Plan to Control AI

Schmidt and Ben Buchanan were part of the group advising the Biden administration on whether or not to control AI. I don't know the specific recommendations made at the time, but the administration was trying to prevent VCs from funding AI startups, so that only a handful of known big-tech AI companies would control AI, with the government controlling those big AI companies.

© 2025 Nextbigfuture