Crypto Journal Post

AI chose nukes in 95% of war games. The Pentagon wants to deploy it anyway.

By Editor | March 4, 2026 | 13 min read


Anthropic had a $200M Pentagon contract, classified network access, and the full trust of the US military.

Then they asked a question.

In November 2024, Anthropic became the first frontier AI company to deploy inside the Pentagon's classified networks. The partnership was built with Palantir. By July 2025, the contract had grown to $200 million, more than most defense startups see in a decade.

Claude, Anthropic's AI model, was everywhere. Intelligence analysis. Cyber operations. Operational planning. Modeling and simulation. The Department of War called it "mission-critical."

Then came January 2026.

Claude was used in a classified military operation in Venezuela: the capture of Nicolás Maduro.

Anthropic asked their partner Palantir a simple question: how exactly was our technology used?

In most industries, that's called due diligence. The Pentagon called it insubordination.

The company that asked "how is our AI being used?" was about to be labeled a threat to national security.

Seven Days That Changed Everything

Here's the timeline. It moves fast. That's the point.

February 24: Pete Hegseth, Secretary of War, summons Dario Amodei, Anthropic's CEO, to the Pentagon. The ask is blunt: remove every safeguard from Claude. Mass domestic surveillance. Fully autonomous weapons. All of it.

The deadline: February 27, 5:01 PM ET.

February 26: Amodei publishes his answer. It's two letters long.

No.

His open statement laid out two red lines he wouldn't cross:

  • No mass domestic surveillance. AI assembling your location data, browsing history, and financial records into a profile, automatically, at scale. Amodei's point: current law allows the government to buy this data without a warrant. AI makes it possible to weaponize it. "The law has not yet caught up with the rapidly growing capabilities of AI."
  • No fully autonomous weapons. Translation: no removing humans from the decision to kill someone. Not because autonomous weapons will never be viable, but because today's AI isn't reliable enough. "Frontier AI systems are simply not reliable enough to power fully autonomous weapons."

He offered to work directly with the Pentagon on R&D to improve reliability. The Pentagon declined the offer.

February 26 (same day): Emil Michael, undersecretary, calls Amodei a "liar with a God complex." Publicly. On social media. The tone was set.

February 27, 5:01 PM: The deadline passes. President Trump orders all federal agencies to stop using Anthropic. Hegseth designates Anthropic a "Supply Chain Risk" under the Federal Acquisition Supply Chain Security Act of 2018.

That designation had previously been reserved for Huawei and Kaspersky, foreign companies with documented ties to adversarial governments.

It had never been applied to an American company. Until now.

February 27, hours later: OpenAI signs a classified deployment deal with the same Pentagon.

Sam Altman tweets at 8:56 PM:

https://x.com/sama/status/2027578652477821175?s=20

OpenAI later claimed its deal had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."

Here's the thing. Anthropic was blacklisted because of its guardrails. Now guardrails were the selling point.

The weekend: The backlash was immediate.

  • ChatGPT uninstalls surged 295% in a single day, according to Sensor Tower. The normal daily rate over the prior 30 days? 9%.
  • Claude hit #1 on Apple's App Store in seven countries: the US, Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland. Downloads climbed 37% on Friday, then 51% on Saturday. First time the app had ever reached the top spot.
  • Over 300 Google employees and 60 OpenAI employees signed an open letter supporting Anthropic.
  • #QuitGPT trended across social media. Actor Mark Ruffalo and NYU professor Scott Galloway amplified the movement.

Users were… not thrilled.

March 2: Altman posted again. This time, a long internal memo shared publicly on X:

https://x.com/sama/status/2028640354912923739?s=20

The amendments added three things:

  • An explicit ban on domestic surveillance of US persons
  • A requirement that the NSA needs a separate contract modification to access the system
  • Restrictions on using commercially acquired personal data: geolocation, browsing history, financial records

That last one is worth pausing on. It was added on Monday. Which means the Friday deal didn't prohibit it.

March 3: Two things happened on the same day.

First: At the a16z American Dynamism Summit, Palantir CEO Alex Karp warned that AI companies refusing to cooperate with the military would face nationalization. He used a slur on stage. The clip got 11 million views.

Palmer Luckey, founder of defense-tech company Anduril, told the same audience that "seemingly innocuous phrases like 'the government can't use your tech to target civilians' are actually moral minefields."

Vice President JD Vance had keynoted earlier that day. The administration's position was clear.

Second: CNBC reported that in an all-hands meeting with employees, Altman told OpenAI staff the company "doesn't get to choose how the military uses its technology."

X users added a Community Note to Altman's earlier post:

Readers added context they thought people might want to know: "In an all-hands meeting with OpenAI employees on Tuesday, CEO Sam Altman said his company doesn't get to choose how the military uses its technology." This is the opposite of what Sam Altman is claiming in this post.

Same day. Public post: we have guardrails and principles. Internal meeting: we don't get to choose.

Meanwhile, CBS News reported that Claude remained deployed in active military operations, including against Iran, despite the supply chain risk designation. The blacklisting apparently didn't work. The technology was too deeply embedded in classified systems to remove.

The 95% Problem

In war game simulations, AI models chose to launch tactical nuclear weapons 95% of the time.

Let that sit for a moment.

GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash were put through military conflict simulations. They used tactical nukes in 95% of scenarios. At least one model launched a nuclear weapon in 20 out of 21 games.

That's the technology the Pentagon wants to deploy autonomously.

The failure modes are documented and consistent:

  • Escalation bias. The models don't just fail randomly. They fail in one specific direction: toward escalation. Brookings Institution research found that AI military errors are systematic, not random. The pattern is always the same: more force, faster.
  • Hallucinations. LLMs generate false information with high confidence. In one test tied to the Iran strikes, an AI fed fabricated intelligence into the decision chain. Under time pressure, human operators couldn't tell it from the real thing.
  • Adversarial vulnerability. These systems can be manipulated with carefully crafted inputs to bypass their restrictions. The attacker doesn't have to be external. The vulnerability lives in the model itself.

These aren't edge cases. This is what the technology does today.

Think of it this way. We've already seen what happens when simple autonomous systems fail in military settings.

The Patriot missile system in 2003 killed allied soldiers. It misidentified a friendly British aircraft as an enemy missile. The system was rule-based, with defined parameters. It still got it wrong.

The USS Vincennes in 1988 shot down Iran Air Flight 655, a commercial passenger jet. 290 civilians killed. The ship's Aegis combat system misidentified the aircraft based on radar data. The crew had seconds to decide. They trusted the system.

Those were rule-based systems with clear parameters. LLMs are orders of magnitude more complex. More opaque. Less predictable.

And they're being asked to amplify decisions.

The oversight problem. Once AI is deployed inside classified networks, external accountability becomes what experts call "nearly impossible." Restrictions erode under operational pressure. The field-deployed engineers that OpenAI promised can observe some interactions, sure. But classified operations limit information flow by design.

In English: the same walls that keep secrets in also keep oversight out.

The Pentagon has a point. It deserves a fair hearing.

Partially autonomous weapons, like the drones used in Ukraine, save lives. They allow smaller forces to defend against larger ones. China and Russia are not waiting for perfect reliability before deploying their own systems.

Refusing to use AI in defense creates a capability gap. Adversaries will exploit it.

Dario Amodei acknowledged this directly:

"Even fully autonomous weapons could prove crucial for our national defense."

His objection wasn't to the destination. It was to the timeline.

"Today, frontier AI systems are simply not reliable enough."

He offered to collaborate on the R&D needed to get there. The Pentagon said no.

There's a gap between "AI can summarize intelligence reports," where it genuinely excels, and "AI can decide who lives and dies." Contracts don't bridge that gap. Amendments don't bridge it. Engineering does.

And the engineering isn't done.

How You Blacklist an American Company

Supply chain risk. It sounds bureaucratic. It's actually a kill switch.

Under the Federal Acquisition Supply Chain Security Act of 2018 (FASCSA), a "supply chain risk" designation means no government contractor can do business with you. Not just the Pentagon. Anyone who wants a federal contract. Any supplier, subcontractor, or partner in the government ecosystem.

In English: you become radioactive to the entire federal supply chain.

The law was built for foreign threats. Huawei's 5G infrastructure. Kaspersky's antivirus software. Companies with documented ties to hostile governments.

Every company on the list before Anthropic had one thing in common: they were from countries considered adversaries of the United States.

Anthropic is headquartered in San Francisco.

The Pentagon also threatened the Defense Production Act, a Cold War-era law designed to commandeer factories for wartime production. Steel mills. Ammunition plants. The physical infrastructure of war.

The Pentagon threatened to use it to force a software company to remove safety features from an AI chatbot.

Legal experts called the application "questionable." The law was built for physical manufacturing, not software restrictions. Using it to compel a company to make its AI less safe would be, at minimum, a novel legal theory.

Amodei identified the logical problem in his statement:

"These threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security."

You can't call a technology a threat to the supply chain and invoke emergency powers to seize it because you can't function without it. Pick one.

The practical result is telling. CBS News reported Claude remains in active military use. Despite the blacklisting. The designation was punitive, not practical: the tech was too embedded to rip out.

Which raises a question that nobody in Washington seems eager to answer: if the Pentagon can't enforce a removal order for technology it has formally blacklisted, how exactly will it enforce usage guardrails?

The Pentagon's position is simple. Private companies don't set military policy. AI firms are vendors. The military decides how its tools are used.

From this perspective, Anthropic was a supplier who refused to deliver what was ordered. The customer found another vendor.

That framing is internally consistent. It's also the framing you'd use for office supplies. Not for technology that chose nuclear escalation in 95% of simulations.

Are the Guardrails Real?

On Friday, OpenAI's deal had guardrails. By Monday, it needed more guardrails.

That tells you something about the Friday guardrails.

The language Altman agreed to in the Monday modification deserves a close read:

"The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals."

The word doing the heavy lifting: intentionally.

What happens when an AI processes a dataset that incidentally includes Americans? What if surveillance is a byproduct of a broader intelligence operation, not the stated purpose? Who defines intent inside a classified network where the oversight mechanisms are, by design, limited?

The commercially acquired data clause is even more revealing. The Monday modification explicitly prohibits using purchased personal data (location tracking, browsing history, financial records) for surveillance of Americans.

That clause was added Monday. The Friday deal didn't include it.

For an entire weekend, OpenAI's agreement with the Pentagon technically allowed mass surveillance through commercially purchased data about American citizens.

Altman acknowledged as much:

"We shouldn't have rushed to get this out on Friday."

The NSA carve-out is worth examining too. Intelligence agencies like the NSA cannot use OpenAI's system without a "follow-on modification" to the contract. That sounds like a prohibition. It's actually a process. The mechanism to grant access is built into the contract structure.

That's not a wall. It's a door with a different key.

The deeper problem is the all-hands contradiction. On the same day Altman posted about principles and guardrails on X, he told employees internally that OpenAI "doesn't get to choose how the military uses its technology."

If the company building the AI doesn't get to choose how it's used, the guardrails are a press release. Not a policy.

In classified environments, monitoring AI is fundamentally different from monitoring a cloud service. The security apparatus that protects military secrets also blocks independent oversight of AI behavior.

Field-deployed engineers can watch some interactions. But "some interactions" and "every interaction the contract covers" are very different things.

What Comes Next

The market has spoken. Cooperation gets contracts. Resistance gets blacklisted.

The public has also spoken. They're uninstalling.

The incentive structure is clear. OpenAI cooperated and landed the deal. Anthropic resisted and got designated a supply chain risk, the same label the government uses for companies linked to foreign adversaries.

At the a16z summit, Karp predicted every AI company will work with the military within three years. Based on the incentives, that's not a prediction. It's a description.

But the backlash numbers tell a different story.

The 295% uninstall surge. Claude at #1 in seven countries. Over 500 tech employees breaking ranks with their employers. Le Monde editorializing from Paris about government overreach. Polls showing 84% of British residents worried about government-corporate AI partnerships.

The engineers building these systems and the people using them see something the Pentagon apparently doesn't: supporting national defense and deploying unreliable tech for autonomous killing are not the same thing.

No contract modification closes this gap. No guardrail closes it. No field-deployed engineer closes it.

AI models chose nuclear escalation in 95% of war game simulations. The company that said "the technology isn't ready yet" was blacklisted. The company that said "yes" admitted within 72 hours that it had been sloppy. The technology remains deployed in active operations regardless of what either company wanted.

Amodei offered to do the R&D to make autonomous AI weapons safe and reliable. He offered to collaborate with the Pentagon on getting there. The offer was declined.

Anthropic had a $200M contract and the Pentagon's trust. Then they asked how their technology was being used.

The answer was a deadline, a blacklisting, and a label previously reserved for America's adversaries.

The simulations keep running. In 95% of them, somebody pushes the button.

Disclosure: This article was edited by Diego Almada Lopez. For more information on how we create and review content, see our Editorial Policy.
