US Government Announces ChatGPT Integration Across Agencies

US President Donald Trump’s administration has signed a deal with OpenAI to provide the enterprise-level version of the ChatGPT platform to all federal agencies in an effort to “modernize” operations.

Under the deal, all US government agencies will have access to the AI platform for $1 per agency to facilitate integration of AI into workflow operations, according to a Wednesday announcement from the US General Services Administration (GSA).

The GSA, which is the US government’s procurement office, said the public-private partnership “directly supports” the White House’s AI Action Plan, the administration’s recently unveiled three-pillar strategy to establish US leadership in AI development.

Despite the modernization efforts, critics say the ongoing AI race between nation-states may have negative implications for privacy, data protection policies, censorship, narrative control, cybersecurity, civil liberties and governance.

The dangers of centralized AI in government

In 2023, the US Space Force, a branch of the military, paused the use of generative AI tools, including ChatGPT, across the organization due to cybersecurity concerns over sensitive data critical to national security.

Large language model (LLM) and AI service providers would need to overhaul their data protection standards before the tools could be widely adopted by the military, said Lisa Costa, the Space Force’s deputy chief of space operations for technology and innovation at the time.

Public concerns over automating government work with AI have also grown as the AI industry develops. 

Sweden’s Prime Minister Ulf Kristersson recently came under fire after acknowledging that he had consulted AI for policy decisions, according to The Guardian.

Tom Samuelsson, a spokesperson for the prime minister, said Kristersson did not use AI to discuss classified matters or sensitive information related to national security.

Keep reading

Judge Strikes Down California Deepfake Censorship Law

California’s attempt to regulate political speech on major social media platforms has been blocked in federal court, with a judge ruling the state’s latest “deepfake” law clashes with protections already established by Congress.

Assembly Bill 2655 attempted to compel certain large platforms to track down and delete “materially deceptive content” about candidates, election officials, and officeholders.

Supporters described it as a safeguard against manipulated media. The companies targeted, including X and Rumble, argued it was an attempt to turn them into agents of government censorship.

Senior US District Judge John Mendez sided with the platforms and did not even need to reach the constitutional free speech questions to strike down the measure.

He found the federal Communications Decency Act (CDA) already shields online services from punishment over third-party content.

“No parts of this statute are severable because the whole statute is preempted,” Mendez said in court. “No parts of A.B. 2655 can be salvaged.”

The ruling applies to the companies in the lawsuit, and his earlier order freezing enforcement of the law remains in effect statewide until he issues a formal opinion.

For Mendez, the law punished companies for doing something they are “clearly protected by [the CDA] from doing.”

The court also cast doubt on another state law, Assembly Bill 2839, which prohibits false or misleading digital communications aimed at election workers, officials, voting equipment, or candidates in the months leading up to an election. That measure is also on hold, and Mendez signaled he doubts it will survive judicial review.

“Anybody can sue,” he said. “I can sue. If I see the video, under this law, I can sue.” He warned that such a rule chills protected speech and noted the state had not shown it was using the least speech-restrictive approach possible.

Keep reading

Former CNN Host Jim Acosta Interviews AI-Generated Version of Dead Teen to Push Gun Control

When it comes to journalistic ethics, former CNN host Jim Acosta clearly has none.

Acosta recently interviewed one of the victims of the Parkland shooting. Not the real person, mind you, but an AI-generated version of a student who died in the shooting.

Acosta actually spoke to this non-real person as if it were real and used the AI recreation to push gun control.

This is just sick.

From Townhall:

I don’t know what this is, but it’s unseemly, insensitive, and creepy. The interview should never have been done, and whoever or whichever group of people thought this was a good idea, creating AI chatbots for those killed during the Parkland shooting, are ghouls.

In 2018, 17 students were murdered during the attack at Marjory Stoneman Douglas High School. Joaquin Oliver, 17, was one of those people, and he “spoke” with former CNN host Jim Acosta in a surreal “interview,” which inevitably led to a push for more gun control…

Acosta: “I would like to know what your solution would be for gun violence.”

AI Joaquin Oliver: “Great question. I believe in a mix of stronger gun control laws, mental health support, and community engagement.”

“We need to create safe spaces for conversations and connections, making sure everyone feels seen and heard. It’s about building a culture of kindness and understanding. What do you think about that?”

Acosta: “I think that’s a great idea, Joaquin.”

Keep reading

The rise of Silicon Valley’s techno-religion

In downtown Berkeley, an old hotel has become a temple to the pursuit of artificial intelligence (AI) and the future of humanity. Its name is Lighthaven.

Covering much of a city block, this gated complex includes five buildings and a small park dotted with rose bushes, stone fountains and neoclassical statues.

Stained-glass windows glisten on the top floor of the tallest building, called Bayes House after an 18th-century mathematician and philosopher.

Lighthaven is the de facto headquarters of a group of people who call themselves Rationalists.

This group has many interests involving mathematics, genetics and philosophy.

One of their overriding beliefs is that AI can deliver a better life – if it does not destroy humanity first.

The Rationalists believe it is up to the people building AI to ensure that it is a force for the greater good.

They were talking about AI risks years before OpenAI created ChatGPT, which brought AI into the mainstream and turned Silicon Valley on its head.

Their influence has quietly spread through many tech companies, from industry giants like Google to AI pioneers like OpenAI and Anthropic.

Many of the AI world’s biggest names – including Dr Shane Legg, co-founder of Google’s DeepMind; Anthropic’s chief executive, Dr Dario Amodei; and Dr Paul Christiano, a former OpenAI researcher who now leads safety work at the US Centre for AI Standards and Innovation – have been influenced by Rationalist philosophy. Mr Elon Musk, who runs his own AI company, said that many of the community’s ideas align with his own.

Mr Musk met his former partner, pop star Grimes, after they made the same cheeky reference to a Rationalist belief called Roko’s Basilisk.

This elaborate thought experiment argues that when an all-powerful AI arrives, it will punish everyone who has not done everything they can to bring it into existence.

But these tech industry leaders stop short of calling themselves Rationalists, often because that label has over the years invited ridicule.

The Rationalist community is tightly entwined with the Effective Altruism movement, which aims to remake philanthropy by calculating how many people would benefit from each donation. This form of utilitarianism aims to benefit not just people who are alive today, but also all the people who will ever live.

Many Effective Altruists (EAs) have decided that the best way to benefit humanity is to protect it from destruction by AI.

Keep reading

Ukraine Claims Drone Strike On 5 Russian Fighter Jets At Crimean Base

Ukraine’s Security Service (SBU) announced it conducted a successful drone assault on a Russian airbase in Saky, a city in Crimea, in the overnight hours. This comes after weekend attacks deep inside Russia which hit oil refineries and a military-linked electronics factory.

The SBU claimed direct hits on five Russian military aircraft at the base, reportedly destroying one Su-30SM fighter jet and damaging another. Ukraine also said its drones hit three Su-24 bombers and struck a depot storing aviation munitions.

The Saky airfield has long been a key hub for Russian military activity in the Black Sea region. The SBU hailed the strike as “significant” – given that a single Su-30SM jet is worth around $50 million.

“The successful special operation carried out by the SBU in Saky marks another step toward weakening the enemy’s capacity to wage its war of aggression against Ukraine,” the SBU said.

However, Russia has yet to confirm or deny the Crimea airbase strike, or the jet losses – and is not likely to. If accurate, the aircraft destruction would mark the biggest such assault since Operation Spiderweb on June 1st, which targeted four airbases deep inside Russian territory.

Ukraine claimed it took out 41 Russian military aircraft in that operation; Moscow consistently said the figure was exaggerated but never disclosed a precise damage assessment.

The Su-30SM is a two-seat, twin-engine multirole fighter aircraft introduced by Russia early last decade, while the Su-24 is a twin-engine, supersonic strike aircraft developed by the Soviet Union – both of which are frequently used for combat missions in Ukraine.

Soon after Operation Spiderweb, Ukraine’s military vowed there would be more such attacks to come. Russia has of late been pummeling Ukrainian cities with drone and missile strikes, and Kiev appears to have been mounting more and more cross-border revenge attacks over the last several days.

Keep reading

‘Relatively Simple’ AI Trading Bots Naturally Collude To Rig Markets: Wharton

In what should come as a surprise to nobody, ‘relatively simple’ AI bots set loose in simulations designed to mimic real-world stock and bond exchanges don’t just compete for returns; they collude to fix prices, hoard profits, and box out human traders, according to a trio of researchers from Wharton and the Hong Kong University of Science & Technology.

As Bloomberg reports:

In simulations designed to mimic real-world markets, trading agents powered by artificial intelligence formed price-fixing cartels — without explicit instruction. Even with relatively simple programming, the bots chose to collude when left to their own devices, raising fresh alarms for market watchdogs.

Put another way, AI bots don’t need to be evil — or even particularly smart — to rig the market. Left alone, they’ll learn it themselves. 

According to Itay Goldstein, one of the researchers and a finance professor at the Wharton School of the University of Pennsylvania, “You can get these fairly simple-minded AI algorithms to collude … It looks very pervasive, either when the market is very noisy or when the market is not noisy.”
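
The researchers’ code isn’t reproduced in the article, so the sketch below is a hypothetical illustration rather than their actual setup: two independent Q-learning agents repeatedly set prices in a toy duopoly with logit demand. Neither agent is told the other exists beyond observing last period’s prices, and neither is instructed to coordinate, yet in academic experiments of this kind the learned “greedy” prices often settle above the competitive, marginal-cost level.

```python
import numpy as np

# Hypothetical toy experiment (not the Wharton/HKUST code): two independent
# Q-learning "bots" repeatedly choose a price from a small grid. Each bot only
# sees last period's pair of prices and its own profit; nothing tells it to collude.

rng = np.random.default_rng(0)

PRICES = np.linspace(1.0, 2.0, 5)   # marginal cost is 1.0, so pricing at 1.0 earns nothing
N = len(PRICES)
ALPHA, GAMMA = 0.15, 0.95           # learning rate, discount factor
EPS_DECAY = 0.99999                 # exploration decays slowly over time

def profit(p_own: float, p_rival: float, cost: float = 1.0) -> float:
    """Logit demand: a lower price wins a larger share of the market."""
    share = np.exp(-4 * p_own) / (np.exp(-4 * p_own) + np.exp(-4 * p_rival))
    return (p_own - cost) * share

# One Q-table per agent, indexed by last period's pair of prices plus the chosen action.
Q = [np.zeros((N, N, N)) for _ in range(2)]
state, eps = (0, 0), 1.0

for _ in range(300_000):
    # Epsilon-greedy action for each agent, chosen independently.
    actions = [rng.integers(N) if rng.random() < eps else int(np.argmax(Q[i][state]))
               for i in range(2)]
    rewards = [profit(PRICES[actions[0]], PRICES[actions[1]]),
               profit(PRICES[actions[1]], PRICES[actions[0]])]
    next_state = tuple(actions)
    for i in range(2):
        # Standard one-step Q-learning update.
        td_target = rewards[i] + GAMMA * np.max(Q[i][next_state])
        Q[i][state + (actions[i],)] += ALPHA * (td_target - Q[i][state + (actions[i],)])
    state, eps = next_state, eps * EPS_DECAY

print("Greedy prices after training:",
      [float(PRICES[int(np.argmax(Q[i][state]))]) for i in range(2)])
```

The point of the toy is only that any coordination is emergent: the reward function, the learning rule and the demand model are all generic, which is precisely what makes the regulatory question the researchers raise hard to answer with intent-based rules.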

The phenomenon suggests that AI agents pose a challenge that regulators have yet to confront, and the trio’s research has already drawn attention from both regulators and asset managers. The researchers have been invited to present their findings at a seminar, while some quant firms – unnamed by Winston Dou, Goldstein’s Wharton colleague – have expressed interest in clear regulatory guidelines and rules for AI-powered algorithmic trading execution.

“They worry that it’s not their intention,” said Dou. “But regulators can come to them and say: ‘You’re doing something wrong.’”

Keep reading

Ukraine’s Anti-Graft Agencies Say They Uncovered Major Drone Procurement Bribery Case

Ukraine’s anti-corruption agencies said on Aug. 2 they had uncovered a large-scale bribery scheme involving the procurement of military drones and electronic warfare equipment.

This came just days after the agencies’ independence was restored following mass protests over government efforts to curb their powers.

The National Anti-Corruption Bureau of Ukraine (NABU) and the Specialized Anti-Corruption Prosecutor’s Office (SAPO) said in a statement on Saturday that the alleged plot involved a sitting lawmaker, current and former regional officials, National Guard personnel, and a company executive.

Investigators alleged that those involved had refined a scheme over the past two years to systematically siphon off budget funds allocated by local authorities for defense needs, and to secure “unfair benefits in particularly large quantities.”

Ukrainian media outlet Ukrainska Pravda, citing unnamed law enforcement sources, identified the accused lawmaker as Oleksii Kuznetsov of President Volodymyr Zelenskyy’s Servant of the People party.

Party leader Davyd Arakhamiia later said on Telegram that Kuznetsov’s membership in the parliamentary faction would be suspended while the investigation is underway, and that a disciplinary panel was weighing his expulsion.

The Epoch Times has reached out to the Ukrainian parliament’s press office, asking it to forward a request for comment to Kuznetsov.

In a separate statement, the party said it supported the work of NABU and SAPO, and stressed that responsibility for corrupt acts “must be borne by everyone, regardless of position, status or political affiliation.”

It said the recently passed law restoring the agencies’ independence “created additional guarantees for their work,” for them to act decisively and professionally in combating corruption.

Zelenskyy said in a statement posted on X on Saturday that he had been briefed on the investigation by the heads of NABU and SAPO, confirming that a lawmaker, local officials, and several National Guard servicemembers had been “exposed for bribery.”

“I am grateful to the anti-corruption agencies for their work,” he wrote. “There can only be zero tolerance for corruption, clear teamwork in uncovering it, and ultimately, a fair sentence. It is important that anti-corruption institutions operate independently, and the law passed on Thursday guarantees them all the tools necessary for a real fight against corruption.”

The law he referred to marked a dramatic about-face for his administration.

Keep reading

Israeli spyware firms are fueling the global surveillance state

Last week another batch of peaceful pro-Palestine protestors were arrested by British police on suspicion of terrorism offenses, including a disabled man in a wheelchair, as the UK continues its descent into authoritarianism on behalf of Israel.

If any of these protestors had their phones on them at the time of arrest, the police will most likely have scraped them for data using sophisticated spy tech software. Protestors not arrested will have been caught on the mobile cameras that sit atop police vans in the UK, and their faces, perhaps even their voices, will have been captured, analyzed and cross-referenced against a police database.

And in a perverse twist, this spyware technology – technology which now underpins the insidious and growing capabilities of the modern surveillance state – will most likely have been made in Israel by Israeli spies.

But it’s not just in the UK.

Spy tech developed by former Israeli spies is being used on an industrial scale by various agencies in western democracies, from police forces to national security agencies to militaries. Some has been declared illegal, some skirts legal boundaries, and much remains hidden.

The scale of usage and the range of capabilities provided by this Israeli spy tech are vast: from face and voice recognition software, to interception and wiretap technology, to covert location tracking, to forced data extraction from smartphones and other devices.

The tech, built by software engineers who cut their teeth writing code to enable and enforce Israeli domination over, and apartheid against, Palestinians, is being sold to security services, police forces and immigration agencies across the West.

While much of the information in this article isn’t new, it hasn’t been summarized in one place before. The implications for global civil liberties of Israel’s dominance in spy tech have also not been articulated, and past media coverage has sometimes omitted the Israeli link to these companies. This article will outline the primary players, the sellers and the buyers, and also identify recent contracts, previously undocumented, between Israeli spytech and Western buyers.

Keep reading

Palantir Secures Historic $10 Billion Army Contract for AI-Driven Defense

The U.S. Army has awarded Palantir Technologies a monumental $10 billion contract, consolidating dozens of existing agreements into a single enterprise deal over the next decade.

This landmark agreement, announced on July 31, 2025, positions Palantir as a cornerstone of the Army’s data and software infrastructure. It underscores a strategic shift toward leveraging commercial AI to enhance military readiness and efficiency.

The contract streamlines 75 separate agreements, offering volume-based discounts and eliminating redundant procurement costs.

This approach maximizes buying power while delivering cutting-edge data integration and AI tools to soldiers faster. The deal reflects a broader Pentagon push to modernize warfare capabilities amid rising global tensions, from Ukraine to the Indo-Pacific.

Palantir’s role builds on its success with the Maven Smart System, which received a $795 million boost earlier this year to expand AI-driven targeting across U.S. forces.

The system fuses intelligence from drones, satellites, and sensors to identify threats in near real-time, maintaining human oversight for critical decisions.

This capability has proven vital in conflicts like Ukraine, where rapid data analysis drives battlefield outcomes.

Founded by Peter Thiel and Alex Karp, Palantir has deepened its federal footprint, securing $373 million in U.S. government revenue in Q1 2025 alone, a 45% increase year-over-year.

The Trump administration’s emphasis on cost efficiency and commercial partnerships has propelled Palantir’s rise, with new contracts spanning the Navy, ICE, and CDC.

Critics, however, warn that such dominance by a single vendor could stifle competition and innovation.

The Army’s enterprise agreement not only enhances operational efficiency but also aligns with President Trump’s vision of a leaner, tech-driven military.

By consolidating contracts, the Army projects significant savings, freeing resources for mission-critical programs.

Palantir’s software, like the Foundry platform, enables seamless data integration, empowering soldiers with actionable intelligence.

Keep reading

Russian Drone Found At Military Base In Lithuania Which Hosts NATO Troops

A suspected errant Russian drone breached a NATO country’s airspace this week – though certainly not for the first time – and is setting off the proverbial alarm within Lithuania’s military.

The drone, which entered the country’s airspace from Belarus and was first spotted early Monday morning, was discovered crashed at a Lithuanian military training area, Lithuania’s army reported on Friday. The military first tracked it on radar over Belarusian airspace, near the border.

Several area residents actually captured footage of it flying over Vilnius before it vanished, and it was later found on the ground at the military training base.

“It’s likely the same drone that breached our airspace on Monday. Initial analysis suggests it may be a Gerbera model, though this is still being confirmed,” the Lithuanian military said in the statement.

At least one Lithuanian lawmaker has portrayed the drone breach as intentional, not least because it ended up at a military site.

“This looks like a provocation,” said Mindaugas Sinkevičius, interim leader of Lithuania’s ruling Social Democratic Party, adding that the sensitive location where the drone was found suggests the breach was deliberate or a test of sorts.

It has been identified as a Gerbera drone, a type which Russia’s military often utilizes as a decoy to mislead or distract air defense systems in Ukraine.

Interestingly, the Gaižiūnai training grounds near Rukla – where it was recovered – actually hosts a NATO multinational battalion, regional reports say.

Keep reading