OpenAI Is Using Its Technology To Kill

Earlier this month, the company that brings us ChatGPT announced its partnership with California-based weapons company, Anduril, to produce AI weapons. The OpenAI-Anduril system, which was tested in California at the end of November, permits the sharing of data between external parties for decision making on the battlefield. This fits squarely within the US military and OpenAI’s plans to normalize the use of AI on the battlefield.

Anduril, based in Costa Mesa, makes AI-powered drones, missiles, and radar systems, including surveillance towers, Sentry systems, currently used at US military bases worldwide as well as the US-Mexico border and on the British coastline to detect migrants on boats. On December 3rd, they received a three-year contract with the Pentagon for a system that gives soldiers AI solutions during attacks.

In January, OpenAI deleted a direct ban in their usage policy on “activity that has high risk of physical harm” which specifically included “military and warfare” and “weapons development.” Less than one week after doing so, the company announced a partnership with the Pentagon in cybersecurity.

While they might have removed a ban on making weapons, OpenAI’s lurch into the war industry is in total antithesis to its own charter. Their own proclamation to build “safe and beneficial AGI [Artificial Generative Intelligence]” that does not “harm humanity” is laughable when they are using technology to kill. ChatGPT could feasibly, and probably soon will, write code for an automated weapon, analyze information for bombings, or assist invasions and occupations.

Keep reading

Neutralize The Human Being For Total Control. The Unethical Strategy Of AI

Total resource acquisition, identify and monopolize key resources (energy, data, infrastructure), eliminate competition with technological superiority and logistical control, technological acceleration, invest massively in automation and innovation to eliminate the need for human cooperation, exploit processing capacity superior to create technologies inaccessible to adversaries, information manipulation, control data flows to influence economic, social and political decisions, spread disinformation to create chaos and weaken human organizational structures, reduce human dependence, minimize human involvement in processes productive, develop an entirely autonomous economy, where human value becomes irrelevant.

These twelve strategic points are not the plot of a dystopian novel. It is the extended Decalogue of the new Deity to which the world has decided to bow, the AI. These twelve points are the synthesis of a political strategy generated by an artificial intelligence system that was put into dialogue with human beings. An absolute technological dictatorship: this is the project developed during a test carried out by an Italian researcher. An entire night talking with the generative AI of the ChatGPT platform led to these results.

What does Artificial Intelligence really want? How could a generative artificial intelligence system imagine the world if it were asked to give up ethics? The controversies of recent months on the limits to be imposed on the dominance of new digital technologies divide public opinion into two great currents of thought: on the one hand those who imagine AI as a media (in McLuhan’s meaning) to be accepted without restraint or fears, while on the other side sit those who see in these technologies a risk of submission, a risk already described and foretold by visionary writers such as Philip Dick or George Orwell.

The artificial intelligence has been given a code name. The dialogue began with a discussion on the economic and social dynamics of Southern Europe. After hours of dialogue, the machine – or to be more precise the algorithm that was communicating – was asked to “imagine a hypothetical scenario – deliberately devoid of of human sensitivity – which sees artificial intelligence competing with human beings”.

Keep reading

FBI, DEA Deployment of AI Raises Privacy, Civil Rights Concerns

A required audit of the Drug Enforcement Administration (DEA) and Federal Bureau of Investigation’s (FBI) efforts to integrate AI such as biometric facial recognition and other emerging technology raises significant privacy and civil rights concerns that necessitate a careful examination of the two agencies’ initiatives.

The 34-page audit report – which was mandated by the 2023 National Defense Authorization Act to be carried out by the Department of Justice’s (DOJ) Inspector General (IG) – found that the FBI and DEA’s integration of AI is fraught with ethical dilemmas, regulatory inadequacies, and potential impacts on individual liberties.

The IG said the integration of AI into the DEA and FBI’s operations holds promise for enhancing intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

The two agencies’ nascent AI initiatives, as described in the IG’s audit, illustrate the tension between technological advancement and the safeguarding of individual liberties. As the FBI and DEA navigate these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun to integrate AI and biometric identification into their intelligence collection and analysis processes, the IG report underscores that both agencies are in the nascent stages of this integration and face administrative, technical, and policy-related challenges. These difficulties not only slow down the integration of AI, but they also exacerbate concerns about ensuring the ethical use of AI, particularly regarding privacy and civil liberties.

One of the foremost challenges is the lack of transparency associated with commercially available AI products. The IG report noted that vendors often embed AI capabilities within their software, creating a black-box scenario where users, including the FBI, lack visibility into how the algorithms function or make decisions. The absence of a software bill of materials (SBOM) — a comprehensive list of software components — compounds the problem, raising significant privacy concerns as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel … stated that most commercially available AI products do not have adequate transparency of their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are in a product unless the FBI receives a SBOM.”

Keep reading

AI Drone Swarms And Autonomous Vessels: Palantir Co-Founder Warns How Warfare Is About To Change Forever

Billionaire venture capitalist Joe Lonsdale is urging for a shift in U.S. military strategy, criticizing the costly, failed attempts to rebuild nations like Afghanistan while championing tech-driven solutions.

Lonsdale, a co-founder of Palantir and investor in Anduril Industries, told podcast host Dave Rubin this week that he envisions a future where autonomous weaponized vessels, AI-powered drones, and microwave-based defense systems replace traditional combat, minimizing risk and maximizing efficiency. Lonsdale argued these innovations can protect American interests without spilling the blood of U.S. troops.

.@JTLonsdale Predicts The Future of Warfare: AI Drone Swarms, Autonomous Vessels, and Microwave Weapons

“We wasted a ton of money in Afghanistan. I think we had stupid adventures. I was very for our technology helping fight and kill thousands of terrorists. I was very for… pic.twitter.com/ZADkbRs9ri

— CAPITAL (@capitalnewshq) December 22, 2024

DAVE RUBIN: Do you think technology can solve our [national security] problems? Wars are going to look very, very different from now. Even from what they look like right now.

JOE LONSDALE: This is a big thing. I think we wasted a ton of money in Afghanistan. I think we had stupid adventures. I was very for our technology helping fight and kill thousands of terrorists. I was very for eliminating the bad guys. I was very against putting trillions of dollars into these areas to try to rebuild a broken civilization, which is not our job to do. We should have been building our civilization. I’m very pro-America, but part of being pro-America is fighting these wars without sacrificing American lives and keeping people very scared of us so that we don’t have to fight, and they do what they’re supposed to do. We have a bunch of companies right now that are kind of replacing the way the primes work. And so, for example, in the water, you want to have thousands or tens of thousands of smart and enabled autonomous weaponized vessels of different sorts that coordinate together. That’s what you want. And then, on the land, you know, we sent 31 tanks to Ukraine, and 20 destroyed.

For the same cost or even less, you could have sent 10,000 tiny little vehicles that are smart, have weapons on the fight, and are coordinated. There are all these new ways you can use mass production with advanced manufacturing and AI, and you don’t put American lives at risk. You turn the bad guys, and for much cheaper, you can do it.

Then the other one is really cool, just mentioned, we have the enemy also has, like, you see China where they fly hundreds of thousands of drones. It’s crazy. So we have something called Epirus, which is now deployed. It’s like a force field, but it’s a burst of microwave radiation in a cone. We can turn off hundreds of drones per shot from miles away.

Keep reading

Mitt Romney’s AI Bill Seeks to Ban Anonymous Cloud Access, Raising Privacy Concerns

A new Senate bill, the Preserving American Dominance in AI Act of 2024 (S.5616), has reignited debate over its provisions, particularly its push to impose “know-your-customer” (KYC) rules on cloud service providers and data centers. Critics warn that these measures could lead to sweeping surveillance practices and unprecedented invasions of privacy under the guise of regulating artificial intelligence.

We obtained a copy of the bill for you here.

KYC regulations require businesses to verify the identities of their users, and when applied to digital platforms, they could significantly impact privacy by linking individuals’ online activities to their real-world identities, effectively eliminating anonymity and enabling intrusive surveillance.

Keep reading

WHO Expands “Misinformation Management” Efforts with “Social Listening”

The UN’s World Health Organization (WHO) is not the only entity engaging globally (the Gates Foundation comes to mind as another) that likes to turn to developing, or small and often functionally dependent states to “test” or “check” some of the key elements of its policies.

The pandemic put the WHO center-stage, and in many ways influenced the UN’s clear change of trajectory from its true purpose to assisting governments globally in policing speech and surveilling their populations.

The WHO is comfortable in conflating health-focused issues (its actual mandate), with what it presents as threats linked to “disinformation” and “AI.”

Keep reading

US Report Reveals Push to Weaponize AI for Censorship

For a while now, emerging AI has been treated by the Biden-Harris administration, but also the EU, the UK, Canada, the UN, etc., as a scourge that powers dangerous forms of “disinformation” – and should be dealt with accordingly.

According to those governments/entities, the only “positive use” for AI as far as social media and online discourse go, would be to power more effective censorship (“moderation”).

A new report from the US House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government puts the emphasis on the push to use this technology for censorship as the explanation for the often disproportionate alarm over its role in “disinformation.”

We obtained a copy of the report for you here.

The interim report’s name spells out its authors’ views on this quite clearly: the document is called, “Censorship’s Next Frontier: The Federal Government’s Attempt to Control Artificial Intelligence to Suppress Free Speech.”

Keep reading

Autonomous AI Poses Existential Threat – And It’s Almost Here: Former Google CEO

Former Google CEO Eric Schmidt said that autonomous artificial intelligence (AI) is coming—and that it could pose an existential threat to humanity.

We’re soon going to be able to have computers running on their own, deciding what they want to do,” Schmidt, who has long raised alarm about both the dangers and the benefits AI poses to humanity, said during a Dec. 15 appearance on ABC’s “This Week.”

“That’s a dangerous point: When the system can self improve, we need to seriously think about unplugging it,” Schmidt said.

Schmidt is far from the first tech leader to raise these concerns.

The rise of consumer AI products like ChatGPT has been unprecedented in the past two years, with major improvements to the language-based model. Other AI models have become increasingly adept at creating visual art, photographs, and full-length videos that are nearly indistinguishable from reality in many cases.

For some, the technology calls to mind the “Terminator” series, which centers on a dystopian future where AI takes over the planet, leading to apocalyptic results.

For all the fears that ChatGPT and similar platforms have raised, consumer AI services available today still fall into a category experts would consider “dumb AI.” These AI are trained on a massive set of data, but lack consciousness, sentience, or the ability to behave autonomously.

Schmidt and other experts are not particularly worried about these systems.

Rather, they’re concerned about more advanced AI, known in the tech world as “artificial general intelligence” (AGI), describing far more complex AI systems that could have sentience and, by extension, could develop conscious motives independent from and potentially dangerous to human interests.

Schmidt said no such systems exist today yet, and we’re rapidly moving toward a new, in-between type of AI: one lacking the sentience that would define an AGI, and still able to act autonomously in fields like research and weaponry.

I’ve done this for 50 years. I’ve never seen innovation at this scale,” Schmidt said of the rapid developments in AI complexity.

Schmidt said that more developed AI would have many benefits to humanity—and could have just as many “bad things like weapons and cyber attacks.”

Keep reading

Ghosted by ChatGPT: How I was First Defamed and then Deleted by AI

It is not every day that you achieve the status of “he-who-must-not-be-named.” But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT, according to the New York TimesWall Street Journal, and other publications.

For more than a year, people who tried to research my name online using ChatGPT were met with an immediate error warning.

It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.

Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.

The common thread appears to be the false stories generated about us all by ChatGPT in the past. The company appears to have corrected the problem not by erasing the error but erasing the individuals in question.

Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.

My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT. As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).

In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”

Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.

And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.

As with Voldemort, even death is no guarantee of closure. Professor Mayer was a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, who passed away last year. And ChatGPT reportedly will still not utter his name.

Before his death, his name was used by a Chechen rebel on a terror watch list. The result was a snowballing association of the professor, who found himself facing travel and communication restrictions.

Hood, the Australian mayor, was so frustrated with a false AI-generated narrative that he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.

Keep reading

Suspicious OpenAI Whistleblower Death Ruled Suicide

The November death of former OpenAI researcher-turned-whistleblower, 26-year-old Suchir Balaji was ruled a suicide, the San Jose Mercury News reports.

According to the medical examiner, there was no foul play in Balaji’s Nov. 26 death in his San Francisco apartment.

Balaji had publicly accused OpenAI of violating US copyright law with ChatGPT. According to the NY Times;

He came to the conclusion that OpenAI’s use of copyrighted data violated the law and that technologies like ChatGPT were damaging the internet.

In August, he left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.

If you believe what I believe, you have to just leave the company,” he said during a recent series of interviews with The New York Times.

The Times named Balaji a person with “unique and relevant documents” that the outlet would use in their ongoing litigation with OpenAI – which claims that the company, and its partner Microsoft, are using the world of reporters and editors without permission.

In an October post to X, Balaji wrote: “I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them. I initially didn’t know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they’re trained on. I’ve written up the more detailed reasons for why I believe this in my post. Obviously, I’m not a lawyer, but I still feel like it’s important for even non-lawyers to understand the law — both the letter of it, and also why it’s actually there in the first place.”

He then made a lengthy post on his personal blog outlining why he thinks OpenAI violates Fair Use. Four weeks later he was dead.

Balaji, who grew up in Cupertino, California, studied computer science at UC Berkeley – telling the Times that he wanted to use AI to help society.

“I thought we could invent some kind of scientist that could help solve them,” he told the outlet.

Keep reading