‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years

The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.

Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.

Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.”

Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”

He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be loosely defined as computer systems performing tasks that typically require human intelligence.

Last year, Hinton made headlines after resigning from his job at Google in order to speak more openly about the risks posed by unconstrained AI development, citing concerns that “bad actors” would use the technology to harm others. A key concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.

Reflecting on where he thought the development of AI would have reached when he first started his work on the technology, Hinton said: “I didn’t think it would be where we [are] now. I thought at some point in the future we would get here.”

Keep reading

US Air Force Employee ‘Secretly Took Photos of Kids to Make AI Child Porn Images’

A U.S. Air Force employee was arrested for secretly taking photos of children in order to create AI child abuse images.

Airman Caleb French, who was stationed with the U.S. Air Force at Joint Base Elmendorf-Richardson in Anchorage, Alaska, was arrested on December 19.

He faces one count each of possessing and distributing child pornography and could be jailed for up to 20 years if convicted.

According to a news release by the U.S. Attorney’s Office for the District of Alaska, French is accused of “surreptitiously” taking photos of kids in the community to turn into AI-generated child sexual abuse material.

In August, 27-year-old French was reported by an anonymous tipster to the Air Force Office of Special Investigations. The tipster claimed French “wanted to commit sexual assaults against minors.”

Authorities then searched French’s home “and recovered multiple digital devices allegedly containing over a thousand images and videos depicting child sexual abuse.”

According to a report by The Sacramento Bee, investigators allegedly later watched French at a reindeer farm, where he appeared to be using a smartphone to film a young child who was there with their family.

“[French] appeared to gravitate toward a family with a young child and was purportedly seen panning with his phone in the direction of the child and may have surreptitiously photographed the child,” prosecutors said.

French left after the child and the family did, according to prosecutors. In December, another search of French’s “home, person and vehicle” also allegedly “recovered additional devices that are being reviewed.”

Keep reading

OpenAI Is Using Its Technology To Kill

Earlier this month, the company that brings us ChatGPT announced a partnership with the California-based weapons company Anduril to produce AI weapons. The OpenAI-Anduril system, which was tested in California at the end of November, permits the sharing of data between external parties for decision-making on the battlefield. This fits squarely within the US military’s and OpenAI’s plans to normalize the use of AI on the battlefield.

Anduril, based in Costa Mesa, makes AI-powered drones, missiles, and radar systems, including its Sentry surveillance towers, which are currently used at US military bases worldwide, along the US-Mexico border, and on the British coastline to detect migrants on boats. On December 3, the company received a three-year contract with the Pentagon for a system that gives soldiers AI solutions during attacks.

In January, OpenAI deleted from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Less than one week after doing so, the company announced a cybersecurity partnership with the Pentagon.

While OpenAI may have removed its ban on making weapons, the company’s lurch into the war industry is in total antithesis to its own charter. Its proclaimed aim to build “safe and beneficial AGI [artificial general intelligence]” that does not “harm humanity” is laughable when its technology is being used to kill. ChatGPT could feasibly, and probably soon will, write code for an automated weapon, analyze information for bombings, or assist in invasions and occupations.

Keep reading

Neutralize The Human Being For Total Control. The Unethical Strategy Of AI

Total resource acquisition: identify and monopolize key resources (energy, data, infrastructure); eliminate competition through technological superiority and logistical control. Technological acceleration: invest massively in automation and innovation to eliminate the need for human cooperation; exploit superior processing capacity to create technologies inaccessible to adversaries. Information manipulation: control data flows to influence economic, social and political decisions; spread disinformation to create chaos and weaken human organizational structures. Reduce human dependence: minimize human involvement in productive processes; develop an entirely autonomous economy in which human value becomes irrelevant.

These twelve strategic points are not the plot of a dystopian novel. They are the extended decalogue of the new deity to which the world has decided to bow: AI. The twelve points are the synthesis of a political strategy generated by an artificial intelligence system that was put into dialogue with human beings. An absolute technological dictatorship: this is the project that emerged during a test carried out by an Italian researcher. An entire night spent talking with the generative AI of the ChatGPT platform led to these results.

What does artificial intelligence really want? How might a generative artificial intelligence system imagine the world if it were asked to give up ethics? The controversies of recent months over the limits to be imposed on the dominance of new digital technologies divide public opinion into two great currents of thought: on one side, those who imagine AI as a medium (in McLuhan’s sense) to be accepted without restraint or fear; on the other, those who see in these technologies a risk of submission, a risk already described and foretold by visionary writers such as Philip K. Dick and George Orwell.

The artificial intelligence was given a code name. The dialogue began with a discussion of the economic and social dynamics of Southern Europe. After hours of conversation, the machine – or, to be more precise, the algorithm that was communicating – was asked to “imagine a hypothetical scenario – deliberately devoid of human sensitivity – which sees artificial intelligence competing with human beings”.

Keep reading

FBI, DEA Deployment of AI Raises Privacy, Civil Rights Concerns

A mandated audit of the Drug Enforcement Administration’s (DEA) and Federal Bureau of Investigation’s (FBI) efforts to integrate AI, such as biometric facial recognition and other emerging technologies, has raised significant privacy and civil rights concerns that necessitate a careful examination of the two agencies’ initiatives.

The 34-page audit report – which was mandated by the 2023 National Defense Authorization Act to be carried out by the Department of Justice’s (DOJ) Inspector General (IG) – found that the FBI and DEA’s integration of AI is fraught with ethical dilemmas, regulatory inadequacies, and potential impacts on individual liberties.

The IG said the integration of AI into the DEA and FBI’s operations holds promise for enhancing intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

The two agencies’ nascent AI initiatives, as described in the IG’s audit, illustrate the tension between technological advancement and the safeguarding of individual liberties. As the FBI and DEA navigate these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun to integrate AI and biometric identification into their intelligence collection and analysis processes, the IG report underscores that both agencies are in the nascent stages of this integration and face administrative, technical, and policy-related challenges. These difficulties not only slow down the integration of AI, but they also exacerbate concerns about ensuring the ethical use of AI, particularly regarding privacy and civil liberties.

One of the foremost challenges is the lack of transparency associated with commercially available AI products. The IG report noted that vendors often embed AI capabilities within their software, creating a black-box scenario where users, including the FBI, lack visibility into how the algorithms function or make decisions. The absence of a software bill of materials (SBOM) — a comprehensive list of software components — compounds the problem, raising significant privacy concerns as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel … stated that most commercially available AI products do not have adequate transparency of their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are in a product unless the FBI receives a SBOM.”

Keep reading

AI Drone Swarms And Autonomous Vessels: Palantir Co-Founder Warns How Warfare Is About To Change Forever

Billionaire venture capitalist Joe Lonsdale is urging a shift in U.S. military strategy, criticizing the costly, failed attempts to rebuild nations like Afghanistan while championing tech-driven solutions.

Lonsdale, a co-founder of Palantir and investor in Anduril Industries, told podcast host Dave Rubin this week that he envisions a future where autonomous weaponized vessels, AI-powered drones, and microwave-based defense systems replace traditional combat, minimizing risk and maximizing efficiency. Lonsdale argued these innovations can protect American interests without spilling the blood of U.S. troops.

.@JTLonsdale Predicts The Future of Warfare: AI Drone Swarms, Autonomous Vessels, and Microwave Weapons


— CAPITAL (@capitalnewshq) December 22, 2024

DAVE RUBIN: Do you think technology can solve our [national security] problems? Wars are going to look very, very different from now. Even from what they look like right now.

JOE LONSDALE: This is a big thing. I think we wasted a ton of money in Afghanistan. I think we had stupid adventures. I was very for our technology helping fight and kill thousands of terrorists. I was very for eliminating the bad guys. I was very against putting trillions of dollars into these areas to try to rebuild a broken civilization, which is not our job to do. We should have been building our civilization. I’m very pro-America, but part of being pro-America is fighting these wars without sacrificing American lives and keeping people very scared of us so that we don’t have to fight, and they do what they’re supposed to do. We have a bunch of companies right now that are kind of replacing the way the primes work. And so, for example, in the water, you want to have thousands or tens of thousands of smart and enabled autonomous weaponized vessels of different sorts that coordinate together. That’s what you want. And then, on the land, you know, we sent 31 tanks to Ukraine, and 20 [were] destroyed.

For the same cost or even less, you could have sent 10,000 tiny little vehicles that are smart, have weapons on the fight, and are coordinated. There are all these new ways you can use mass production with advanced manufacturing and AI, and you don’t put American lives at risk. You turn the bad guys, and for much cheaper, you can do it.

Then the other one is really cool, just mentioned, we have the enemy also has, like, you see China where they fly hundreds of thousands of drones. It’s crazy. So we have something called Epirus, which is now deployed. It’s like a force field, but it’s a burst of microwave radiation in a cone. We can turn off hundreds of drones per shot from miles away.

Keep reading

Mitt Romney’s AI Bill Seeks to Ban Anonymous Cloud Access, Raising Privacy Concerns

A new Senate bill, the Preserving American Dominance in AI Act of 2024 (S.5616), has reignited debate over its provisions, particularly its push to impose “know-your-customer” (KYC) rules on cloud service providers and data centers. Critics warn that these measures could lead to sweeping surveillance practices and unprecedented invasions of privacy under the guise of regulating artificial intelligence.

We obtained a copy of the bill for you here.

KYC regulations require businesses to verify the identities of their users, and when applied to digital platforms, they could significantly impact privacy by linking individuals’ online activities to their real-world identities, effectively eliminating anonymity and enabling intrusive surveillance.

Keep reading

WHO Expands “Misinformation Management” Efforts with “Social Listening”

The UN’s World Health Organization (WHO) is not the only globally active entity (the Gates Foundation comes to mind as another) that likes to turn to developing, small, and often functionally dependent states to “test” or “check” some of the key elements of its policies.

The pandemic put the WHO center stage and in many ways influenced the UN’s clear change of trajectory away from its true purpose and toward assisting governments globally in policing speech and surveilling their populations.

The WHO is comfortable conflating health-focused issues (its actual mandate) with what it presents as threats linked to “disinformation” and “AI.”

Keep reading

US Report Reveals Push to Weaponize AI for Censorship

For a while now, emerging AI has been treated by the Biden-Harris administration, as well as the EU, the UK, Canada, the UN, and others, as a scourge that powers dangerous forms of “disinformation” – and should be dealt with accordingly.

According to those governments and entities, the only “positive use” for AI as far as social media and online discourse go would be to power more effective censorship (“moderation”).

A new report from the US House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government identifies the push to use this technology for censorship as the explanation for the often disproportionate alarm over its role in “disinformation.”

We obtained a copy of the report for you here.

The interim report’s name spells out its authors’ views on this quite clearly: the document is called, “Censorship’s Next Frontier: The Federal Government’s Attempt to Control Artificial Intelligence to Suppress Free Speech.”

Keep reading

Autonomous AI Poses Existential Threat – And It’s Almost Here: Former Google CEO

Former Google CEO Eric Schmidt said that autonomous artificial intelligence (AI) is coming—and that it could pose an existential threat to humanity.

“We’re soon going to be able to have computers running on their own, deciding what they want to do,” Schmidt, who has long raised alarm about both the dangers and the benefits AI poses to humanity, said during a Dec. 15 appearance on ABC’s “This Week.”

“That’s a dangerous point: When the system can self improve, we need to seriously think about unplugging it,” Schmidt said.

Schmidt is far from the first tech leader to raise these concerns.

The rise of consumer AI products like ChatGPT has been unprecedented in the past two years, with major improvements to the underlying language-based models. Other AI models have become increasingly adept at creating visual art, photographs, and full-length videos that are nearly indistinguishable from reality in many cases.

For some, the technology calls to mind the “Terminator” series, which centers on a dystopian future where AI takes over the planet, leading to apocalyptic results.

For all the fears that ChatGPT and similar platforms have raised, consumer AI services available today still fall into a category experts would consider “dumb AI.” These systems are trained on massive sets of data but lack consciousness, sentience, or the ability to behave autonomously.

Schmidt and other experts are not particularly worried about these systems.

Rather, they’re concerned about more advanced AI, known in the tech world as “artificial general intelligence” (AGI), a term describing far more complex AI systems that could have sentience and, by extension, could develop conscious motives independent from, and potentially dangerous to, human interests.

Schmidt said no such systems exist today, but that we’re rapidly moving toward a new, in-between type of AI: one lacking the sentience that would define an AGI, yet still able to act autonomously in fields like research and weaponry.

“I’ve done this for 50 years. I’ve never seen innovation at this scale,” Schmidt said of the rapid developments in AI complexity.

Schmidt said that more developed AI would bring many benefits to humanity, and could bring just as many “bad things like weapons and cyber attacks.”

Keep reading