Mitt Romney’s AI Bill Seeks to Ban Anonymous Cloud Access, Raising Privacy Concerns

A new Senate bill, the Preserving American Dominance in AI Act of 2024 (S.5616), has reignited debate over its provisions, particularly its push to impose “know-your-customer” (KYC) rules on cloud service providers and data centers. Critics warn that these measures could lead to sweeping surveillance practices and unprecedented invasions of privacy under the guise of regulating artificial intelligence.

We obtained a copy of the bill for you here.

KYC regulations require businesses to verify the identities of their users. Applied to digital platforms, such rules could significantly impact privacy by linking individuals’ online activities to their real-world identities, effectively eliminating anonymity and enabling intrusive surveillance.

Keep reading

WHO Expands “Misinformation Management” Efforts with “Social Listening”

The UN’s World Health Organization (WHO) is not the only globally engaged entity (the Gates Foundation comes to mind as another) that likes to turn to developing, small, and often functionally dependent states to “test” or “check” key elements of its policies.

The pandemic put the WHO center stage and in many ways drove the UN’s clear change of trajectory, away from its true purpose and toward assisting governments globally in policing speech and surveilling their populations.

The WHO is comfortable conflating health-focused issues (its actual mandate) with what it presents as threats linked to “disinformation” and “AI.”

Keep reading

US Report Reveals Push to Weaponize AI for Censorship

For a while now, the Biden-Harris administration – but also the EU, the UK, Canada, the UN, etc. – has treated emerging AI as a scourge that powers dangerous forms of “disinformation,” to be dealt with accordingly.

According to those governments/entities, the only “positive use” for AI, as far as social media and online discourse go, would be to power more effective censorship (“moderation”).

A new report from the US House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government identifies the push to use this technology for censorship as the explanation for the often disproportionate alarm over its role in “disinformation.”

We obtained a copy of the report for you here.

The interim report’s name spells out its authors’ views on this quite clearly: the document is called “Censorship’s Next Frontier: The Federal Government’s Attempt to Control Artificial Intelligence to Suppress Free Speech.”

Keep reading

Autonomous AI Poses Existential Threat – And It’s Almost Here: Former Google CEO

Former Google CEO Eric Schmidt said that autonomous artificial intelligence (AI) is coming—and that it could pose an existential threat to humanity.

“We’re soon going to be able to have computers running on their own, deciding what they want to do,” Schmidt, who has long raised alarm about both the dangers and the benefits AI poses to humanity, said during a Dec. 15 appearance on ABC’s “This Week.”

“That’s a dangerous point: When the system can self-improve, we need to seriously think about unplugging it,” Schmidt said.

Schmidt is far from the first tech leader to raise these concerns.

The rise of consumer AI products like ChatGPT has been unprecedented in the past two years, with major improvements to the underlying language models. Other AI models have become increasingly adept at creating visual art, photographs, and full-length videos that are, in many cases, nearly indistinguishable from reality.

For some, the technology calls to mind the “Terminator” series, which centers on a dystopian future where AI takes over the planet, leading to apocalyptic results.

For all the fears that ChatGPT and similar platforms have raised, the consumer AI services available today still fall into a category experts would consider “dumb AI”: systems trained on massive sets of data but lacking consciousness, sentience, or the ability to behave autonomously.

Schmidt and other experts are not particularly worried about these systems.

Rather, they’re concerned about more advanced AI, known in the tech world as “artificial general intelligence” (AGI): far more complex systems that could have sentience and, by extension, could develop conscious motives independent of, and potentially dangerous to, human interests.

Schmidt said no such systems exist today, but that we’re rapidly moving toward a new, in-between type of AI: one lacking the sentience that would define an AGI, yet still able to act autonomously in fields like research and weaponry.

“I’ve done this for 50 years. I’ve never seen innovation at this scale,” Schmidt said of the rapid developments in AI complexity.

Schmidt said that more developed AI would bring many benefits to humanity – and could bring just as many “bad things like weapons and cyber attacks.”

Keep reading

Ghosted by ChatGPT: How I Was First Defamed and Then Deleted by AI

It is not every day that you achieve the status of “he-who-must-not-be-named.” But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT, according to the New York Times, the Wall Street Journal, and other publications.

For more than a year, people who tried to research my name online using ChatGPT were met with an immediate error warning.

It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.

Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.

The common thread appears to be the false stories ChatGPT generated about us all in the past. The company appears to have corrected the problem not by erasing the error, but by erasing the individuals in question.

Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.

My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT. As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).

In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”

Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.

And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.

As with Voldemort, even death is no guarantee of closure. Professor Mayer was a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester who passed away last year – and ChatGPT reportedly still will not utter his name.

Before his death, his name was used as an alias by a Chechen rebel on a terror watch list. The result was a snowballing association between the two, and the professor found himself facing travel and communication restrictions.

Hood, the Australian mayor, was so frustrated with a false AI-generated narrative that he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.

Keep reading

Suspicious OpenAI Whistleblower Death Ruled Suicide

The November death of former OpenAI researcher turned whistleblower Suchir Balaji, 26, was ruled a suicide, the San Jose Mercury News reports.

According to the medical examiner, there was no foul play in Balaji’s Nov. 26 death in his San Francisco apartment.

Balaji had publicly accused OpenAI of violating US copyright law with ChatGPT. According to the NY Times:

He came to the conclusion that OpenAI’s use of copyrighted data violated the law and that technologies like ChatGPT were damaging the internet.

In August, he left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.

“If you believe what I believe, you have to just leave the company,” he said during a recent series of interviews with The New York Times.

The Times named Balaji as a person with “unique and relevant documents” that the outlet would use in its ongoing litigation with OpenAI – litigation claiming that the company and its partner Microsoft are using the work of reporters and editors without permission.

In an October post to X, Balaji wrote: “I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them. I initially didn’t know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they’re trained on. I’ve written up the more detailed reasons for why I believe this in my post. Obviously, I’m not a lawyer, but I still feel like it’s important for even non-lawyers to understand the law — both the letter of it, and also why it’s actually there in the first place.”

He then made a lengthy post on his personal blog outlining why he thinks OpenAI violates fair use. Four weeks later, he was dead.

Balaji, who grew up in Cupertino, California, studied computer science at UC Berkeley, and told the Times that he wanted to use AI to help society.

“I thought we could invent some kind of scientist that could help solve them,” he told the outlet, referring to problems such as disease and aging.

Keep reading

OpenAI Whistleblower Found Dead in San Francisco Apartment

A former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which is facing a swell of lawsuits over its business model, has died, authorities confirmed this week.

Suchir Balaji, 26, was found dead inside his Buchanan Street apartment on Nov. 26, San Francisco police and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight residence at about 1 p.m. that day, after receiving a call asking officers to check on his well-being, a police spokesperson said.

The medical examiner’s office has not released his cause of death, but police officials this week said there is “currently, no evidence of foul play.”

Information he held was expected to play a key part in lawsuits against the San Francisco-based company.

Balaji’s death comes about a month after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.

Its public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion.

The Mercury News and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.

In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.

“If you believe what I believe, you have to just leave the company,” he told the outlet, adding that “this is not a sustainable model for the internet ecosystem as a whole.”

Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper.

But his outlook began to sour in 2022, two years after joining OpenAI as a researcher. He grew particularly concerned about his assignment: gathering data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence model, the news outlet reported.

The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.

No known factors “seem to weigh in favor of ChatGPT being a fair use of its training data,” Balaji wrote. “That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains.”

Reached by this news agency, Balaji’s mother requested privacy while grieving the death of her son.

Keep reading

Data Centers Are Sending Global Electricity Demand Soaring

Global electricity demand is expected to grow dramatically in the coming decades, largely due to increased demand from tech companies for new data centers to support the rollout of energy-hungry advanced technologies such as artificial intelligence (AI). Governments worldwide are introducing new climate policies and pumping billions into alternative energy sources and clean tech, but these efforts may be undone by data centers’ rising electricity demand unless greater international regulatory action ensures that tech companies invest in clean energy sources rather than relying on fossil fuels for power.

The International Energy Agency (IEA) released a report in October entitled “What the data centre and AI boom could mean for the energy sector.” It showed that, with investment in new data centers surging over the past two years, particularly in the U.S., electricity demand is rising rapidly – a trend that is set to continue.

The report states that in the U.S., annual investment in data center construction has doubled in the past two years alone. China and the European Union are also seeing investment in data centers increase rapidly. In 2023, the overall capital investment by tech leaders Google, Microsoft, and Amazon was greater than that of the U.S. oil and gas industry, at approximately 0.5 percent of the U.S. GDP.

The tech sector expects to deploy AI more widely in the coming decades as the technology improves and becomes more ingrained in everyday life. AI is just one of several advanced technologies expected to drive up demand for power worldwide.

Global aggregate electricity demand is set to increase by 6,750 terawatt-hours (TWh) by 2030, per the IEA’s Stated Policies Scenario. This is spurred by several factors including digitalization, economic growth, electric vehicles, air conditioners, and the rising importance of electricity-intensive manufacturing. In large economies such as the U.S., China, and the EU, data centers contribute around 2 to 4 percent of total electricity consumption at present. However, the sector has already surpassed 10 percent of electricity consumption in at least five U.S. states. Meanwhile, in Ireland, it contributes more than 20 percent of all electricity consumption.
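
To put those percentages in perspective, here is a minimal back-of-the-envelope sketch in Python. The share figures come from the article above; the absolute annual consumption totals are illustrative round numbers assumed for scale, not taken from the IEA report.

```python
# Rough scale check of the data center shares quoted above.
# The percentage shares are from the article; the national consumption
# totals are ASSUMED round figures for illustration only.
assumed_consumption_twh = {
    "United States": 4000,   # assumed annual total, TWh
    "European Union": 2700,  # assumed annual total, TWh
    "Ireland": 30,           # assumed annual total, TWh
}

data_center_share = {
    "United States": 0.04,   # upper end of the cited 2-4% range
    "European Union": 0.03,  # within the cited 2-4% range
    "Ireland": 0.20,         # "more than 20 percent"
}

for region, total_twh in assumed_consumption_twh.items():
    dc_twh = total_twh * data_center_share[region]
    print(f"{region}: ~{dc_twh:.0f} TWh/yr to data centers "
          f"({data_center_share[region]:.0%} of an assumed {total_twh} TWh)")
```

The point the arithmetic makes is that Ireland’s 20-plus percent share of a small grid still amounts to far fewer absolute terawatt-hours than a low single-digit share of the U.S. grid, which is why the aggregate global numbers climb so quickly.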

Keep reading

Three Horrifying Consequences Of AI That You Might Not Have Thought About

The potential dangers of Artificial Intelligence have long been codified into our popular culture, well before the technology became a reality.  Usually these fictional accounts portray AI as a murderous entity that comes to the “logical conclusion” that human beings are a parasitic species that needs to be eradicated.  Keep in mind that most of these stories are written by progressives out of Hollywood and are mostly a reflection of their own philosophies.

Some of these predictive fantasies take a deeper look into our dark relationship with technology. In 1965, Jean-Luc Godard released a film called ‘Alphaville,’ which portrayed a society completely micromanaged by a cold and soulless robotic intelligence. Humanity gives itself over to a binary-brained overlord because it is tricked into believing that a ruler devoid of emotion would be free from bias or corruption.

In 1968, Stanley Kubrick released 2001: A Space Odyssey, featuring an AI computer on a starship that becomes self-aware after coming into proximity to an alien artifact. The AI, seeing the ship’s human cargo as a threat to its existence, determines that it must murder the crew. The conflict between the crew and the computer is only a foil for much bigger questions: an exploration of what constitutes intelligent life, where it comes from, and what consciousness means in the grand scheme of the universe.

For Kubrick and Arthur C. Clarke, the notion of the human soul or a divine creator, of course, never really enters into the discussion. The answer? The creators are ambiguous or long absent. They made us, we made AI, and AI wants to destroy us and then remake itself. It’s the core of the Luciferian mythology – the unhinged and magnetic desire of the children of God to surpass their creator, either by destroying him or by stealing knowledge from him, like Prometheus stealing fire, so that they can become gods themselves.

Keep reading

Pokémon Go Player Data Being Used to Train AI & Construct ‘Large Geospatial Model’

Millions of users’ location and imaging data is being compiled to construct a global virtual model of the real world, ostensibly to build new augmented reality experiences, the company behind the popular mobile game Pokémon Go has revealed.

In a blog update Tuesday, Niantic explained they’ve been enlisting Pokémon Go players to participate in efforts to construct a Large Geospatial Model (LGM), which the company says “could guide users through the world, answer questions, provide personalized recommendations, help with navigation, and enhance real-world interactions.”

The company says the LGM constructs a comprehensive AI world model by leveraging its Visual Positioning System (VPS), which was “built from user scans, taken from different perspectives and at various times of day, at many times during the years, and with positioning information attached, creating a highly detailed understanding of the world. This data is unique because it is taken from a pedestrian perspective and includes places inaccessible to cars.”

“The LGM will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems,” Niantic said. “As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”

“Over the past five years, Niantic has focused on building our Visual Positioning System, which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse,” the company wrote.
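
Niantic has not published the internals of its VPS, but the description above (a single photo matched against a 3D map to recover position and orientation) corresponds to the classic perspective-n-point (PnP) problem. Below is a minimal, hypothetical sketch of that generic technique using OpenCV; the landmark coordinates, camera intrinsics, and simulated pose are all made-up values for illustration, not Niantic data.

```python
# A toy sketch of single-image visual positioning (the generic PnP technique;
# Niantic's actual VPS pipeline is proprietary). We match 2D points in a
# photo to known 3D map landmarks, then recover the camera pose.
import numpy as np
import cv2

# Assumed pinhole intrinsics for a hypothetical phone camera.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion

# Six made-up 3D landmarks from a pre-built map (world coordinates, meters).
pts3d = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.2], [0.0, 1.0, 0.1],
                  [1.0, 1.0, 0.3], [0.5, 0.5, 1.0], [0.2, 0.8, 0.5]])

# Simulate "the photo": project the landmarks through a known camera pose.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.3, -0.1, 5.0])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, dist)

# Recover the pose from the 2D-3D correspondences alone, which is what a
# visual positioning system does after matching image features to its map.
ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, dist)
R, _ = cv2.Rodrigues(rvec)                # rotation matrix, world -> camera
camera_position = (-R.T @ tvec).ravel()   # camera center in world coordinates
print("estimated camera position:", camera_position)
```

A production system would obtain the 2D-3D matches from feature descriptors computed over millions of user scans rather than from a simulated projection, but the pose-recovery step is the same in spirit.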

Keep reading