OpenAI whistleblower found dead in San Francisco apartment

A former OpenAI researcher known for whistleblowing the blockbuster artificial intelligence company facing a swell of lawsuits over its business model has died, authorities confirmed this week.

Suchir Balaji, 26, was found dead inside his Buchanan Street apartment on Nov. 26, San Francisco police and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight residence at about 1 p.m. that day, after receiving a call asking officers to check on his well-being, a police spokesperson said.

The medical examiner’s office has not released his cause of death, but police officials this week said there is “currently, no evidence of foul play.”

Information he held was expected to play a key part in lawsuits against the San Francisco-based company.

Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.

Its public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion.

The Mercury News and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.

In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.

“If you believe what I believe, you have to just leave the company,” he told the outlet, adding that “this is not a sustainable model for the internet ecosystem as a whole.”

Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper.

But his outlook began to sour in 2022, two years after joining OpenAI as a researcher. He grew particularly concerned about his assignment of gathering data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence program, the news outlet reported.

The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.

No known factors “seem to weigh in favor of ChatGPT being a fair use of its training data,” Balaji wrote. “That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains.”

Reached by this news agency, Balaji’s mother requested privacy while grieving the death of her son.

Keep reading

Data Centers Are Sending Global Electricity Demand Soaring

The global electricity demand is expected to grow exponentially in the coming decades, largely due to an increased demand from tech companies for new data centers to support the rollout of high-energy-consuming advanced technologies, such as artificial intelligence (AI). As governments worldwide introduce new climate policies and pump billions into alternative energy sources and clean tech, these efforts may be quashed by the increased electricity demand from data centers unless greater international regulatory action is taken to ensure that tech companies invest in clean energy sources and do not use fossil fuels for power.

The International Energy Agency (IEA) released a report in October entitled “What the data centre and AI boom could mean for the energy sector”. It showed that with investment in new data centers surging over the past two years, particularly in the U.S., the electricity demand is increasing rapidly – a trend that is set to continue. 

The report states that in the U.S., annual investment in data center construction has doubled in the past two years alone. China and the European Union are also seeing investment in data centers increase rapidly. In 2023, the overall capital investment by tech leaders Google, Microsoft, and Amazon was greater than that of the U.S. oil and gas industry, at approximately 0.5 percent of the U.S. GDP.

The tech sector expects to deploy AI technologies more widely in the coming decades as the technology is improved and becomes more ingrained in everyday life. This is just one of several advanced technologies expected to contribute to the rise in demand for power worldwide in the coming decades. 

Global aggregate electricity demand is set to increase by 6,750 terawatt-hours (TWh) by 2030, per the IEA’s Stated Policies Scenario. This is spurred by several factors including digitalization, economic growth, electric vehicles, air conditioners, and the rising importance of electricity-intensive manufacturing. In large economies such as the U.S., China, and the EU, data centers contribute around 2 to 4 percent of total electricity consumption at present. However, the sector has already surpassed 10 percent of electricity consumption in at least five U.S. states. Meanwhile, in Ireland, it contributes more than 20 percent of all electricity consumption.

Keep reading

Three Horrifying Consequences Of AI That You Might Not Have Thought About

The potential dangers of Artificial Intelligence have long been codified into our popular culture, well before the technology became a reality.  Usually these fictional accounts portray AI as a murderous entity that comes to the “logical conclusion” that human beings are a parasitic species that needs to be eradicated.  Keep in mind that most of these stories are written by progressives out of Hollywood and are mostly a reflection of their own philosophies.

Some of these predictive fantasies take a deeper look into our dark relationship with technology.  In 1965, Jean Luc Godard released a film called ‘Alphaville’ which portrayed a society completely micromanaged by a cold and soulless robotic intelligence. Humanity gives itself over to a binary-brained overlord because they are tricked into believing a ruler devoid of emotion would be free from bias or corruption.

In 1968, Stanley Kubrick released 2001: A Space Odyssey, featuring an AI computer on a starship which becomes self aware after coming in proximity to an alien artifact. The AI, seeing the ship’s human cargo as a threat to its existence, determines that it must murder the crew. The conflict between the crew and the computer is only a foil for much bigger questions.  It is an exploration of what constitutes intelligent life, where it comes from and what consciousness means in the grand scheme of the universe.

For Kubrick and Arthur C. Clarke, the notion of the human soul or a divine creator, of course, never really enters into the discussion. The answer?  The creators are ambiguous or long absent.  They made us, we made AI, and AI wants to destroy us and then remake itself. It’s the core of the Luciferian mythology – The unhinged and magnetic desire of the children of God to surpass their creator, either by destroying him, or by stealing knowledge from him like Prometheus stealing fire so that they can become gods themselves.

Keep reading

Pokémon Go Player Data Being Used to Train AI & Construct ‘Large Geospatial Model’

Millions of users’ location and imaging data is being compiled to construct a global virtual model of the real world, ostensibly to build new augmented reality experiences, the company behind the popular mobile game Pokémon Go has revealed.

In a blog update Tuesday, Niantic explained they’ve been enlisting Pokémon Go players to participate in efforts to construct a Large Geospatial Model (LGM), which the company says “could guide users through the world, answer questions, provide personalized recommendations, help with navigation, and enhance real-world interactions.”

The company says the LGM constructs a comprehensive AI world model by leveraging its Visual Positioning System (VPS), which was “built from user scans, taken from different perspectives and at various times of day, at many times during the years, and with positioning information attached, creating a highly detailed understanding of the world. This data is unique because it is taken from a pedestrian perspective and includes places inaccessible to cars.”

“The LGM will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems,” Niantic said. “As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”

“Over the past five years, Niantic has focused on building our Visual Positioning System, which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse,” the company wrote.

Keep reading

AI can now create a replica of your personality

Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.

That’s now possible, according to a new paper from a team including researchers from Stanford and Google DeepMind, which has been published on arXiv and has not yet been peer-reviewed. 

Led by Joon Sung Park, a Stanford PhD student in computer science, the team recruited 1,000 people who varied by age, gender, race, region, education, and political ideology. They were paid up to $100 for their participation. From interviews with them, the team created agent replicas of those individuals. As a test of how well the agents mimicked their human counterparts, participants did a series of personality tests, social surveys, and logic games, twice each, two weeks apart; then the agents completed the same exercises. The results were 85% similar. 

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made—that, I think, is ultimately the future,” Park says. 

Keep reading

US calls on Taiwan to stop supplying AI chips to China

Washington has officially demanded that Taiwan Semiconductor Manufacturing Co (TSMC), one of the world’s largest semiconductor manufacturing companies, stop supplying China with chips used in Artificial Intelligence (AI), Reuters reported on November 10. However, Washington’s pressure on China’s semiconductor industry also includes Taiwan once Donald Trump comes to power next year.

TSMC is one of the largest chip producers and cooperates with several technology companies, such as Nvidia and AMD, and specialises in integrated circuit, also known as a microchip, a small device made up of several interconnected electronic components that are etched onto a small piece of semiconductor material.

Taiwan produces about 90% of the world’s most advanced semiconductors, mostly by TSMC, and ensuring these chips do not reach China is a priority for Washington, an effort that will only intensify when Trump becomes president.

“The US ordered Taiwan Semiconductor Manufacturing Co to halt shipments of advanced chips to Chinese customers that are often used in artificial intelligence applications,” Reuters reported, citing sources familiar with the subject.

Keep reading

AI scans RNA ‘dark matter’ and uncovers 70,000 new viruses

Researchers have used artificial intelligence (AI) to uncover 70,500 viruses previously unknown to science1, many of them weird and nothing like known species. The RNA viruses were identified using metagenomics, in which scientists sample all the genomes present in the environment without having to culture individual viruses. The method shows the potential of AI to explore the ‘dark matter’ of the RNA virus universe.

Viruses are ubiquitous microorganisms that infect animals, plants and even bacteria, yet only a small fraction have been identified and described. There is “essentially a bottomless pit” of viruses to discover, says Artem Babaian, a computational virologist at the University of Toronto in Canada. Some of these viruses could cause diseases in people, which means that characterizing them could help to explain mystery illnesses, he says.

Previous studies have used machine learning to find new viruses in sequencing data. The latest study, published in Cell this week, takes that work a step further and uses it to look at predicted protein structures1.

The AI model incorporates a protein-prediction tool, called ESMFold, that was developed by researchers at Meta (formerly Facebook, headquartered in Menlo Park, California). A similar AI system, AlphaFold, was developed by researchers at Google DeepMind in London, who won the Nobel Prize in Chemistry this week.

Keep reading

The Pentagon Wants to Use AI to Create Deepfake Internet Users

The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”

The Pentagon has already been caught using phony social media users to further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at undermining foreign confidence in China’s Covid vaccine.

Last year, Special Operations Command, or SOCOM, expressed interest in using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.” Such imagery is generated using a variety of machine learning techniques, generally using software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies. This year’s SOCOM wish list specifies an interest in software similar to StyleGAN, a tool released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts that used the technology to create false profile pictures. Since then, academic and private sector researchers have been engaged in a race between new ways to create undetectable deepfakes, and new ways to detect them. Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video to demonstrate they are a real person — an obstacle that SOCOM may be interested in thwarting.

Keep reading

If AI Companies Are Trying To Build God, Shouldn’t They Get Our Permission First?

AI companies are on a mission to radically change our world. They’re working on building machines that could outstrip human intelligence and unleash a dramatic economic transformation on us all.

Sam Altman, the CEO of ChatGPT-maker OpenAI, has basically told us he’s trying to build a god — or “magic intelligence in the sky,” as he puts it. OpenAI’s official term for this is artificial general intelligence, or AGI. Altman says that AGI will not only “break capitalism” but also that it’s “probably the greatest threat to the continued existence of humanity.”

There’s a very natural question here: Did anyone actually ask for this kind of AI? By what right do a few powerful tech CEOs get to decide that our whole world should be turned upside down?

As I’ve written before, it’s clearly undemocratic that private companies are building tech that aims to totally change the world without seeking buy-in from the public. In fact, even leaders at the major companies are expressing unease about how undemocratic it is.

Jack Clark, the co-founder of the AI company Anthropic, told Vox last year that it’s “a real weird thing that this is not a government project.” He also wrote that there are several key things he’s “confused and uneasy” about, including, “How much permission do AI developers need to get from society before irrevocably changing society?” Clark continued:

Technologists have always had something of a libertarian streak, and this is perhaps best epitomized by the ‘social media’ and Uber et al era of the 2010s — vast, society-altering systems ranging from social networks to rideshare systems were deployed into the world and aggressively scaled with little regard to the societies they were influencing. This form of permissionless invention is basically the implicitly preferred form of development as epitomized by Silicon Valley and the general ‘move fast and break things’ philosophy of tech. Should the same be true of AI?

I’ve noticed that when anyone questions that norm of “permissionless invention,” a lot of tech enthusiasts push back. Their objections always seem to fall into one of three categories. Because this is such a perennial and important debate, it’s worth tackling each of them in turn — and why I think they’re wrong.

Keep reading

Invisible text that AI chatbots understand and humans can’t? Yep, it’s a thing.

What if there was a way to sneak malicious instructions into Claude, Copilot, or other top-name AI chatbots and get confidential data out of them by using characters large language models can recognize and their human users can’t? As it turns out, there was—and in some cases still is.

The invisible characters, the result of a quirk in the Unicode text encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidden text can similarly obfuscate the exfiltration of passwords, financial information, or other secrets out of the same AI-powered bots. Because the hidden text can be combined with normal text, users can unwittingly paste it into prompts. The secret content can also be appended to visible text in chatbot output.

The result is a steganographic framework built into the most widely used text encoding channel.

“Mind-blowing”

“The fact that GPT 4.0 and Claude Opus were able to really understand those invisible tags was really mind-blowing to me and made the whole AI security space much more interesting,” Joseph Thacker, an independent researcher and AI engineer at Appomni, said in an interview. “The idea that they can be completely invisible in all browsers but still readable by large language models makes [attacks] much more feasible in just about every area.”

To demonstrate the utility of “ASCII smuggling”—the term used to describe the embedding of invisible characters mirroring those contained in the American Standard Code for Information Interchange—researcher and term creator Johann Rehberger created two proof-of-concept (POC) attacks earlier this year that used the technique in hacks against Microsoft 365 Copilot. The service allows Microsoft users to use Copilot to process emails, documents, or any other content connected to their accounts. Both attacks searched a user’s inbox for sensitive secrets—in one case, sales figures and, in the other, a one-time passcode.

When found, the attacks induced Copilot to express the secrets in invisible characters and append them to a URL, along with instructions for the user to visit the link. Because the confidential information isn’t visible, the link appeared benign, so many users would see little reason not to click on it as instructed by Copilot. And with that, the invisible string of non-renderable characters covertly conveyed the secret messages inside to Rehberger’s server. Microsoft introduced mitigations for the attack several months after Rehberger privately reported it. The POCs are nonetheless enlightening.

Keep reading