We Absolutely Do Not Need an FDA for AI

I don’t know whether artificial intelligence (AI) will give us a 4-hour workweek, write all of our code and emails, and drive our cars—or whether it will destroy our economy and our grasp on reality, fire our nukes, and then turn us all into gray goo. Possibly all of the above. But I’m supremely confident about one thing: No one else knows either.

November saw the public airing of some very dirty laundry at OpenAI, the artificial intelligence research organization that brought us ChatGPT, when the board abruptly announced the dismissal of CEO Sam Altman. What followed was a nerd game of thrones (assuming robots are nerdier than dragons, a debatable proposition) that consisted of a quick parade of three CEOs and ended with Altman back in charge. The shenanigans highlighted the many axes on which even the best-informed, most plugged-in AI experts disagree. Is AI a big deal, or the biggest deal? Do we owe it to future generations to pump the brakes or to smash the accelerator? Can the general public be trusted with this tech? And—the question that seems to have powered more of the recent upheaval than anything else—who the hell is in charge here?

OpenAI had a somewhat novel corporate structure, in which a nonprofit board tasked with keeping the best interests of humanity in mind sat on top of a for-profit entity with Microsoft as a significant investor. This is what happens when effective altruism and ESG do shrooms together while rolling around in a few billion dollars.

After the events of November, this particular setup doesn’t seem to have been the right approach. Altman and his new board say they’re working on the next iteration of governance alongside the next iteration of their AI chatbot. Meanwhile, OpenAI has numerous competitors—including Google’s Bard, Meta’s Llama, Anthropic’s Claude, and something Elon Musk built in his basement called Grok—several of which differentiate themselves by emphasizing different combinations of safety, profitability, and speed.

Labels for the factions proliferate. The e/acc crowd wants to “build the machine god.” Techno-optimist Marc Andreessen declared in a manifesto that “we believe intelligence is in an upward spiral—first, as more smart people around the world are recruited into the techno-capital machine; second, as people form symbiotic relationships with machines into new cybernetic systems such as companies and networks; third, as Artificial Intelligence ramps up the capabilities of our machines and ourselves.” Meanwhile, Snoop Dogg channeled AI pioneer-turned-doomer Geoffrey Hinton on a recent podcast: “Then I heard the old dude that created AI saying, ‘This is not safe ’cause the AIs got their own mind and these motherfuckers gonna start doing their own shit.’ And I’m like, ‘Is we in a fucking movie right now or what?’” (Hinton told Wired, “Snoop gets it.”) And the safetyists just keep shouting the word guardrails. (Emmett Shear, who was briefly tapped for the OpenAI CEO spot, helpfully tweeted a faction compass for the uninitiated.)

Keep reading

ON THE EVE OF AN A.I. ‘EXTINCTION RISK’? IN 2023, ADVANCEMENTS IN A.I. SIGNALED PROMISE, AND PROMPTED WARNINGS FROM GLOBAL LEADERS

In the field of artificial intelligence, OpenAI, led by CEO Sam Altman, along with the company’s ChatGPT chatbot and its mysterious Q* AI model, has emerged as a leading force within Silicon Valley.

While advancements in AI may hold the potential for positive future developments, OpenAI’s Q* and other AI platforms have also led to concerns among government officials worldwide, who increasingly warn about possible threats to humanity that could arise from such technologies.

2023’S BIGGEST AI UPSET

Among the year’s most significant AI controversies, Altman was removed as CEO of OpenAI in November, only to be reinstated 12 days later amid a drama that left several questions unresolved to this day.

On November 22, just days after Altman’s temporary ousting as CEO of OpenAI, two people with knowledge of the situation told Reuters that “several staff researchers wrote a letter to the board of directors” warning about “a powerful artificial intelligence discovery that they said could threaten humanity.”

In the letter addressed to the board, the researchers highlighted the capabilities and potential risks of the technology. Although the sources did not outline specific safety concerns, some of the researchers who authored the letter had reportedly raised alarms about a new “AI scientist” team, formed from the earlier “Code Gen” and “Math Gen” teams, whose work aimed to upgrade the model’s reasoning abilities and its capacity to carry out scientific tasks.

In a surprising turn two days earlier, on November 20, Microsoft had announced its decision to hire Altman along with Greg Brockman, OpenAI’s president and co-founder, who had resigned in solidarity with Altman. Microsoft said at the time that the duo would run an advanced research lab for the company.

Four days later, Altman was reinstated as CEO of OpenAI after 700 of the company’s employees threatened to quit and join Microsoft. In a recent interview with The Verge, Altman described his initial reaction to the invitation to return following his dismissal, saying it “took me a few minutes to snap out of it and get over the ego and emotions to then be like, ‘Yeah, of course I want to do that.’”

“Obviously, I really loved the company and had poured my life force into this for the last four and a half years full-time, but really longer than that with most of my time. And we’re making such great progress on the mission that I care so much about, the mission of safe and beneficial AGI,” Altman said.

But the AI soap opera doesn’t stop there. On November 30, Altman announced that Microsoft would join OpenAI’s board. The tech giant, which holds a 49 percent ownership stake in the company after a $13 billion investment, would assume a non-voting observer position on the board. Amid all this turmoil, questions remained about what, precisely, the new Q* model is and why it had so many OpenAI researchers concerned.

Keep reading

Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media

Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.

Google already uses elements of AI in the algorithms that automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

The patent’s stated purpose is to identify information operations (IO); the system is then supposed to “predict” whether they contain “misinformation.”

Judging by the explanation Google attached to the filing, the company at first appears to blame its own existence for the proliferation of “misinformation”: the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging go viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that other platforms (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”

Machine learning depends on algorithms being fed large amounts of data, and training comes in two broad flavors, “supervised” and “unsupervised.” In supervised learning the algorithm is given examples that humans have already labeled; in unsupervised learning it is handed huge unlabeled datasets (such as images, or in this case, language) and asked to “learn” on its own to identify what it is “looking” at.

(Reinforcement learning can also be part of the process: in essence, the algorithm is rewarded for becoming increasingly accurate at detecting whatever those who built the system are looking for.)
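To make the distinction concrete, here is a minimal sketch of the difference between supervised classification (learning from human-applied labels) and unsupervised clustering (grouping unlabeled text), assuming scikit-learn and a handful of invented toy posts. It is purely illustrative and has nothing to do with Google’s actual patented system.

```python
# Minimal sketch: supervised vs. unsupervised text modeling with scikit-learn.
# The toy posts and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

posts = [
    "Breaking: miracle cure suppressed by doctors",
    "City council publishes meeting minutes online",
    "Secret plot revealed, share before it gets deleted",
    "Local library extends weekend opening hours",
]
labels = [1, 0, 1, 0]  # 1 = flagged by a human reviewer, 0 = not flagged

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)  # turn raw text into numeric features

# Supervised: learn from the human-provided labels, then score a new post.
classifier = LogisticRegression().fit(X, labels)
new_post = vectorizer.transform(["Shocking report they do not want you to see"])
print("supervised prediction:", classifier.predict(new_post))

# Unsupervised: no labels at all; the model simply groups similar posts.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("unsupervised clusters:", clusters)
```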

The ultimate goal is most likely to make Google’s “misinformation detection,” i.e., its censorship, more efficient while targeting a specific type of data.

Keep reading

MACHINE LEARNING BREAKTHROUGH CREATES FIRST EVER AUTOMATED AI SCIENTIST

Carnegie Mellon University researchers have pioneered an artificially intelligent system, Coscientist, that can autonomously develop scientific research and experimentation. Published in the journal Nature, this non-organic intelligent system, developed by Assistant Professor Gabe Gomes and doctoral students Daniil Boiko and Robert MacKnight, is the first to design, plan, and execute a chemistry experiment autonomously. 

Utilizing large language models (LLMs) like OpenAI’s GPT-4 and Anthropic’s Claude, Coscientist demonstrates an innovative approach to conducting research through a human-machine partnership.

Coscientist’s design enables it to perform various tasks, from planning chemical syntheses using public data to controlling liquid handling instruments and solving optimization problems by analyzing previously collected data. Its architecture consists of multiple modules, including web and documentation search, code execution, and experiment automation, coordinated by a central module called ‘Planner,’ a GPT-4 chat completion instance. This structure allows Coscientist to operate semi-autonomously, integrating multiple data sources and hardware modules for complex scientific tasks.
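As a rough illustration of that architecture, the sketch below shows a planner-style agent loop under stated assumptions: call_llm is a placeholder for a GPT-4 chat-completion call, and the module functions are hypothetical stand-ins for Coscientist’s search, code-execution, and automation modules; none of this is the authors’ actual code.

```python
# Minimal sketch of a planner-style agent loop in the spirit of Coscientist.
# `call_llm` and the module functions are hypothetical placeholders.
from typing import Callable, Dict

def call_llm(prompt: str) -> str:
    """Placeholder for a GPT-4 chat-completion call; returns the next action."""
    return "DONE"  # wire up a real LLM provider here

# Hypothetical stand-ins for the tool modules the Planner can dispatch to.
MODULES: Dict[str, Callable[[str], str]] = {
    "WEB_SEARCH": lambda query: f"search results for {query!r}",
    "DOCS_SEARCH": lambda query: f"instrument documentation matching {query!r}",
    "RUN_CODE": lambda code: f"output of executing {len(code)} characters of code",
    "EXPERIMENT": lambda plan: f"liquid handler executed plan: {plan[:40]}",
}

def planner_loop(goal: str, max_steps: int = 10) -> list:
    """Let the Planner pick modules until it declares the goal done."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        prompt = "\n".join(history) + "\nNext action as MODULE: input, or DONE:"
        action = call_llm(prompt)
        if action.strip() == "DONE":
            break
        name, _, payload = action.partition(":")
        result = MODULES[name.strip()](payload.strip())  # run the chosen module
        history.append(f"{action}\n-> {result}")
    return history

print(planner_loop("plan a synthesis of aspirin from public protocols"))
```

In the real system, the Planner’s choices are grounded in retrieved documentation and executed on actual laboratory hardware; the loop above only captures the coordination pattern the paper describes.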

“We anticipate that intelligent agent systems for autonomous scientific experimentation will bring tremendous discoveries, unforeseen therapies, and new materials,” the research team wrote in the paper. “While we cannot predict what those discoveries will be, we hope to see a new way of conducting research given by the synergetic partnership between humans and machines.”

The system’s capabilities were tested across different tasks, demonstrating its ability to precisely plan and execute experiments. For instance, Coscientist outperformed other models like GPT-3.5 and Falcon 40B in synthesizing compounds, particularly complex ones like ibuprofen and nitroaniline. This highlighted the importance of using advanced LLMs for accurate and efficient experiment planning.

A key aspect of Coscientist is its ability to understand and utilize technical documentation, which has always been a challenge in integrating LLMs with laboratory automation. By interpreting technical documentation, Coscientist enhances its performance in automating experiments. This capability was extended to a more diverse robotic ecosystem, such as the Emerald Cloud Lab (ECL), demonstrating Coscientist’s adaptability and potential for broad scientific application.

Keep reading

SCARY AI CAN LOOK AT PHOTOS AND FIGURE OUT EXACTLY WHERE THEY WERE TAKEN

A trio of Stanford graduate students have made a powerful AI that can guess the location of a wide variety of photos with remarkable accuracy.

Known as Predicting Image Geolocations (PIGEON), the AI is trained on Google Street View and can effortlessly pinpoint where photos were taken, even outwitting some of the best human “geoguessers.”

The developers claim their AI can correctly guess the country where a photo was taken 95 percent of the time, and usually within a startling 25 miles of the real location.

They also note some of its potentially game-changing applications, such as assisting in biological surveys or quickly identifying roads with downed power lines.

For all its very useful potential, though, it sounds like a privacy nightmare waiting to happen, with some experts fearing the abuse of such AI tools in the hands of the wrong people.

“From a privacy point of view, your location can be a very sensitive set of information,” Jay Stanley at the American Civil Liberties Union told NPR.

Keep reading

Google Experiments With “Faster and More Adaptable” Censorship of “Harmful” Content Ahead of 2024 US Elections

In the run-up to the 2020 US presidential election, Big Tech engaged in unprecedented levels of election censorship, most notably by censoring the New York Post’s bombshell Hunter Biden laptop story just a few weeks before voters went to the polls.

And with the 2024 US presidential election less than a year away, both Google and its video sharing platform, YouTube, have confirmed that they plan to censor content they deem to be “harmful” in the run-up to the election.

In its announcement, Google noted that it already censors content that it deems to be “manipulated media” or “hate and harassment” — two broad, subjective terms that have been used by tech giants to justify mass censorship.

However, ahead of 2024, the tech giant has started using large language models (LLMs) to experiment with “building faster and more adaptable” censorship systems that will allow it to “take action even more quickly when new threats emerge.”

Google will also be censoring election-related responses in Bard (its generative AI chatbot) and Search Generative Experience (its generative AI search results).

In addition to these censorship measures, Google will be continuing its long-standing practice of artificially boosting content that it deems to be “authoritative” in Google Search and Google News. While this tactic doesn’t result in the removal of content, it can result in disfavored narratives being suppressed and drowned out by these so-called authoritative sources, which are mostly pre-selected legacy media outlets.

Keep reading

AI gives birth to AI: Scientists say machine intelligence now capable of replicating without humans

Artificial intelligence models can now create smaller AI systems without the help of a human, according to research published Friday by a group of scientists who said the project was the first of its kind.

Essentially, larger AI models — like the kind that power ChatGPT — can create smaller, more specific AI applications that can be used in everyday life, a collaboration between Aizip Inc. and scientists at the Massachusetts Institute of Technology and several University of California campuses demonstrated. Those specialized models could help improve hearing aids, monitor oil pipelines and track endangered species.

“Right now, we’re using bigger models to build the smaller models, like a bigger brother helping [its smaller] brother to improve. That’s the first step towards a bigger job of self-evolving AI,” Yan Sun, CEO of the AI tech company Aizip, told Fox News. “This is the first step in the path to show that AI models can build AI models.”

Keep reading

WORLD’S FIRST SUPERCOMPUTER THAT WILL RIVAL THE HUMAN BRAIN TO BE UNLEASHED IN 2024

Researchers in Australia are developing the world’s first supercomputer capable of simulating networks at a scale comparable to the human brain, a machine they say will be completed next year.

The remarkable supercomputer, which its creators call DeepSouth, is a neuromorphic system designed to mimic the efficiency of biological processes, using hardware that emulates large networks of spiking neurons at an astounding 228 trillion synaptic operations per second.

The human brain is remarkable for its efficiency: it processes the equivalent of a billion billion mathematical operations per second, known as an exaflop, while using only 20 watts of power. Researchers have long hoped to replicate the way our brains process information.

DeepSouth is being developed by a research team at Western Sydney University in Australia. The 228 trillion synaptic operations per second it is expected to perform will not only rival the capabilities of the human brain but could also pave the way toward synthetic brains that exceed the remarkable capabilities ours possess.
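For readers wondering what “spiking neurons” means in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simple spiking model that neuromorphic hardware such as DeepSouth emulates at enormous scale. The constants are arbitrary toy values chosen only for illustration, not DeepSouth’s design parameters.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) spiking neuron.
# All constants are arbitrary toy values for illustration only.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron and return the times (in seconds) it spikes."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating input.
        v += (dt / tau) * (v_rest - v + current)
        if v >= v_threshold:              # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                   # reset after spiking
    return spike_times

# A constant drive above threshold produces a regular spike train.
print(simulate_lif(np.full(1000, 1.5)))
```

A neuromorphic machine is, loosely, hardware that runs enormous numbers of such neurons and their synaptic connections in parallel rather than simulating them one instruction at a time.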

Keep reading

Deepfake Society: 74% of Americans Can’t Tell What’s Real or Fake Online Anymore

Americans believe only 37 percent of the content they see on social media is “real,” or free of edits, filters, and Photoshop. Between AI and “deepfake” videos, a survey of 2,000 adults split evenly by generation reveals that almost three-quarters (74%) can’t even tell what’s real or fake anymore.

Americans are wary of both targeted ads (14%) and influencer content (18%), but a little more than half (52%) find themselves equally likely to question the legitimacy of either one. This goes beyond social media and what’s happening online. The survey finds that while 41 percent have more difficulty determining if an item they’re looking to purchase online is “real” or “a dupe,” another 36 percent find shopping in person to be just as challenging.

While the average respondent will spend about 15 minutes determining if an item is “real,” meaning a genuine model or a knockoff, millennials take it a step further and will spend upwards of 20 minutes trying to decide.

The survey, conducted by OnePoll on behalf of De Beers Group, also reveals that Americans already own a plethora of both real and fake products.

Keep reading

China’s online censors target short videos, artificial intelligence and ‘pessimism’ in latest crackdown

China’s internet censors are targeting short videos that spread “misleading content” as part of its latest online crackdown.

The Cyberspace Administration of China said on Tuesday that it would target short videos that spread rumours about people’s lives or promoted incorrect values such as pessimism – included for the first time – and extremism.

The campaign would also target fake videos generated using artificial intelligence, the watchdog said.

The country’s top censorship body has been running an annual online crackdown known as “Qing Lang”, which means clear and bright, since 2020.

It said this year’s crackdown would benefit people’s mental health and create a healthy space for competition that would help the short video industry develop.

The country’s best known short video platform is Douyin – the Chinese sibling of TikTok – but content is shared on a number of other Chinese social media platforms, including major players such as WeChat and Weibo.

The watchdog said one of the targets of the latest campaign would be content producers who make up stories about social minorities to win public sympathy. It would also crack down on people staging incidents, “making up fake plots and spreading panic”.

Keep reading