“Disinformation Doomsday Scenario”: AI-Powered Propaganda Is The Latest Threat To Humanity (That Must Be Censored)

The Trump-Russia hoax was one of the most notable disinformation operations in modern history. A major component of the hoax was the notion that Russia had influenced the 2016 US election through disinformation, and tricked the American public into electing Donald Trump.

In the fullness of time, of course, it was revealed that the Clinton campaign, the Obama administration, and their allies in corporate media had peddled fabricated information themselves. Yet the threat of ‘disinformation’ has blossomed into an entire ecosystem of collaboration between governments and private think tanks, one that has been used to censor free speech around the globe.

To that end, the World Economic Forum has now declared “Disinformation” the world’s greatest threat in its 2024 “Global Risks Report,” which will obviously require more control over free speech.

Keep reading

Pentagon Aims to Create a Human-Machine Soldier as Part of Dangerous New Artificial Intelligence Race

“In 1962, J.C.R. Licklider created the US Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA). His vision, published two years earlier in his seminal work Man–Computer Symbiosis (Licklider 1960), heralded an ambitious, and ultimately successful, push to develop artificial intelligence (AI) technologies. The Agency, now called DARPA with the D emphasizing its focus on defense applications, has supported AI research, as popularity has ebbed and flowed, over the past 60 years.”[1]

The Pentagon has been at the forefront of researching and developing artificial intelligence technologies for use in warfare and spying since the early 1960s, primarily through the Defense Advanced Research Projects Agency (DARPA).[2] According to the Brookings Institution, 87% of the value of federal contracts awarded in the five years from 2017 to 2022 that had the term “artificial intelligence” in the contract description went to the Department of Defense.[3] This article reviews the Pentagon’s current application of AI technologies.[4]

Keep reading

DATA SOLUTIONS PROVIDER TELUS INTERNATIONAL IS PAYING $50 FOR IMAGES OF KIDS TO TRAIN GOOGLE’S AI

In a recent initiative, Google and TELUS International, a subsidiary of the Canadian tech conglomerate TELUS, have collaborated to collect biometric data from children for age verification purposes. This project, running from November 2023 to January 2024, involved parents filming their children’s faces, capturing details such as eyelid shape, skin tone, and facial geometry. Parents who participated were paid $50 per child.

First reported by 404media, the project asked parents to take 11 short videos of their children wearing things like face masks or hats, along with videos of the children’s faces with no coverings at all. Each video had to be under 40 seconds, and participants were expected to spend 30 to 45 minutes on the task.

According to the summary document, which has now been taken down, a TELUS International moderator would be on a call while the parent took these videos of the child.

According to TELUS International, the purpose of this project was to capture a diverse range of biometric data to ensure that its customers’ services and products are representative of various demographics. Google told 404media that the goal was to enhance authentication methods, thus providing more secure tools for users.

“As part of our commitment to delivering age-appropriate experiences and to comply with laws and regulations around the world, we’re exploring ways to help our users verify their age. Last year, TELUS helped us find volunteers for a project exploring whether this could be done via selfies. From there, Google collected videos and images of faces, clearly explaining how the content would be used, and, as with all research involving minors, we required parental consent for participants under the age of 18. We’ve also put strict privacy protections in place, including limiting the amount of time the data will be retained and providing all participants the option to delete their data at any time,” Google told 404media in a statement.

While this aligns with Google’s broader commitment to developing responsible and ethical facial recognition technology, the project has raised significant concerns regarding children’s privacy and consent.

Parents had to consent to Google and TELUS International collecting their child’s personal and biometric information in order to participate. This included the shape of their eyelids, the color of their skin, and their “facial geometry.” According to the TELUS International summary, Google would then keep the data for five years at most, which, for some participants, would extend into their early adulthood.

Keep reading

Half Of All Skills Will Be Outdated Within Two Years, Study Suggests

Executives believe nearly half of the skills that exist in today’s workforce won’t be relevant just two years from now, thanks to artificial intelligence. And a lot of that includes their own skills. This startling proclamation came out of a recent survey of 800 executives and 800 employees released by edX, an online education platform.

The executives estimate that nearly half (49%) of the skills that exist in their workforce today won’t be relevant in 2025. Nearly the same number, 47%, believe their workforces are unprepared for the future workplace. Identifying skills shortages is not a surprising result to come out of an educational platform provider, but the short timespan is an eye-opener.

Executives in the survey estimate that within the next five years, their organizations will eliminate over half (56%) of entry-level knowledge worker roles because of AI. What’s more, 79% of executives predict that entry-level knowledge worker jobs will no longer exist as AI creates an entirely new suite of roles for employees entering the workforce. On top of that, 56% say their own roles will be “completely” or “partially” replaced by AI.

However, there are industry leaders who are skeptical of such heavy-handed doom-laden predictions. “In my view, the immediate impact of AI on career goals is likely to be minimal,” says Richard Jefts, executive vice president and general manager at HCL Software. “While many companies claim to be leveraging AI, the reality is that most are still in the early stages of adoption.” Expect more of a longer-term impact on careers as AI matures, he says.

Keep reading

AI Watermarking Is Advocated by Biden’s Advisory Committee Member, Raising Concerns for Parody and Memes

The Biden administration doesn’t seem quite certain how to do it – but it would clearly like to see AI watermarking implemented as soon as possible, despite the idea being marred by many misgivings.

And that is despite what some reports admit is a lack of consensus on “what a digital watermark is.” Standards and enforcement regulation are also missing. As has become customary, where the government is constrained or insufficiently competent, it effectively enlists private companies.

On the standards problem, those companies appear to be none other than tech dinosaur Adobe and China’s TikTok.

It’s hardly a conspiracy theory to think the push mostly has to do with the US presidential election later this year, as watermarking of this kind can be “converted” from its original stated purpose – into a speech-suppression tool.

The publicly presented argument in favor is obviously not quite that, although one can read between the lines. Namely – AI watermarking is promoted as a “key component” in combating misinformation, deepfakes included.

And this is where perfectly legal and legitimate genres like parody and memes could suffer from AI watermarking-facilitated censorship.

Spearheading the drive, such as it is, is Biden’s National Artificial Intelligence Advisory Committee. Now one of its members, Carnegie Mellon University’s Ramayya Krishnan, admits there are “enforcement issues” – but is still enthusiastic about the possibility of using technology that “labels how content was made.”

From the Committee’s point of view, a companion AI tool would be a cherry on top.

However, there’s still no actual cake. Different companies are developing watermarking schemes that fall into three categories: visible, invisible (i.e., visible only to algorithms), and those based on cryptographic metadata.
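
For a sense of what the cryptographic-metadata category amounts to in practice, here is a minimal sketch in Python. The field names, the shared-secret HMAC, and the helper functions are illustrative assumptions only, not any vendor’s or standards body’s actual format (real proposals typically use public-key signatures):

```python
import hashlib
import hmac
import json

# Minimal sketch of a cryptographic-metadata watermark: a provenance record
# bound to the file's contents and signed with a secret key.
# Field names and the HMAC scheme are illustrative assumptions only.

SIGNING_KEY = b"issuer-secret-key"  # in practice, an asymmetric key pair

def attach_provenance(image_bytes: bytes, generator: str) -> dict:
    record = {
        "generator": generator,  # e.g. the AI model that produced the image
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    # Both the signature and the content hash must match the file as-is.
    return (hmac.compare_digest(signature, expected)
            and claimed["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

if __name__ == "__main__":
    img = b"\x89PNG...fake image bytes"
    meta = attach_provenance(img, generator="some-image-model")
    print(verify_provenance(img, meta))         # True
    print(verify_provenance(img + b"x", meta))  # False: any edit breaks the check
```

The sketch also shows the fragility of the approach: the record verifies only against the exact bytes it was signed over, and stripping the metadata or re-encoding the file removes the watermark entirely, which is part of why standards and enforcement remain open questions.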

Keep reading

We Absolutely Do Not Need an FDA for AI

I don’t know whether artificial intelligence (AI) will give us a 4-hour workweek, write all of our code and emails, and drive our cars—or whether it will destroy our economy and our grasp on reality, fire our nukes, and then turn us all into gray goo. Possibly all of the above. But I’m supremely confident about one thing: No one else knows either.

November saw the public airing of some very dirty laundry at OpenAI, the artificial intelligence research organization that brought us ChatGPT, when the board abruptly announced the dismissal of CEO Sam Altman. What followed was a nerd game of thrones (assuming robots are nerdier than dragons, a debatable proposition) that consisted of a quick parade of three CEOs and ended with Altman back in charge. The shenanigans highlighted the many axes on which even the best-informed, most plugged-in AI experts disagree. Is AI a big deal, or the biggest deal? Do we owe it to future generations to pump the brakes or to smash the accelerator? Can the general public be trusted with this tech? And—the question that seems to have powered more of the recent upheaval than anything else—who the hell is in charge here?

OpenAI had a somewhat novel corporate structure, in which a nonprofit board tasked with keeping the best interests of humanity in mind sat on top of a for-profit entity with Microsoft as a significant investor. This is what happens when effective altruism and ESG do shrooms together while rolling around in a few billion dollars.

After the events of November, this particular setup doesn’t seem to have been the right approach. Altman and his new board say they’re working on the next iteration of governance alongside the next iteration of their AI chatbot. Meanwhile, OpenAI has numerous competitors—including Google’s Bard, Meta’s Llama, Anthropic’s Claude, and something Elon Musk built in his basement called Grok—several of which differentiate themselves by emphasizing different combinations of safety, profitability, and speed.

Labels for the factions proliferate. The e/acc crowd wants to “build the machine god.” Techno-optimist Marc Andreessen declared in a manifesto that “we believe intelligence is in an upward spiral—first, as more smart people around the world are recruited into the techno-capital machine; second, as people form symbiotic relationships with machines into new cybernetic systems such as companies and networks; third, as Artificial Intelligence ramps up the capabilities of our machines and ourselves.” Meanwhile Snoop Dogg is channeling AI pioneer-turned-doomer Geoffrey Hinton when he said on a recent podcast: “Then I heard the old dude that created AI saying, ‘This is not safe ’cause the AIs got their own mind and these motherfuckers gonna start doing their own shit.’ And I’m like, ‘Is we in a fucking movie right now or what?'” (Hinton told Wired, “Snoop gets it.”) And the safetyists just keep shouting the word guardrails. (Emmett Shear, who was briefly tapped for the OpenAI CEO spot, helpfully tweeted this faction compass for the uninitiated.)

Keep reading

ON THE EVE OF AN A.I. ‘EXTINCTION RISK’? IN 2023, ADVANCEMENTS IN A.I. SIGNALED PROMISE, AND PROMPTED WARNINGS FROM GLOBAL LEADERS

In the field of artificial intelligence, OpenAI, led by CEO Sam Altman, together with the company’s ChatGPT chatbot and its mysterious Q* AI model, has emerged as a leading force within Silicon Valley.

While advancements in AI may hold the potential for positive future developments, OpenAI’s Q* and other AI platforms have also led to concerns among government officials worldwide, who increasingly warn about possible threats to humanity that could arise from such technologies.

2023’S BIGGEST AI UPSET

Among the year’s most significant controversies involving AI, in November Altman was released from his duties as CEO of OpenAI, only to be reinstated 12 days later amidst a drama that left several questions unresolved to this day.

On November 22, just days after Altman’s temporary ousting as the CEO of OpenAI, two people with knowledge of the situation told Reuters that “several staff researchers wrote a letter to the board of directors,” which had reportedly warned about “a powerful artificial intelligence discovery that they said could threaten humanity.”

In the letter addressed to the board, the researchers highlighted the capabilities and potential risks associated with artificial intelligence. Although the sources did not outline specific safety concerns, some of the researchers who authored the letter had reportedly raised concerns about an “AI scientist” team, formed by combining the earlier “Code Gen” and “Math Gen” teams, whose work aimed to upgrade the AI’s reasoning abilities and its ability to carry out scientific tasks.

In a surprising turn of events two days earlier, on November 20, Microsoft announced its decision to onboard Sam Altman and Greg Brockman, OpenAI’s president and one of its co-founders, who had resigned in solidarity with Altman. Microsoft said at the time that the duo was set to run an advanced research lab for the company.

Four days later, Sam Altman was reinstated as the CEO of OpenAI after 700 of the company’s employees threatened to quit and join Microsoft. In a recent interview, Altman disclosed his initial response to the invitation to return following his dismissal: it “took me a few minutes to snap out of it and get over the ego and emotions to then be like, ‘Yeah, of course I want to do that’,” he told The Verge.

“Obviously, I really loved the company and had poured my life force into this for the last four and a half years full-time, but really longer than that with most of my time. And we’re making such great progress on the mission that I care so much about, the mission of safe and beneficial AGI,” Altman said.

But the AI soap opera doesn’t stop there. On November 30, Altman announced that Microsoft would join OpenAI’s board. The tech giant, holding a 49 percent ownership stake in the company after a $13 billion investment, would assume a non-voting observer position on the board. Amidst all this turmoil, questions remained about what, precisely, the new Q* model is, and why it had so many OpenAI researchers concerned.

Keep reading

Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media

Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.

Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether they contain “misinformation.”

Judging by the explanation Google attached to the filing, it at first looks like the company blames its own existence for the proliferation of “misinformation” – the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”

Machine learning itself depends on algorithms being fed a large amount of data, and there are two broad types of training – “supervised,” where the algorithm learns from examples that humans have already labeled, and “unsupervised,” where it is given huge datasets (such as images, or in this case, language) without labels and left to work out on its own what it is “looking” at.

(Reinforcement learning can also be part of the process – in essence, the algorithm is rewarded for becoming increasingly efficient at detecting whatever those who create the system are looking for.)
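
To make the supervised case concrete, here is a minimal sketch in Python (using scikit-learn) of how a text “prediction model” of the kind the filing describes might be trained. The tiny dataset, the labels, and the choice of library are assumptions for illustration only; nothing here is taken from Google’s patent:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Supervised learning: the model is shown posts that humans have already
# labeled, then asked to predict the label for unseen posts.
# The posts and labels below are purely illustrative.
posts = [
    "Breaking: miracle cure suppressed by doctors, share before deleted!",
    "City council meeting rescheduled to Thursday at 7pm.",
    "Secret network of bots is amplifying this claim, insiders say.",
    "New bus route opens downtown next Monday.",
]
labels = ["flagged", "ordinary", "flagged", "ordinary"]

# Turn text into features (TF-IDF) and fit a simple classifier on the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The trained "prediction model" now labels new posts it has never seen.
print(model.predict(["Share this before they take it down!"]))
```

The point relevant to the article is visible in the labels: the model learns to flag whatever its creators chose to label as flagged, and nothing else.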

The ultimate goal here would highly likely be for Google to make its “misinformation detection,” i.e., censorship, more efficient while targeting a specific type of data.

Keep reading

MACHINE LEARNING BREAKTHROUGH CREATES FIRST EVER AUTOMATED AI SCIENTIST

Carnegie Mellon University researchers have pioneered an artificially intelligent system, Coscientist, that can autonomously develop scientific research and experimentation. Published in the journal Nature, this non-organic intelligent system, developed by Assistant Professor Gabe Gomes and doctoral students Daniil Boiko and Robert MacKnight, is the first to design, plan, and execute a chemistry experiment autonomously. 

Utilizing large language models (LLMs) like OpenAI’s GPT-4 and Anthropic’s Claude, Coscientist demonstrates an innovative approach to conducting research through a human-machine partnership​​​​.

Coscientist’s design enables it to perform various tasks, from planning chemical syntheses using public data to controlling liquid handling instruments and solving optimization problems by analyzing previously collected data. Its architecture consists of multiple modules, including web and documentation search, code execution, and experiment automation, coordinated by a central module called ‘Planner,’ a GPT-4 chat completion instance. This structure allows Coscientist to operate semi-autonomously, integrating multiple data sources and hardware modules for complex scientific tasks​​.
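
Based on that description, the control flow is essentially an LLM-driven dispatch loop: the Planner reads the history so far, picks a module, and feeds the module’s output back into its context. Below is a minimal sketch of that loop in Python; the scripted stand-in for the Planner and the stubbed modules are hypothetical, not code from the Nature paper:

```python
# Hypothetical sketch of a Planner-style dispatch loop as described above.
# The scripted "Planner" and the module stubs stand in for GPT-4 and real
# lab hardware; none of this is code from the Coscientist paper.

def web_search(q):      return f"found a literature route for: {q}"
def read_docs(name):    return f"API summary for instrument: {name}"
def run_code(src):      return "volumes computed"
def run_experiment(p):  return f"executed protocol: {p}"

MODULES = {
    "web_search": web_search,      # public-data synthesis planning
    "docs": read_docs,             # documentation search
    "code": run_code,              # code execution
    "experiment": run_experiment,  # hardware / experiment automation
}

# Scripted stand-in for the GPT-4 Planner: in the real system this would be
# a chat-completion call that reads the history and chooses the next step.
SCRIPT = [
    {"module": "web_search", "input": "aspirin synthesis"},
    {"module": "docs",       "input": "liquid handler"},
    {"module": "experiment", "input": "esterification run 1"},
    {"done": True, "answer": "synthesis attempted; results logged"},
]

def planner(goal, max_steps=10):
    history = [("user", goal)]
    for step in range(max_steps):
        decision = SCRIPT[step]            # real system: an LLM call on history
        if decision.get("done"):
            return decision["answer"]
        result = MODULES[decision["module"]](decision["input"])
        history.append(("tool", result))   # feed the result back to the Planner
    return None

print(planner("plan and run an aspirin synthesis"))
```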

“We anticipate that intelligent agent systems for autonomous scientific experimentation will bring tremendous discoveries, unforeseen therapies, and new materials,” the research team wrote in the paper. “While we cannot predict what those discoveries will be, we hope to see a new way of conducting research given by the synergetic partnership between humans and machines.”

The system’s capabilities were tested across different tasks, demonstrating its ability to precisely plan and execute experiments. For instance, Coscientist outperformed other models like GPT-3.5 and Falcon 40B in synthesizing compounds, particularly complex ones like ibuprofen and nitroaniline. This highlighted the importance of using advanced LLMs for accurate and efficient experiment planning​​.

A key aspect of Coscientist is its ability to understand and utilize technical documentation, which has always been a challenge in integrating LLMs with laboratory automation. By interpreting technical documentation, Coscientist enhances its performance in automating experiments. This capability was extended to a more diverse robotic ecosystem, such as the Emerald Cloud Lab (ECL), demonstrating Coscientist’s adaptability and potential for broad scientific application​​.

Keep reading

SCARY AI CAN LOOK AT PHOTOS AND FIGURE OUT EXACTLY WHERE THEY WERE TAKEN

A trio of Stanford graduate students have made a powerful AI that can guess the location of a wide variety of photos with remarkable accuracy.

Known as Predicting Image Geolocations (PIGEON), the AI is trained on Google Street View and can effortlessly pinpoint where photos were taken, even outwitting some of the best human “geoguessers.”

The developers claim their AI can correctly guess the country where a photo was taken 95 percent of the time, and usually within a startling 25 miles of the real location.
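
Published work on photo geolocation typically frames the task as classification over geographic cells, with accuracy then reported as the distance between the predicted cell and the photo’s true location. The sketch below illustrates that framing in Python; the cell centers, the fake image encoder, and its scores are made-up stand-ins, not details from the PIGEON paper:

```python
import math

# Sketch of geolocation-as-classification: the globe is split into cells,
# an image encoder scores each cell for a photo, and the top cell is the
# guess. encode_image() and the cell centers below are made-up stand-ins.

CELL_CENTERS = {                 # (lat, lon) of each cell, illustrative only
    "paris":   (48.86, 2.35),
    "nairobi": (-1.29, 36.82),
    "tokyo":   (35.68, 139.69),
}

def encode_image(photo):
    # Stand-in for a real vision model returning per-cell scores.
    return {"paris": 0.1, "nairobi": 0.7, "tokyo": 0.2}

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def guess_location(photo):
    scores = encode_image(photo)
    best_cell = max(scores, key=scores.get)   # most likely cell
    return best_cell, CELL_CENTERS[best_cell]

cell, coords = guess_location("street_photo.jpg")
print(cell, coords)
# Error is reported as the distance between the predicted cell center and
# the photo's true coordinates (here a nearby made-up point):
print(round(haversine_km(coords, (-1.30, 36.80)), 1), "km off")
```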

They also note some of its potentially game-changing applications, such as assisting in biological surveys or quickly identifying roads with downed power lines.

For all its very useful potential, though, it sounds like a privacy nightmare waiting to happen, with some experts fearing the abuse of such AI tools in the hands of the wrong people.

“From a privacy point of view, your location can be a very sensitive set of information,” Jay Stanley at the American Civil Liberties Union told NPR.

Keep reading