Google Experiments With “Faster and More Adaptable” Censorship of “Harmful” Content Ahead of 2024 US Elections

In the run-up to the 2020 US presidential election, Big Tech engaged in unprecedented levels of election censorship, most notably by censoring the New York Post’s bombshell Hunter Biden laptop story just a few weeks before voters went to the polls.

And with the 2024 US presidential election less than a year away, both Google and its video-sharing platform, YouTube, have confirmed that they plan to censor content they deem to be “harmful” in the run-up to the election.

In its announcement, Google noted that it already censors content that it deems to be “manipulated media” or “hate and harassment” — two broad, subjective terms that have been used by tech giants to justify mass censorship.

However, ahead of 2024, the tech giant has started using large language models (LLMs) to experiment with “building faster and more adaptable” censorship systems that will allow it to “take action even more quickly when new threats emerge.”
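Google has not published technical details of these LLM-based systems, but the general category of tooling — a model that scores arbitrary text against a content policy in a single call — is easy to illustrate. Here is a minimal sketch using OpenAI’s public moderation endpoint purely as a stand-in (the input string is invented; this is not Google’s implementation):

```python
from openai import OpenAI

# Illustrative stand-in only: OpenAI's public moderation endpoint, not
# Google's tooling. A hosted model scores text against policy categories
# (hate, harassment, etc.) in one API call.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

result = client.moderations.create(input="example user comment goes here")
verdict = result.results[0]

print("flagged:", verdict.flagged)  # True if any policy category trips
print(verdict.categories)           # per-category breakdown
```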

Google will also be censoring election-related responses in Bard (its generative AI chatbot) and Search Generative Experience (its generative AI search results).

In addition to these censorship measures, Google will be continuing its long-standing practice of artificially boosting content that it deems to be “authoritative” in Google Search and Google News. While this tactic doesn’t remove content outright, it can leave disfavored narratives suppressed and drowned out by these so-called authoritative sources, which are mostly pre-selected legacy media outlets.

Keep reading

AI gives birth to AI: Scientists say machine intelligence now capable of replicating without humans

Artificial intelligence models can now create smaller AI systems without the help of a human, according to research published Friday by a group of scientists who said the project was the first of its kind.

Essentially, larger AI models, like the kind that power ChatGPT, can create smaller, more specific AI applications for everyday use, as a collaboration between Aizip Inc. and scientists at the Massachusetts Institute of Technology and several University of California campuses demonstrated. Those specialized models could help improve hearing aids, monitor oil pipelines, and track endangered species.
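The report doesn’t specify Aizip’s method, but one long-established technique in this vein is knowledge distillation, in which a large “teacher” model’s soft predictions supervise the training of a much smaller “student.” A minimal PyTorch sketch of that idea (a generic illustration with toy placeholder models, not necessarily what Aizip did):

```python
import torch
import torch.nn.functional as F

# Generic knowledge-distillation sketch: a large "teacher" network's soft
# predictions supervise a much smaller "student". Architectures and data
# are toy placeholders.
teacher = torch.nn.Sequential(
    torch.nn.Linear(16, 128), torch.nn.ReLU(), torch.nn.Linear(128, 4)
).eval()
student = torch.nn.Sequential(
    torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 4)
)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(32, 16)  # stand-in for unlabeled data
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x), dim=1)  # teacher's "knowledge"
    loss = F.kl_div(
        F.log_softmax(student(x), dim=1), soft_targets, reduction="batchmean"
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```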

“Right now, we’re using bigger models to build the smaller models, like a bigger brother helping [its smaller] brother to improve. That’s the first step towards a bigger job of self-evolving AI,” Yan Sun, CEO of the AI tech company Aizip, told Fox News. “This is the first step in the path to show that AI models can build AI models.”

Keep reading

WORLD’S FIRST SUPERCOMPUTER THAT WILL RIVAL THE HUMAN BRAIN TO BE UNLEASHED IN 2024

Researchers in Australia are developing the world’s first supercomputer capable of simulating neural networks at a scale comparable to the human brain, and they say it will be complete by next year.

The remarkable supercomputer, which its creators call DeepSouth, is a neuromorphic system designed to mimic the efficiency of biological brains: its hardware emulates large networks of spiking neurons at an astounding 228 trillion synaptic operations each second.

The human brain is remarkable for its efficiency: it can process the equivalent of a billion-billion mathematical operations (an exaflop) each second while using only 20 watts of power. Researchers have long hoped to replicate the way our brains process information.
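For scale, the two figures quoted above imply an energy efficiency of

\[
\frac{10^{18}\ \text{operations/s}}{20\ \text{W}} \;=\; 5 \times 10^{16}\ \text{operations per joule},
\]

a back-of-envelope number that conventional supercomputers remain orders of magnitude short of.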

The research team developing DeepSouth at Western Sydney University, Australia, expects its 228 trillion synaptic operations per second not only to rival the capabilities of the human brain, but also to pave the way toward future synthetic brains that may exceed the remarkable capabilities ours possess.

Keep reading

Deepfake Society: 74% of Americans Can’t Tell What’s Real or Fake Online Anymore

Americans believe only 37 percent of the content they see on social media is “real,” meaning free of edits, filters, and Photoshop. Between AI and “deepfake” videos, a survey of 2,000 adults split evenly by generation reveals that almost three-quarters (74%) say they can no longer tell what’s real or fake.

Americans are wary of both targeted ads (14%) and influencer content (18%), but a little more than half (52%) are equally likely to question the legitimacy of either. The confusion goes beyond social media and what’s happening online: while 41 percent have more difficulty determining whether an item they want to purchase online is “real” or “a dupe,” another 36 percent find shopping in person just as challenging.

While the average respondent will spend about 15 minutes determining whether an item is “real,” meaning a genuine article rather than a knockoff, millennials take it a step further and will spend upwards of 20 minutes trying to decide.

The survey, conducted by OnePoll on behalf of De Beers Group, also reveals that Americans already own a plethora of both real and fake products.

Keep reading

China’s online censors target short videos, artificial intelligence and ‘pessimism’ in latest crackdown

China’s internet censors are targeting short videos that spread “misleading content” as part of its latest online crackdown.

The Cyberspace Administration of China said on Tuesday that it would target short videos that spread rumours about people’s lives or promote what it deems incorrect values, such as pessimism (included for the first time) and extremism.

The campaign would also target fake videos generated using artificial intelligence, the watchdog said.

The country’s top censorship body has been running an annual online crackdown known as “Qing Lang”, which means clear and bright, since 2020.

It said this year’s crackdown would benefit people’s mental health and create a healthy space for competition that would help the short video industry develop.

The country’s best known short video platform is Douyin – the Chinese sibling of TikTok – but content is shared on a number of other Chinese social media platforms, including major players such as WeChat and Weibo.

The watchdog said one of the targets of the latest campaign would be content producers who make up stories about social minorities to win public sympathy. It would also crack down on people staging incidents, “making up fake plots and spreading panic”.

Keep reading

A Google DeepMind AI Just Discovered 380,000 New Materials. This Robot Is Cooking Them Up.

A robot chemist just teamed up with an AI brain to create a trove of new materials.

Two collaborative studies from Google DeepMind and the University of California, Berkeley, describe a system that predicts the properties of new materials—including those potentially useful in batteries and solar cells—and produces them with a robotic arm.

We take everyday materials for granted: plastic cups for a holiday feast, components in our smartphones, or synthetic fibers in jackets that keep us warm when chilly winds strike.

Scientists have painstakingly discovered roughly 20,000 different types of materials that let us build anything from computer chips to puffy coats and airplane wings. Tens of thousands more potentially useful materials are in the works. Yet we’ve only scratched the surface.

The Berkeley team developed a chef-like robot that mixes and heats ingredients, automatically transforming recipes into materials. As a “taste test,” the system, dubbed the A-Lab, analyzes the chemical properties of each final product to see if it hits the mark.

Meanwhile, DeepMind’s AI dreamed up myriad recipes for the A-Lab chef to cook. It’s a hefty list. Using a popular machine learning strategy, the AI found two million chemical structures and 380,000 new stable materials—many counter to human intuition. The work is an “order-of-magnitude” expansion on the materials that we currently know, the authors wrote.

Using DeepMind’s cookbook, A-Lab ran for 17 days and synthesized 41 out of 58 target chemicals—a win that would’ve taken months, if not years, of traditional experiments.

The collaboration could launch a new era of materials science. “It’s very impressive,” said Dr. Andrew Rosen at Princeton University, who was not involved in the work.

Keep reading

‘INVISIBILITY COAT’ THAT HIDES HUMANS FROM AI SECURITY CAMERAS DEVELOPED BY CHINESE STUDENTS

At first glance, it may look like an ordinary, run-of-the-mill camouflage coat. However, what a group of Chinese graduate students have actually developed is a cost-effective “invisibility coat” capable of concealing the human body from AI-monitored security cameras, both day and night.

At the affordable price of just $70, the high-tech jacket, dubbed the “InvisDefense coat,” was crafted by a team of four graduate students from Wuhan University in China. The real-life sci-fi coat secured the top prize at the inaugural “Huawei Cup,” a cybersecurity innovation contest sponsored by the Chinese tech giant Huawei.

Professor Wang Zheng from the School of Computer Science oversaw the team, comprising doctoral student Wei Hui from the School of Computer Science, along with postgraduates Li Zhubo and Dai Shuyv from the School of Cyber Science and Engineering, and postgraduate Jian Zehua from the Economics and Management School.

The InvisDefense coat uses a camouflage pattern designed by a new algorithm to defeat the machine-vision methods commonly used for AI pedestrian detection. “In layman’s terms, it means cameras can detect you but cannot determine that you are human,” according to a statement released by Wuhan University (WHU).
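The team hasn’t released its algorithm, but the underlying idea, an adversarial pattern optimized to fool a vision model, can be sketched in a few lines. Below is a generic PyTorch illustration that uses an image classifier as a stand-in for a pedestrian detector (the model, patch size, and placement are all invented placeholders, not the InvisDefense method):

```python
import torch
import torchvision

# Generic adversarial-patch sketch (not the InvisDefense algorithm):
# optimize a printable pattern so the model stops assigning the image
# its original label once the patch is pasted in. A classifier stands
# in for a pedestrian detector to keep the example small.
model = torchvision.models.resnet18(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)     # stand-in for a photo
target = model(image).argmax().item()  # the label we want to suppress

patch = torch.rand(1, 3, 50, 50, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    patched = image.clone()
    patched[:, :, 87:137, 87:137] = patch.clamp(0, 1)  # paste patch mid-image
    loss = model(patched)[0, target]   # model's confidence in original label
    opt.zero_grad()
    loss.backward()
    opt.step()                         # gradient descent drives it down
```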

Keep reading

GENERATIVE AI CHATBOTS CAN ALREADY MATCH OUR CAPABILITIES IN THIS KEY AREA OF HUMAN INTELLIGENCE, STUDIES REVEAL

Recent research has revealed that Generative Artificial Intelligence (GAI) chatbots, such as ChatGPT versions 3 and 4, Studio.ai, and YouChat, have already reached creativity levels comparable to humans.

The new findings, based on several recently published empirical studies, challenge the long-standing belief that creativity is an exclusive domain of human intelligence.

For years, the general assumption has been that while AI can excel in logical and structured tasks, creativity remains a uniquely human trait. However, a recent study led by Dr. Jennifer Haase from the Department of Computer Science at Humboldt University, Berlin, and Dr. Paul H. P. Hanel from the University of Essex is turning that notion on its head by demonstrating that the line between human and AI-generated creativity is blurring.

In a series of meticulously designed tests, Dr. Haase and Dr. Hanel compared ideas generated by humans with those produced by various GAI chatbots, including prominent names like alpa.ai, Copy.ai, ChatGPT (versions 3 and 4), Studio.ai, and YouChat. 

The study employed the Alternative Uses Test (AUT), a standard measure frequently used in creativity research. Using the AUT, 100 human participants and five GAI chatbots were asked to generate original uses for five everyday objects: pants, a ball, a tire, a fork, and a toothbrush.

The human and GAI-generated responses were then scored on the originality and fluency of their ideas.
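For context, AUT responses are conventionally scored along exactly these two axes: fluency counts how many distinct ideas a respondent produced, and originality measures how statistically rare each idea is across the sample. A minimal Python sketch of that convention, with invented example data (the study’s exact rubric may differ):

```python
from collections import Counter

def score_aut(responses):
    """Score Alternative Uses Test responses.

    fluency     = number of distinct ideas per respondent
    originality = mean rarity of those ideas across all respondents
    """
    # Count, for each idea, how many respondents produced it.
    pool = Counter(idea for ideas in responses.values() for idea in set(ideas))
    n = len(responses)

    scores = {}
    for who, ideas in responses.items():
        unique = set(ideas)
        fluency = len(unique)
        # Ideas shared by fewer respondents score closer to 1.0.
        originality = sum(1 - pool[i] / n for i in unique) / max(fluency, 1)
        scores[who] = {"fluency": fluency, "originality": round(originality, 2)}
    return scores

# Invented example: uses for a fork from two humans and one chatbot.
print(score_aut({
    "human_1": ["eating", "combing hair"],
    "human_2": ["eating", "garden trellis pin"],
    "chatbot": ["eating", "sculpture armature", "zipper pull"],
}))
```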

What made the research particularly interesting was that the creativity assessments were conducted both by humans (following the Consensual Assessment Technique) and by an AI program, a large language model trained explicitly to score AUT responses. This dual approach was meant to ensure a more comprehensive evaluation of creativity, combining human intuition with AI’s analytical capabilities.

Remarkably, when examining the results, researchers found no significant qualitative difference between the creativity of the GAI chatbots and that of the human participants.

These findings are pivotal, suggesting that AI’s potential in creative domains is much broader and more profound than previously thought.

The study showed that although 9.4% of humans surpassed even the most creative GAI in the tests, GPT-4, the most recent version of ChatGPT at the time, was the highest-performing AI model among those evaluated.

GPT-4’s ability to generate original and creative responses to the prompts underscores the rapid advancements in AI. It also strongly indicates that AI’s role in creative processes is not just supplementary but can be central.

Affirming these findings, another study recently published in the Journal of Creativity found that GPT-4 ranked in the top percentile for originality and fluency on the Torrance Tests of Creative Thinking. 

Keep reading

Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier

In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor.

In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons. Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level:

“Spying and surveillance are different but related things,” Schneier writes. “If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.”

Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive.
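To make that concrete: the kind of automated conversation analysis Schneier describes is already a few lines of code against any commercial LLM API. A hypothetical sketch using OpenAI’s Python client (the filename, model choice, and prompt are invented placeholders):

```python
from openai import OpenAI

# Hypothetical sketch of automated conversation analysis: feed a transcript
# to a hosted LLM and ask for a structured digest. Filename and prompt are
# invented; any comparable chat-completion API would work the same way.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarize this conversation: who spoke, what was "
                    "discussed, and any names, places, or plans mentioned."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```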

“This spying is not limited to conversations on our phones or computers,” Schneier writes. “Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and ‘Hey, Google’ are already always listening; the conversations just aren’t being saved yet.”

Keep reading

Israeli AI “Assassination Factory” Plays Central Role in the Gaza War

Tel Aviv has been relying on an AI program dubbed “the Gospel” to select targets in Gaza at a rapid pace. In past operations, the IDF ran out of targets to strike in the besieged enclave.

A statement on the IDF website says the Israeli military is using the Gospel to “produce targets at a fast pace.” It continues, “Through the rapid and automatic extraction of intelligence,” the Gospel produced targeting recommendations for its researchers “with the goal of a complete match between the recommendation of the machine and the identification carried out by a person.”

Aviv Kochavi, former head of the IDF, said the system was first used in the May 2021 bombing campaign in Gaza. “To put that into perspective, in the past we would produce 50 targets in Gaza per year,” he said. “Now, this machine produces 100 targets in a single day, with 50% of them being attacked.”

The IDF does not disclose what it inputs into the Gospel for the program to produce a list of targets.

On Thursday, the Israeli outlet +972 Magazine reported that Tel Aviv was using AI to pick targets in Gaza. A former Israeli official told +972 that the Gospel was being used as a “mass assassination factory,” selecting the homes of suspected low-level Hamas members for destruction. Sources told the outlet that strikes on homes can kill numerous civilians.

One source was critical of the Gospel. “I remember thinking that it was like if [Palestinian militants] would bomb all the private residences of our families when [Israeli soldiers] go back to sleep at home on the weekend,” they said.

On Friday, the Guardian expanded on the +972 article, reporting that the Gospel plays a central role in Israel’s military operations in Gaza.

A former senior Israeli military source told the Guardian that operatives use a “very accurate” calculation of the rate of civilians evacuating a building before an impending strike. However, other experts disputed that assertion: a lawyer who advises governments on AI and compliance with humanitarian law told the outlet there was “little empirical evidence” to support the claim.

Keep reading