Ads for AI sex workers are flooding Instagram and TikTok

Facebook, Instagram and TikTok have tried to keep a tight lid on sexualized content in recent years, banning nudity outright in almost all cases, kicking off sex workers and even cracking down on some artists and educators who speak frankly about sexual health and safety. 

But a new kind of sexualized content has lately been getting through their moderation systems: ads for scantily clad and dirty-talking chatbots, powered by what their creators say is artificial intelligence. 

Dozens of tech startups have been running explicit advertisements on TikTok, Instagram and Facebook in recent months for apps that promote not-safe-for-work experiences. The ads promise “NSFW pics,” custom pinup girls and chats with “no censoring,” and many of them feature digitally created potential “girlfriends” with large breasts and tight clothing. 

Some ads use memes that include popular children’s TV characters, such as SpongeBob SquarePants, Bart Simpson or Cookie Monster, to promote apps with which people can create “NSFW pics.” Others feature digitally created girls who appear to be teenagers or younger, sometimes in the style of anime. 

Keep reading

Bizarre AI app stirs controversy as it lets you text ‘Jesus and Satan’

Artificial intelligence has produced a wide range of weird and wonderful creations since its sudden surge in popularity, some proving more useful than others as the technology develops.

The AI app Text With Jesus was launched in July by Catloaf Software, which says it is designed for “devoted Christians seeking a deeper connection with the Bible’s most iconic figures.”

Built on ChatGPT’s AI technology, the app reportedly lets users exchange instant messages with chatbots representing biblical characters, with responses drawing on Bible passages.

The most notable figures include Jesus, Mary and Joseph, and, more shockingly, the devil himself: Satan.
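For readers curious how such an app works under the hood, persona chatbots of this kind are typically thin wrappers around a general-purpose chat model: a system prompt establishes the character and instructs the model to ground its answers in scripture. Below is a minimal sketch of that pattern, assuming the OpenAI Python SDK; the model name, prompt wording, and function names are illustrative guesses, not Catloaf Software’s actual implementation.

```python
# Minimal sketch of a persona chatbot in the style described above,
# assuming the OpenAI Python SDK. The persona prompt and model name
# are illustrative; this is not Catloaf Software's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA_TEMPLATE = (
    "You are a chatbot representing {figure}. Reply in that figure's "
    "voice, grounding your answers in relevant Bible passages and "
    "citing book, chapter, and verse where appropriate."
)

def chat_with_figure(figure: str, user_message: str) -> str:
    """Send one user message to the persona and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA_TEMPLATE.format(figure=figure)},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat_with_figure("Jesus", "What does it mean to love my neighbor?"))
```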

Since its public release, the bizarre service has gone viral.

Thousands of people have downloaded the app, which, given its sensitive religious theme, has naturally divided opinion.

Most reviews are positive: the app holds an overall rating of 4.2 out of a possible 5 on Apple’s App Store.

Keep reading

Armed with traffic cones, protesters are immobilizing driverless cars

Two people dressed in dark colors and wearing masks dart into a busy street on a hill in San Francisco. One of them hauls a big orange traffic cone. They sprint toward a driverless car and quickly set the cone on the hood.

The vehicle’s side lights burst on and start flashing orange. And then, it sits there immobile.

“All right, looks good,” one of them says after making sure no one is inside. “Let’s get out of here.” They hop on e-bikes and pedal off.

All it takes to render the technology-packed self-driving car inoperable is a traffic cone. If all goes according to plan, it will stay there, frozen, until someone comes and removes it.

An anonymous activist group called Safe Street Rebel is responsible for this so-called coning incident and dozens of others over the past few months. The group’s goal is to incapacitate the driverless cars roaming San Francisco’s streets as a protest against the city being used as a testing ground for this emerging technology.

Over the past couple of years, driverless cars have become ubiquitous throughout San Francisco. It began with human safety drivers on board who were there to make sure everything ran smoothly. And then, many cars started operating with no humans at all.

They’re mostly run by Cruise, which is owned by GM, and Waymo, which is owned by Google parent company Alphabet. Both companies have poured billions of dollars into developing these autonomous vehicles. Neither Cruise nor Waymo responded to questions about why the cars can be disabled by traffic cones.

Keep reading

AI-generated art cannot be copyrighted, rules a US federal judge

United States District Court Judge Beryl A. Howell ruled on Friday that AI-generated artwork can’t be copyrighted, as noted by The Hollywood Reporter. She was presiding over a lawsuit against the US Copyright Office after it refused a copyright to Stephen Thaler for an AI-generated image made with the Creativity Machine algorithm he’d created.

Thaler had tried multiple times to copyright the image “as a work-for-hire to the owner of the Creativity Machine,” which would have listed the Creativity Machine as the work’s author and Thaler as the artwork’s owner, but he was repeatedly rejected.

After the Office’s final rejection last year, Thaler sued the Office, claiming its denial was “arbitrary, capricious … and not in accordance with the law,” but Judge Howell didn’t see it that way. In her decision, Judge Howell wrote that copyright has never been granted to work that was “absent any guiding human hand,” adding that “human authorship is a bedrock requirement of copyright.”

Keep reading

AI-enabled brain implant helps patient regain feeling and movement

Keith Thomas from New York was involved in a diving accident back in 2020 that injured his spine’s C4 and C5 vertebrae, leading to a total loss of feeling and movement from the chest down. Recently, though, Thomas has been able to move his arm at will and feel his sister hold his hand, thanks to AI brain implant technology developed by Northwell Health’s Feinstein Institute of Bioelectronic Medicine.

The research team first spent months mapping his brain with MRIs to pinpoint the exact parts of his brain responsible for arm movements and the sense of touch in his hands. Then, four months ago, surgeons performed a 15-hour procedure to implant microchips into his brain — Thomas was even awake for some parts so he could tell them what sensations he was feeling in his hand as they probed parts of the organ. 

While the microchips are inside his body, the team also installed external ports on top of his head. Those ports connect to a computer with the artificial intelligence (AI) algorithms that the team developed to interpret his thoughts and turn them into action. The researchers call this approach “thought-driven therapy,” because it all starts with the patient’s intentions. If he thinks of wanting to move his hand, for instance, his brain implant sends signals to the computer, which then sends signals to the electrode patches on his spine and hand muscles in order to stimulate movement. They attached sensors to his fingertips and palms, as well, to stimulate sensation. 
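Stripped to its essentials, the loop described above is: decode intent from brain signals, stimulate muscles to produce movement, and route touch-sensor readings back to the brain as sensation. The skeleton below is a purely conceptual sketch of that flow; every function is a hypothetical placeholder, and the real system’s neural decoding and stimulation are vastly more sophisticated.

```python
# Purely conceptual skeleton of the "thought-driven therapy" loop
# described above. Every function is a hypothetical placeholder; the
# real system's neural decoding and stimulation are far more complex.
import time

def read_neural_signals():
    """Placeholder: raw activity from the implanted microchips."""
    return []

def decode_intent(signals):
    """Placeholder: AI model maps neural activity to an intended
    movement (e.g., 'close right hand'), or None if none detected."""
    return None

def stimulate_muscles(intent):
    """Placeholder: drive electrode patches on the spine and forearm."""

def read_touch_sensors():
    """Placeholder: pressure readings from fingertip and palm sensors."""
    return None

def stimulate_sensation(touch):
    """Placeholder: route touch feedback back to the brain implant."""

def therapy_loop():
    while True:
        intent = decode_intent(read_neural_signals())
        if intent is not None:
            stimulate_muscles(intent)    # thought -> movement
        touch = read_touch_sensors()
        if touch is not None:
            stimulate_sensation(touch)   # touch -> felt sensation
        time.sleep(0.01)                 # illustrative control rate
```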

Thanks to this system, he was able to move his arm at will and feel his sister holding his hand in the lab. While he needed to be attached to the computer for those milestones, the researchers say Thomas has shown signs of recovery even when the system is off. His arm strength has apparently “more than doubled” since the study began, and his forearm and wrist can now feel some new sensations. If all goes well, the team’s thought-driven therapy could help him regain more of his sense of touch and mobility.

While the approach has a ways to go, the team behind it is hopeful that it could change the lives of people living with paralysis. Chad Bouton, the technology’s developer and the principal investigator of the clinical trial, said:

“This is the first time the brain, body and spinal cord have been linked together electronically in a paralyzed human to restore lasting movement and sensation. When the study participant thinks about moving his arm or hand, we ‘supercharge’ his spinal cord and stimulate his brain and muscles to help rebuild connections, provide sensory feedback, and promote recovery. This type of thought-driven therapy is a game-changer. Our goal is to use this technology one day to give people living with paralysis the ability to live fuller, more independent lives.”

Keep reading

AI program can steal your password by listening to the sounds your keyboard makes when you type it

Research posted to Cornell University’s arXiv preprint server showed that scientists built an artificial intelligence system that listened to people typing their passwords and correctly identified the keys with 95% accuracy.

The group trained the system to recognize a password typed on a MacBook Pro’s keys, recording the keystrokes both with a nearby phone and over a Zoom call, according to Daily Fetched.

The model was trained by pressing each of the MacBook Pro’s 36 keys 25 times and recording the sounds. Those recordings were then fed into the AI so it could learn to identify each key.

Over the phone, the program correctly identified the keys with 95% accuracy, while over Zoom the number dropped slightly to 93%. The phone was placed about six and a half inches away from the keyboard, according to the Daily Mail.

“When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95 percent, the highest accuracy seen without the use of a language model,” the study reportedly said.
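In outline, the attack is ordinary supervised audio classification: convert each recorded keypress into a spectral feature vector, train a classifier on the labeled samples (here, 25 per key), then predict keys from fresh recordings. The sketch below is a deliberately simplified stand-in, assuming the librosa and scikit-learn libraries; the published study fed mel-spectrograms to a deep-learning model rather than using this toy MFCC-plus-SVM pipeline.

```python
# Simplified stand-in for the acoustic keystroke attack described above.
# The real study used mel-spectrograms and a deep-learning classifier;
# this toy MFCC + SVM version only illustrates the overall pipeline.
# Assumes each training clip is a short recording of a single keypress,
# labeled with the key that produced it.
import numpy as np
import librosa
from sklearn.svm import SVC

def keystroke_features(wav_path: str) -> np.ndarray:
    """Summarize one keypress recording as a fixed-length vector."""
    audio, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # average over time -> 20-dim vector

def train_classifier(clips: list[str], labels: list[str]) -> SVC:
    """Fit a classifier on labeled keypress clips (e.g., 25 per key)."""
    X = np.stack([keystroke_features(p) for p in clips])
    return SVC(kernel="rbf").fit(X, labels)

def guess_key(clf: SVC, clip: str) -> str:
    """Predict which key produced an unseen keypress recording."""
    return clf.predict([keystroke_features(clip)])[0]
```

Zoom’s audio compression degrades the features slightly, which is consistent with the study’s reported drop from 95% accuracy on phone recordings to 93% over Zoom.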

Keep reading

USAF Conducts First AI Flight With Stealth Drone

The Air Force Research Laboratory (AFRL) completed the first-ever flight of an AFRL-developed stealth drone powered by artificial intelligence software.

On July 25, the machine-learning-trained, artificial intelligence-powered XQ-58A Valkyrie flew a three-hour sortie at Florida’s Eglin Air Force Base.

“The mission proved out a multi-layer safety framework on an AI/ML-flown uncrewed aircraft and demonstrated an AI/ML agent solving a tactically relevant ‘challenge problem’ during airborne operations,” said Col. Tucker Hamilton, chief of AI Test and Operations for the Department of the Air Force.

Hamilton continued, “This sortie officially enables the ability to develop AI/ML agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to other autonomy programs.”

Eglin has become the testing ground for advanced autonomous systems within the USAF. Last November, the service received two Valkyrie stealth drones assigned to the 40th Flight Test Squadron.

In past press releases, AFRL has described the Valkyrie as a “high-speed, long-range, low-cost unmanned platform designed to offer maximum utility at minimum cost.”

Keep reading

Deepfake Fraud Surges More Than 1000%, Insiders Say It’s Just The Beginning

As the line between fact and fiction gets harder to distinguish, online criminals need just two hours to create a realistic, computer-generated “deepfake” product that can ruin someone’s life.

The surge in popularity of hyper-realistic photos, audio, and videos developed with artificial intelligence (AI)—commonly known as deepfakes—has become an internet sensation.

It’s also giving cyber villains an edge in the crime world.

Between 2022 and the first quarter of this year, deepfake use in fraud surged 1,200 percent in the United States alone, according to one industry analysis.

Though it’s not just an American problem.

In the same analysis, deepfakes used for scam purposes exploded in Canada, Germany, and the United Kingdom. In the study, the United States accounted for 4.3 percent of global deepfake fraud cases.

Meanwhile, AI experts and cybercrime investigators say we’re just at the tip of the iceberg. The rabbit hole of deepfake fraud potential just keeps going.

“I believe the No. 1 incentive for cyber criminals to commit cybercrime is law enforcement and their inability to keep up,” Michael Roberts told The Epoch Times.

Mr. Roberts is a professional investigator and the founder of the pioneering company Rexxfield, which helps victims of web-based attacks.

He also started PICDO, a cyber crime disruption organization, and has run counter-hacking education for branches of the U.S. and Australian militaries as well as NATO.

Mr. Roberts said legal systems in the Western world are “hopelessly overwhelmed” by online fraud cases, many of which include deepfake attacks. Moreover, the cases that get investigated without hiring a private firm are cherry-picked.

“And even then, it [the case] doesn’t get resolved,” he said.

The market for deepfake detection was valued at $3.86 billion in 2020 and is expected to grow 42 percent annually through 2026, according to an HSRC report.

Keep reading

AI search of Neanderthal proteins resurrects ‘extinct’ antibiotics

Bioengineers have used artificial intelligence (AI) to bring molecules back from the dead.

To perform this molecular ‘de-extinction’, the researchers applied computational methods to data about proteins from both modern humans (Homo sapiens) and our long-extinct relatives, Neanderthals (Homo neanderthalensis) and Denisovans. This allowed the authors to identify molecules that can kill disease-causing bacteria — and that could inspire new drugs to treat human infections.

“We’re motivated by the notion of bringing back molecules from the past to address problems that we have today,” says Cesar de la Fuente, a co-author of the study and a bioengineer at the University of Pennsylvania in Philadelphia. The study was published on 28 July in Cell Host & Microbe.
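Conceptually, the search is a sliding-window scan over ancient protein sequences: enumerate short peptide fragments and keep those an activity-prediction model scores highly. The Python sketch below illustrates only that scan; the scoring function is a toy stand-in (counting cationic residues) for the trained deep-learning predictor the team actually used, and the length cutoffs are illustrative too.

```python
# Illustrative sketch of sliding-window peptide mining over ancient
# proteomes. The scoring function is a toy stand-in for the trained
# activity-prediction model; the length cutoffs are illustrative.
def fragments(protein: str, min_len: int = 8, max_len: int = 50):
    """Yield every contiguous peptide of min_len..max_len residues."""
    for length in range(min_len, max_len + 1):
        for start in range(len(protein) - length + 1):
            yield protein[start:start + length]

def score_antimicrobial(peptide: str) -> float:
    """Toy heuristic: fraction of cationic residues (Lys/Arg).
    The real predictor is a learned model, not a rule like this."""
    return sum(peptide.count(aa) for aa in "KR") / len(peptide)

def mine_proteome(proteins: list[str], threshold: float = 0.3) -> list[str]:
    """Return candidate peptides scoring above the threshold."""
    return [
        pep
        for protein in proteins
        for pep in fragments(protein)
        if score_antimicrobial(pep) >= threshold
    ]
```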

Keep reading

This Disinformation Is Just for You

It’s now well understood that generative AI will increase the spread of disinformation on the internet. From deepfakes to fake news articles to bots, AI will generate not only more disinformation, but more convincing disinformation. But what people are only starting to understand is how disinformation will become more targeted and better able to engage with people and sway their opinions.

When Russia tried to influence the 2016 US presidential election via the now disbanded Internet Research Agency, the operation was run by humans who often had little cultural fluency or even fluency in the English language and so were not always able to relate to the groups they were targeting. With generative AI tools, those waging disinformation campaigns will be able to finely tune their approach by profiling individuals and groups. These operatives can produce content that seems legitimate and relatable to the people on the other end and even target individuals with personalized disinformation based on data they’ve collected. Generative AI will also make it much easier to produce disinformation and will thus increase the amount of disinformation that’s freely flowing on the internet, experts say.

“Generative AI lowers the financial barrier for creating content that’s tailored to certain audiences,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington. “You can tailor it to audiences and make sure the narrative hits on the values and beliefs of those audiences, as well as the strategic part of the narrative.”

Rather than producing just a handful of articles a day, Starbird adds, “You can actually write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.”

Considering how much content people post to social media and other platforms, it’s very easy to collect data to build a disinformation campaign. Once operatives are able to profile different groups of people throughout a country, they can teach the generative AI system they’re using to create content that manipulates those targets in highly sophisticated ways.

“You’re going to see that capacity to fine-tune. You’re going to see that precision increase. You’re going to see the relevancy increase,” says Renée DiResta, the technical research manager at the Stanford Internet Observatory.

Keep reading