Sexy AI-generated influencers are silently swarming social media with fake names and backstories with one goal: Con desperate men

With the vast amount of filters and photo editing apps at users’ disposal, it can be hard to tell what’s real and fake on social media these days.

But a DailyMail.com probe has found a budding world of scantily-clad AI influencers who are conning desperate men out of money.

Earlier this week, a 19-year-old blonde bombshell known as Milla Sofia made headlines when it emerged she was artificially generated.

Since then, we’ve uncovered dozens of digital influencers on Instagram, TikTok and Twitter who often have fake names and elaborate backstories, jobs and interests. AI-generated photos show them on fake vacations.

These rising stars – who combined have hundreds of thousands of followers – are receiving admiration and cash from real men. 

Andrea is a brown-haired beauty with more than 30,000 followers on Twitter/X, where fans comment on her lewd photos. She includes her PayPal account details in her bio and offers nude images to subscribers on Patreon.

Andrea’s Patreon offers tiered plans that include chatting with her, and for $300 a month, she will ‘basically be your online girlfriend.’

The human creators of these AI influencers remain anonymous faces on the web, churning out content to fund a lifestyle they had likely only dreamed of.

Milla Sofia is an aspiring fashion model with a portfolio of portraits showing her tanning in Bora Bora, taking in the views in Greece and working in a corporate office.

She made headlines this week after Futurism uncovered her existence online.

Some AI-generated personas, like Miquela Sousa, have been revealed to be marketing stunts that turned into gold mines.

Miquela starred in advertising campaigns for major brands such as Prada and Balenciaga, was interviewed by Vogue, and was named one of Time magazine’s 25 most influential people on the internet.

The forever-19-year-old is the brainchild of Trevor McFedries and Sara DeCou, founders of the Los Angeles startup Brud, who unleashed the AI teen on Instagram in 2016.

At first, her creators liked to tease her followers and capitalize on the confusion. ‘Is she human?’ asked one perturbed Instagram user. ‘Why do you look like a doll?’ demanded another.

‘She’s some kind of cute mannequin,’ someone claimed. ‘It’s clearly a robot,’ stated another.

Eventually, fans were told the truth, or part of it at least. ‘I’m not a human being,’ Miquela confessed on her page. She said her ‘hands’ were shaking as she ‘wrote’ the post. 

Keep reading

ChatGPT’s Evil Twin “WormGPT” Is Silently Entering Emails And Raiding Banks

A malicious copy of OpenAI’s ChatGPT has been created by a bad actor and its aim is to take your money.

The evil AI is called WormGPT, and it was created by a hacker for sophisticated email phishing attacks.

Cybersecurity firm SlashNext confirmed the artificially intelligent language bot had been created purely for malicious purposes.

The firm explained in a report:

‘Our team recently gained access to a tool known as “WormGPT” through a prominent online forum that’s often associated with cybercrime.

‘This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.’

The cyber experts experimented with WormGPT to see just how dangerous it could be.

They asked it to create phishing emails and found the results disturbing.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC [business email compromise] attacks.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” the experts wrote.

SlashNext says WormGPT is an example of the threat that generative AI language models pose.

Experts think the tool could be damaging even in the hands of a novice cybercriminal.

With AI like this out there, it’s best to be extra vigilant when it comes to checking your email inbox.

That especially applies to any email that asks for money, banking details, or other personal information.

Keep reading

It Begins: France to Shut Down Internet in “Certain” Neighborhoods to Prevent Use of Social Media to Organize Violence

France’s Minister of the Interior announced on Sunday that the country will restrict internet access in “certain” neighborhoods as violence continues nationwide.

According to the Ministry of the Interior, the restrictions are meant to prevent the use of social media and other platforms to organize violent activities.

Via Midnight Rider and Zoomer Waffen:

France is planning a shutdown of the nation’s internet in an attempt to stop the world from seeing what invaders are doing to the country.

In the span of just a few days, France has devolved into a Middle Eastern nation engulfed in war, with its despot limiting news from reaching the outside world.

Keep reading

Software Engineer Imprisoned for Developing Application to Break China’s Internet Censorship

Two people detained by Shanghai State Security Police in October 2021 for developing software that circumvents the Great Firewall received six- and five-year prison sentences on June 12, 2023.

He Binggang and his fiancée Zhang Yibo, together with several others, were arrested on Oct. 9, 2021, for developing and maintaining software that helps people living in China to access overseas internet platforms, according to the Falun Dafa Infocenter.

The Chinese regime set up the Great Firewall (GFW), also known as the Golden Shield Project, in 1998. Managed by the regime’s Ministry of Public Security, it monitors and censors what can and cannot be seen on the internet inside China.

He and Zhang are adherents of Falun Gong, a spiritual practice that has been persecuted inside China since 1999 and has been the subject of intense political propaganda.

Falun Gong, also known as Falun Dafa, is a spiritual improvement practice based on principles of truthfulness, compassion, and forbearance, with five slow-moving, gentle exercises that have significant physical benefits. The practice has been very popular in China, with an estimated 70 million to 100 million practitioners in the country before the Chinese communist regime began to persecute the belief and its followers in July 1999.

He and Zhang had developed software called oGate that allows people in China to freely access websites and information that are available outside the country but blocked by the GFW and otherwise unavailable inside China.

A group of Chinese non-IT activists, including lawyers and journalists, launched BanGFW on March 8, 2023.

Keep reading

Your Tax Dollars at Work: Military Monitors Social Media for Mean Posts About Generals

The U.S. Army’s Protective Services Battalion (PSB), the Department of Defense’s equivalent of the Secret Service, now monitors social media to see if anyone has posted negative comments about the country’s highest-ranking officers.

Per a report by The Intercept, the PSB’s remit includes protecting officers from “embarrassment,” in addition to more pressing threats like kidnapping and assassination.

An Army procurement document from 2022 obtained by The Intercept reveals that the PSB now monitors social media for “negative sentiment” about the officers under its protection, as well as for “direct, indirect, and veiled” threats.

“This is an ongoing PSIFO/PIB” — Protective Services Field Office/Protective Intelligence Branch — “requirement to provide global protective services for senior Department of Defense (DoD) officials, adequate security in order to mitigate online threats (direct, indirect, and veiled), the identification of fraudulent accounts and positive or negative sentiment relating specifically to our senior high-risk personnel.”

Per the report, the Army intends not just to monitor platforms for “negative sentiment,” but also to pinpoint the location of posters.
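
The report does not describe the Army’s actual tooling, but as a purely illustrative sketch, this is roughly what automated “negative sentiment” screening looks like in practice. It uses the open-source VADER sentiment analyzer as a stand-in; the sample posts, the threshold, and the choice of analyzer are all assumptions for illustration, not details from the procurement document.

```python
# Illustrative only: a toy "negative sentiment" screen using the open-source
# VADER analyzer (pip install vaderSentiment). Nothing here reflects the
# Army's actual system; the posts and threshold are made up for the example.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

posts = [
    "General X did a great job at the hearing today.",
    "General X is an embarrassment and should resign.",
]

for post in posts:
    # polarity_scores returns neg/neu/pos plus a 'compound' score in [-1, 1]
    compound = analyzer.polarity_scores(post)["compound"]
    if compound <= -0.05:  # conventional VADER cutoff for negative text
        print(f"flagged ({compound:+.2f}): {post}")
```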

Keep reading

The Bizarre Reality of Getting Online in North Korea

For 25 million North Koreans, the internet is an impossibility. Only a few thousand privileged members of the hermit kingdom’s society can access the global internet, while even the country’s heavily censored internal intranet is out of reach for the majority of the population. Getting access to free and open information isn’t an option.

New research from South Korea-based human rights organization People for Successful Corean Reunification (Pscore) details the reality for those who—in very limited circumstances—manage to get online in North Korea. The report reveals a days-long approval process to gain internet access, after which monitors sit next to people while they browse and approve their activities every five minutes. Even then, what can be accessed reveals little about the world outside North Korea’s borders.

Documentation from the NGO, presented today at the human rights conference RightsCon, sheds light on the regime with the most limited internet freedoms in the world, far below even the restrictive, surveilled internet access of China and Iran. For millions of people in North Korea, the internet simply doesn’t exist.

Keep reading

Atlantic Council Takes Up the Censorship Sword

In Costa Rica and Latvia today, the Atlantic Council is hosting its 360/OS Summit at RightsCon Costa Rica and NATO’s Riga StratCom. Among other things, the influential think tank will be previewing its “Task Force for a Trustworthy Future Web” report, which they hope will “lay the groundwork for stronger cross-sectoral ideation and action” and “facilitate collaboration now between the expanding community dedicated to understanding and protecting trust and safety.”

In human terms, conference attendees are discussing how best to stay on-brand by presenting the Censorship-Industrial Complex as a human rights initiative, and as #TwitterFiles documents show, they have the juice to pull it off.

EngageMedia (which I co-founded and where I was long-time Executive Director) co-organized RightsCon in Manila in 2015, and I personally oversaw much of the preparation. That looks like a big mistake. I now believe RightsCon represents everything that has gone wrong in the digital rights field. Specifically, it represents the capture of a once-vibrant movement by corporate and government interests, and a broader shift toward anti-liberal and authoritarian solutions to online challenges. I left EngageMedia on good terms, but now have no formal relationship with the organization.

In honor of this week’s RightsCon and 360/OS Summit, we dug into the #TwitterFiles to revisit the integration of the Atlantic Council’s anti-disinformation arm, the Digital Forensic Research Lab (DFRLab), while also highlighting its relationship with weapons manufacturers, Big Oil, Big Tech, and others who fund the NATO-aligned think tank.

The Atlantic Council is unique among “non-governmental” organizations thanks to its lavish support from governments and the energy, finance, and weapons sectors. It’s been a key player in the development of the “anti-disinformation” sector from the beginning. It wasn’t an accident when its DFRLab was chosen in 2018 to help Facebook “monitor for misinformation and foreign interference,” after the platform came under intense congressional scrutiny as a supposed unwitting participant in a Russian influence campaign. The press uniformly described DFRLab as an independent actor that would merely “improve security,” and it was left to media watchdog FAIR to point out that the Council was and is “dead center in what former President Obama’s deputy national security advisor Ben Rhodes called ‘the blob.’”

Keep reading

Childproofing the Internet

For the past several years, lawmakers and bureaucrats around the country have been trying to solve a problem. They wanted to regulate the internet, and in particular, they wanted to censor content and undermine a variety of systems that allow for privacy and anonymity online—the systems, in other words, that allow for online individuals to conduct themselves freely and outside of the purview of politicians.

There was something like a bipartisan agreement on the necessity of these rules and regulations. Lawmakers and regulators test-drove a number of potential arguments for online speech rules, including political bias, political extremism, drug crime, or the fact that some tech companies are just really big. But it turned out to be quite difficult to drum up support for wonky causes like antitrust reform or amending the internet liability law Section 230, and even harder to make the case that the sheer size of companies like Amazon was really the problem.

Their efforts tended to falter because they lacked a consensus justification. Those in power knew what they wanted to do. They just didn’t know why, or how.

But in statehouses and in Congress today, that problem appears to have been solved. Politicians looking to censor online content and more tightly regulate digital life have found their reason: child safety.

Keep reading

Congress To Investigate WHO Plans To Use “Listening Surveillance Systems” To Identify “Misinformation”

If you’ve been following our reporting on the issue, you’ll already know that the new World Health Organization (WHO) pandemic prevention initiative, the Preparedness and Resilience for Emerging Threats (PRET), recommends using “social listening surveillance systems” to identify “misinformation.” But as more people are learning about how unelected bodies are being used to suppress speech and potentially override sovereignty, it’s starting to get more pushback.

According to documents from the UN agency, PRET aims to “guide countries in pandemic planning” and work to “incorporate the latest tools and approaches for shared learning and collective action established during the COVID-19 pandemic.”

The PRET document describes misinformation as a “health threat,” and refers to it as an “infodemic.”

“Infodemic is the overabundance of information – accurate or not – which makes it difficult for individuals to adopt behaviors that will protect their health and the health of their families and communities. The infodemic can directly impact health, hamper the implementation of public health countermeasures and undermine trust and social cohesiveness,” the document states.

However, it goes on to recommend invasive methods of countering the spread of misinformation.

“Establish and invest in resources for social listening surveillance systems and capacities to identify concerns as well as rumors and misinformation,” the WHO wrote in the PRET document.

“To build trust, it’s important to be responsive to needs and concerns, to relay timely information, and to train leaders and HCWs [health care workers] in risk communications principles and encourage their application.”

Keep reading

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new service, available through the Azure AI product platform, provides a range of AI models trained to detect “inappropriate” content across images and text. The models, which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese, assign a severity score to flagged content, indicating to moderators what content requires action.
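
For a sense of how a service like this is consumed, here is a minimal sketch of a text-analysis call in Python. The endpoint path, API version, and response fields follow the REST shape Microsoft has documented publicly; the resource name and key are placeholders, and exact field names may differ across API versions.

```python
# Minimal sketch of a text-moderation call to Azure AI Content Safety.
# RESOURCE_NAME and KEY are placeholders; the endpoint path, API version,
# and response fields follow Microsoft's published REST reference and may
# change between versions.
import requests

RESOURCE_NAME = "my-content-safety-resource"  # placeholder Azure resource
KEY = "<your-azure-key>"                      # placeholder API key

url = (
    f"https://{RESOURCE_NAME}.cognitiveservices.azure.com"
    "/contentsafety/text:analyze?api-version=2023-10-01"
)

resp = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"text": "Example user comment to screen."},
)
resp.raise_for_status()

# Each category (Hate, SelfHarm, Sexual, Violence) comes back with a
# severity score that a moderator or downstream rule can act on.
for item in resp.json().get("categoriesAnalysis", []):
    print(item["category"], "severity:", item["severity"])
```

In a real deployment, the severity scores would feed a policy layer that decides whether to hide content, queue it for human review, or let it through, but the flagging flow is essentially this.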

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and GitHub Copilot, the AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Keep reading