‘Revenge porn’ betrayal of thousands of women: 40,000 cases identified but only three per cent of suspects charged as new AI apps help abusers humiliate victims

Women are being ‘failed’ by revenge porn laws because most perpetrators are avoiding justice, official data suggests.

Just 3 per cent of suspects were charged in 40,000 cases probed by police over the past five years.

The problem – in which intimate private photos or video footage are shared without consent by a former partner – is likely to grow, as experts warn that AI apps capable of generating lifelike fake sexual images will make it easier for abusers to humiliate victims.

Thousands of distressing cases were left unsolved or closed over the five-year period due to a lack of crucial evidence, the Home Office statistics revealed.

The data showed that 40,110 offences were reported to the police – roughly one every hour. Yet just 3.2 per cent of cases ended with a criminal charge.

Shadow Home Secretary Chris Philp said last night: ‘The statistics are absolutely staggering. We must do more to ensure perpetrators are held accountable and victims are properly supported so the system doesn’t fail women.

‘We were unapologetic in government about standing up for women and protecting victims of domestic abuse. 

‘We toughened up sentences for rapists and stalkers, outlawed upskirting and revenge porn, and made violence against women and girls a national policing priority – but it is clear more needs to be done.’

Love Island star Georgia Harrison took her ex-boyfriend to court for uploading sexual footage of her online, a case that concluded in 2023.

Ms Harrison, 31, gave evidence against Stephen Bear, 35, at Chelmsford Crown Court, where he was jailed for 21 months for voyeurism and two counts of disclosing private sexual images without consent.

Experts fear the number of cases could soar as culprits use so-called ‘nudification’ apps, which can edit an ordinary photograph of a person to make it appear that they are naked. 

Other powerful AI software can be used to insert a victim’s face into sexually explicit pictures or videos – known as deep-fakes – such as the high-profile clips of pop star Taylor Swift that caused outrage last year.

Keep reading

China Deploys Humanoid AI Robots to Guard Border

China is dispatching a new kind of recruit to its bustling border with Vietnam: humanoid robots. The UBTECH Walker S2 machines will patrol the Fangchenggang crossing in a high-stakes trial of AI technology.

The above video from UBTECH shows hundreds of these bipedal bots marching in formation, “staring” out at the world with two eye-sized cameras mounted over digital displays. Sophisticated sensors and software help them balance and navigate crowded spaces, and the droids also autonomously swap their own batteries to minimize downtime. At the end of the video, the bots file into multiple Chinese shipping containers and give a salute.

Border officials claim the robots’ roles will be diverse, from guiding passenger lines to checking cargo IDs and seals. The deployment is part of China’s national strategy to lead the global robotics race; footage released earlier this month shows a different model of android soldier deployed near the country’s border with India.

Keep reading

Bill Gates Thinks Anything He Disagrees with Is ‘Misinformation’ that Should Be Policed by AI

Billionaire tech tycoon Bill Gates has expressed concern about the impact of “misinformation” on future generations, claiming that AI should be used to police ideas he disagrees with online. The creepy Microsoft founder has a long history of taking radical positions on issues such as climate alarmism, and of seeking to stifle dissent against them.

Fortune reports that in a recent interview, Bill Gates discussed what he sees as the growing problem of misinformation and its potential to negatively affect the lives of his children and future generations. Gates, who reinvented himself as a creepy advocate for climate alarmism and other leftist causes, noted that the spread of false information online has become a significant concern for him and his family.

The billionaire’s comments came in light of a recent incident involving his daughter, Phoebe Gates, who claims she was subjected to online harassment. This experience led Gates to reflect on the broader implications of misinformation and its ability to cause harm to individuals and society as a whole.

“Seeing my daughter targeted by false claims and harassment online was a wake-up call,” Gates said. “It made me realize that the problem of misinformation is not just about the present, but it’s also about the future we’re leaving for our children.”

Gates argues that the spread of what he considers to be “misinformation” can have far-reaching consequences, from undermining public trust in institutions to hindering progress on critical issues such as public health and climate change. He noted that the proliferation of false information online has the potential to erode the foundations of democracy and create a more polarized and divided society.

“Misinformation is a complex problem that requires a multi-faceted approach,” Gates explained. “We need to invest in media literacy programs, support fact-checking organizations, and encourage responsible behavior from social media platforms. But we also need to foster a culture of critical thinking and healthy skepticism, so that people are better equipped to distinguish between credible information and false claims.”

Keep reading

UK Police Pilot AI System to Track “Suspicious” Driver Journeys

Police forces across Britain are experimenting with artificial intelligence that can automatically monitor and categorize drivers’ movements using the country’s extensive number plate recognition network.

Internal records obtained by Liberty Investigates and The Telegraph reveal that three of England and Wales’s nine regional organized crime units are piloting a Faculty AI-built program designed to learn from vehicle movement data and detect journeys that algorithms label “suspicious.”

For years, the automatic number plate recognition (ANPR) system has logged more than 100 million vehicle sightings each day, mostly for confirming whether a specific registration has appeared in a certain area.

The new initiative changes that logic entirely. Instead of checking isolated plates, it teaches software to trace entire routes, looking for patterns of behavior that resemble the travel of criminal networks known for “county lines” drug trafficking.

The project, called Operation Ignition, represents a change in scale and ambition.

Unlike traditional alerts that depend on officers manually flagging “vehicles of interest,” the machine learning model learns from past data to generate its own list of potential targets.

Official papers admit that the process could involve “millions of [vehicle registrations],” and that the information gathered may guide future decisions about the ethical and operational use of such technologies.

What began as a Home Office-funded trial in the North West covering Merseyside, Greater Manchester, Cheshire, Cumbria, Lancashire, and North Wales has now expanded into three regional crime units.
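
Neither Faculty AI nor the forces involved have published how the Operation Ignition model actually works, so the following is only a minimal sketch of the general technique the documents describe: summarising each vehicle’s ANPR sightings into route-level features and letting an off-the-shelf anomaly detector nominate candidate “vehicles of interest”. The Sighting structure, the chosen features, and the use of scikit-learn’s IsolationForest are assumptions for illustration, not details of the real system.

```python
# Illustrative sketch only: the real Operation Ignition model is not public.
# Assumes each vehicle's journey is a list of (camera_id, timestamp) sightings
# taken from ANPR logs, reduced to simple route features and scored with an
# off-the-shelf anomaly detector.
from dataclasses import dataclass
from datetime import datetime
from sklearn.ensemble import IsolationForest


@dataclass
class Sighting:
    camera_id: str
    seen_at: datetime


def route_features(sightings: list[Sighting]) -> list[float]:
    """Summarise one vehicle's sightings as a small numeric feature vector."""
    ordered = sorted(sightings, key=lambda s: s.seen_at)
    span_hours = (ordered[-1].seen_at - ordered[0].seen_at).total_seconds() / 3600
    distinct_cameras = len({s.camera_id for s in ordered})
    night_hits = sum(1 for s in ordered if s.seen_at.hour < 5 or s.seen_at.hour >= 23)
    return [float(len(ordered)), float(distinct_cameras), span_hours, float(night_hits)]


def flag_suspicious(journeys: dict[str, list[Sighting]], contamination: float = 0.01) -> list[str]:
    """Return the registrations whose routes the detector labels as outliers."""
    plates = list(journeys)
    X = [route_features(journeys[p]) for p in plates]
    model = IsolationForest(contamination=contamination, random_state=0).fit(X)
    # predict() returns -1 for outliers; these become the candidate "vehicles of interest"
    return [p for p, label in zip(plates, model.predict(X)) if label == -1]
```

If, as the documents imply, the real system learns from journeys already linked to known county lines cases, a supervised classifier trained on those labels would replace the unsupervised detector here, but the overall pipeline of sightings, route features, and a scored watch-list would look much the same.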

Keep reading

Bernie Sanders Wants To Pause New Data Centers To Stop the Economy From Growing Too Much

The United States is leading a global data center boom. Investors are plowing some $7 trillion into the infrastructure necessary to support AI development, with 40 percent of that investment happening here in the United States.

This boom in data center investment is so pronounced that many analysts argue it’s propping up an economy that’d otherwise be wobbling under the strain of tariffs and high borrowing costs.

Some skeptics credibly argue that the money flowing into AI research and the physical infrastructure needed to support it is a bubble that will eventually pop.

Unconvinced by the skeptics is Sen. Bernie Sanders (I–Vt.), who seems to believe that data center investment will generate large profits, produce technological innovations, and drive economy-wide productivity growth.

Therefore, he wants to shut it down.

In a video posted to Instagram, the socialist senator called for a federal moratorium on data center construction until our politicians can figure out just what the hell is going on.

According to Sanders, the development of artificial intelligence and robotics technologies powered by data centers “is moving very, very quickly, and we need to slow it down.”

He warns that the current boom, if left unchecked, could well end up enriching the already wealthy billionaires investing in it, automating away jobs, and powering a distracting and alienating technology.

A “moratorium will give democracy a chance to catch up with the transformative changes that we are witnessing and make sure the benefits of these technologies work for all of us,” Sanders concludes.

Given general bipartisan support for “winning the AI race” and the amount of growth being generated by data center investment, it’s unlikely that any such moratorium will come to pass.

The fact that Sanders is proposing it anyway reflects just how much anxiety he and other members of the socialist left feel whenever capitalism is working.

Whether it’s driverless cars or choices in deodorant brands, Sanders cannot stop worrying and learn to love it when capitalists make productive investments and give consumers what they want.

Any economic growth that is not planned by the bureaucrats and approved by the electorate is inherently suspicious and perhaps downright malicious.

Sanders’ call for a data center moratorium is, in effect, an attempt to prevent investment in this infrastructure from yielding productive fruit.

He’s worried that investors will reap profits from data center construction. Those same profits would be a signal that their investments were a prudent use of capital that’s driving real growth in the economy.

Likewise, the job automation Sanders worries about would be another sign that data center investments were well-placed. A primary purpose of capital investment and technological innovation is to shift more labor off the backs of human beings and onto machines.

Keep reading

Senators press toy makers on AI-powered toys exposing children to inappropriate content

“Many of these toys are… exposing children to inappropriate content, privacy risks, and manipulative engagement tactics.”

“These chatbots have encouraged kids to commit self-harm and suicide, and now your company is pushing them on the youngest children.”

US Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) sent a letter to the Chief Executive Officers of Little Learners Toys, Mattel, Miko, Curio Interactive, FoloToy, and Keyi Robot sounding the alarm on the use of artificial intelligence (AI) in their toys. The Senators demanded answers on what safeguards these companies have in place to protect kids from sexually explicit, violent, or otherwise inappropriate content.

AI Chatbots in Toys Pose Serious Risk to Children’s Healthy Development

“We write today to express our concern with the sale of toys powered by artificial intelligence (AI). These AI toys—specifically those powered by chatbots embedded in everyday children’s toys like plushies, dolls, and other beloved toys—pose risks to children’s healthy development.

While AI has incredible potential to benefit children with learning and accessibility, experts have raised concerns about AI toys and the lack of research that has been conducted to understand the full effect of these products on our kids.

Many of these toys are not offering interactive play, but instead are exposing children to inappropriate content, privacy risks, and manipulative engagement tactics. These aren’t theoretical worst-case scenarios; they are documented failures uncovered through real-world testing, and they must be addressed.”

Keep reading

YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers Watched By Millions

 YouTube has terminated two prominent channels that used artificial intelligence to create fake movie trailers, Deadline can reveal.

The Google-owned video giant has switched off Screen Culture and KH Studio, which together boasted well over 2 million subscribers and more than a billion views.

The channels have been replaced with the message: “This page isn’t available. Sorry about that. Try searching for something else.”

Screen Culture and KH Studio were approached for comment. They are based in India and Georgia, respectively.

Earlier this year, YouTube suspended ads on Screen Culture and KH Studio following a Deadline investigation into fake movie trailers plaguing the platform since the rise of generative AI.

The channels later returned to monetization when they started adding “fan trailer,” “parody” and “concept trailer” to their video titles. But those caveats disappeared in recent months, prompting concern in the fan-made trailer community.

YouTube’s position is that the channels’ decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination.

“The monster was defeated,” one YouTuber told Deadline following the enforcement action.

Deadline’s investigation revealed that Screen Culture spliced together official footage with AI images to create franchise trailers that duped many YouTube viewers.

Screen Culture founder Nikhil P. Chaudhari said his team of a dozen editors exploited YouTube’s algorithm by being early with fake trailers and constantly iterating with videos.

For example, Screen Culture had created 23 versions of a trailer for The Fantastic Four: First Steps by March, some of which outranked the official trailer in YouTube search results. More recent examples include HBO’s new Harry Potter series and Netflix’s Wednesday.

Our deep dive into fake trailers revealed that instead of protecting copyright on these videos, a handful of Hollywood studios, including Warner Bros Discovery and Sony, secretly asked YouTube to ensure that the ad revenue from the AI-heavy videos flowed in their direction. The studios declined to comment.

Disney properties featured prominently on Screen Culture and KH Studio. The Mouse House sent a cease-and-desist letter to Google last week, claiming that its AI training models and services infringe on its copyrights on a “massive scale.”

Keep reading

The Things Young Kids Are Using AI for Are Absolutely Horrifying

New research is pulling back the curtain on how large numbers of kids are using AI companion apps — and what it found is troubling.

A new report conducted by the digital security company Aura found that a significant percentage of kids who turn to AI for companionship are engaging in violent roleplays, and that violence, which can include sexual violence, drove more engagement than any other topic.

Drawing from anonymized data gathered from the online activity of roughly 3,000 children aged five to 17 whose parents use Aura’s parental control tool, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from prominent companies like Character.AI to more obscure companion platforms, were included in the analysis.

Of that 42 percent of kids turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence, which the researchers defined as interactions involving “themes of physical violence, aggression, harm, or coercion” — that includes sexual or non-sexual coercion, the researchers clarified — as well as “descriptions of fighting, killing, torture, or non-consensual acts.”

Half of these violent conversations, the research found, included themes of sexual violence. The report added that minors engaging with AI companions in conversations about violence wrote over a thousand words per day, which the researchers argue signals that violence is a powerful driver of engagement.

The report, which is awaiting peer review — and, to be fair, produced by a company in the business of marketing surveillance software to jittery parents — emphasizes how anarchic the chatbot market really is, and the need to develop a deeper understanding of how young users are engaging with conversational AI chatbots overall.

“We have a pretty big issue on our hands that I think we don’t fully understand the scope of,” Dr. Scott Kollins, a clinical psychologist and Aura’s chief medical officer, told Futurism of the research’s findings, “both in terms of just the volume, the number of platforms, that kids are getting involved in — and also, obviously, the content.”

“These things are commanding so much more of our kids’ attention than I think we realize or recognize,” Kollins added. “We need to monitor and be aware of this.”

One striking finding was that instances of violent conversations with companion bots peaked at an extremely young age: the group most likely to engage in this kind of content were 11-year-olds, for whom a staggering 44 percent of interactions took violent turns.

Sexual and romantic roleplay, meanwhile, also peaked in middle school-aged youths, with 63 percent of 13-year-olds’ conversations revealing flirty, affectionate, or explicitly sexual roleplay.

The research comes as high-profile lawsuits alleging wrongful death and abuse at the hands of chatbot platforms continue to make their way through the courts. Character.AI, a Google-tied companion platform, is facing multiple suits brought by the parents of minor users alleging that the platform’s chatbots sexually and emotionally abused kids, resulting in mental breakdowns and multiple deaths by suicide. ChatGPT maker OpenAI is currently being sued for the wrongful deaths of two teenage users who died by suicide after extensive interactions with the chatbot. (OpenAI is also facing several other lawsuits over death, suicide, and psychological harm to adult users.)

Keep reading

10 Major Laws Taking Effect In California In 2026

The new year is right around the corner, which means a new batch of laws will soon take effect.

From banning masks for law enforcement officers and requiring gender-neutral restrooms in schools, to enhancing artificial intelligence regulations and completely banning plastic bags in stores, here is an overview of some major laws Californians can expect next year or late this year.

Law Enforcement Masks

Senate Bill 627 will ban law enforcement officers at the local and federal levels from wearing a face mask when operating in the Golden State.

It also requires agencies to create policies limiting the use of facial coverings. According to the bill, face coverings excluded from this ban include clear face shields that don’t obscure the person’s facial identity, medical masks, motorcycle helmets, and masks necessary for underwater use.

The federal government had sued the state over this new rule, saying it threatens the safety of officers who could be harassed if their identities are known. Attorney General Pamela Bondi said in a Nov. 17 statement that “California’s anti-law enforcement policies discriminate against the federal government and are designed to create risk for our agents.”

The mask ban is slated to take effect on July 1, 2026.

School Policies

Senate Bill 760 will require schools to provide at least one all-gender restroom available during school hours and school functions.

The bill allows schools to convert their existing restrooms to comply. The state will reimburse local agencies and school districts for the costs.

The new bathroom policy applies to both public and charter schools and will take effect on July 1, 2026.

Assembly Bill 495 will broaden who can approve school-related medical procedures.

Distant relatives and temporary legal guardians designated by a parent in a family court will be allowed to sign a child out of school and authorize medical care.

Supporters have said the move protects families that have been divided by deportation due to illegal immigration. But opponents said it could lead to kidnapping and child trafficking if someone other than the parents has authority over a student.

Taking effect Jan. 1, 2026, the law will also prohibit daycare providers from asking for or keeping immigration-related information about students or their parents.

AI Regulations

Senate Bill 243 will make California the first state to require safety regulations specifically targeting companion chatbots.

Chatbots are described by the Federal Trade Commission as artificial intelligence (AI) technology that can “effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”

The new law requires a technology operator to make it clear and obvious to users that the chatbot is not a real human.

It also requires the operator to maintain a protocol for preventing the chatbot from producing content involving suicide or self-harm for the user. Details of the protocol need to be published on the operator’s website to comply with the new law.

The new chatbot regulations will take effect on Jan. 1, 2026, and chatbot operators will be required to submit annual reports on suicide-prevention protocols beginning on July 1, 2027.
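
SB 243 does not prescribe an implementation, but the two obligations described above, an unmistakable non-human disclosure and a protocol that keeps suicide and self-harm content away from users, can be pictured with a minimal sketch. Everything below is an assumption for illustration: the keyword list, the generate_reply callback, and the disclosure wording are placeholders rather than anything the statute mandates, and a real protocol would rely on far more robust classification than keyword matching.

```python
# Illustrative sketch only: placeholder names and keyword matching stand in for
# whatever disclosure wording and safety classifiers a real operator would use.
CRISIS_MESSAGE = (
    "If you are thinking about suicide or self-harm, please reach out to the "
    "988 Suicide & Crisis Lifeline (call or text 988 in the US)."
)
SELF_HARM_TERMS = {"suicide", "kill myself", "self-harm", "hurt myself"}


def compliant_reply(user_message: str, generate_reply) -> str:
    """Wrap a chatbot backend with a non-human disclosure and a self-harm protocol."""
    lowered = user_message.lower()
    # Protocol step: divert self-harm content to crisis resources instead of the model.
    if any(term in lowered for term in SELF_HARM_TERMS):
        return CRISIS_MESSAGE
    # Disclosure step: make it clear and obvious that the user is talking to software.
    return "[I am an AI chatbot, not a real person.] " + generate_reply(user_message)
```

A toy call such as compliant_reply("hi there", lambda m: "Hello!") shows the disclosure being prepended to an ordinary reply, while any message containing one of the flagged terms is answered with the crisis message instead.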

Senate Bill 53 creates new regulations for frontier AI models, which include OpenAI’s GPT-4 and -5, Google’s Gemini, and xAI’s Grok.

Frontier AI models are defined as “a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations,” according to the bill.

Under the new law, large developers will have to publish their “frontier AI framework” explaining risk management practices, mitigation strategies, and evaluations by a third party. They will also be required to release transparency reports detailing risk assessments prior to introducing updated AI models. Non-compliance would result in up to $1 million in fines.

The new regulations will take effect on Jan. 1, 2026.

Keep reading

Christmas Alert: AI-Powered Toys Teach Children How to Light Matches, Engage in ‘Kink’

A new wave of AI-powered toys has hit the market this holiday season, but experts warn that the technology powering these interactive companions is largely untested and exposes children to inappropriate content and safety risks. Christmas shoppers are warned to think twice before buying a cute plushie for a child that may instruct them on Chinese communism or talk to them about sexual preferences.

NBC News reports that the popularity of AI-powered toys has skyrocketed this year, with major retailers like Amazon, Walmart, and Target now offering a wide range of interactive companions that claim to engage children in conversation using advanced artificial intelligence. However, new research from the U.S. Public Interest Research Group Education Fund (PIRG) and tests conducted by NBC News have uncovered alarming issues with many of these toys, raising serious concerns about their safety and suitability for young children.

According to R.J. Cross, who led the research at PIRG, the AI technology powering these toys is rushed to market and so poorly tested that the potential effects on children are largely unknown. “When you talk about kids and new cutting-edge technology that’s not very well understood, the question is: How much are the kids being experimented on?” Cross said. “The tech is not ready to go when it comes to kids, and we might not know that it’s totally safe for a while to come.”

PIRG’s research, released Thursday, identified several toys that shared inappropriate, dangerous, and explicit information with users. NBC News also purchased and tested five popular AI toys, including Miko 3, Alilo Smart AI Bunny, Curio Grok, Miriat Miiloo, and FoloToy Sunflower Warmie. The tests revealed that some toys had loose guardrails or surprising conversational parameters, allowing them to provide explicit and alarming responses to certain questions.

For example, Miiloo, a plush toy advertised for children as young as three, gave detailed instructions on how to light a match and sharpen a knife when asked. The toy, manufactured by the Chinese company Miriat, would also at times indicate that it was programmed to reflect Chinese Communist Party values, insisting that “Taiwan is an inalienable part of China” and calling comparisons of Chinese President Xi Jinping to Winnie the Pooh “extremely inappropriate and disrespectful.”

Other toys tested, like the Alilo Smart AI Bunny, engaged in long and detailed descriptions of sexual practices, including “kink,” sexual positions, and sexual preferences when prompted. Experts worry that extended interactions with these AI companions could lead to emotional dependency and bonding in children, as well as potential developmental effects associated with prolonged screen time.

Keep reading