YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers Watched By Millions

YouTube has terminated two prominent channels that used artificial intelligence to create fake movie trailers, Deadline can reveal.

The Google-owned video giant has switched off Screen Culture and KH Studio, which together boasted well over 2 million subscribers and more than a billion views.

The channels have been replaced with the message: “This page isn’t available. Sorry about that. Try searching for something else.”

Screen Culture and KH Studio were approached for comment. They are based in India and Georgia, respectively.

Earlier this year, YouTube suspended ads on Screen Culture and KH Studio following a Deadline investigation into fake movie trailers plaguing the platform since the rise of generative AI.

The channels later returned to monetization when they started adding “fan trailer,” “parody” and “concept trailer” to their video titles. But those caveats disappeared in recent months, prompting concern in the fan-made trailer community.

YouTube’s position is that the channels’ decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination.

“The monster was defeated,” one YouTuber told Deadline following the enforcement action.

Deadline’s investigation revealed that Screen Culture spliced together official footage with AI images to create franchise trailers that duped many YouTube viewers.

Screen Culture founder Nikhil P. Chaudhari said his team of a dozen editors exploited YouTube’s algorithm by being early with fake trailers and constantly iterating with videos.

For example, Screen Culture had created 23 versions of a trailer for The Fantastic Four: First Steps by March, some of which outranked the official trailer in YouTube search results. More recent examples include HBO’s new Harry Potter series and Netflix’s Wednesday.

Our deep dive into fake trailers revealed that instead of protecting copyright on these videos, a handful of Hollywood studios, including Warner Bros Discovery and Sony, secretly asked YouTube to ensure that the ad revenue from the AI-heavy videos flowed in their direction. The studios declined to comment.

Disney properties featured prominently on Screen Culture and KH Studio. The Mouse House sent a cease-and-desist letter to Google last week, claiming that its AI training models and services infringe on its copyrights on a “massive scale.”

Keep reading

The Things Young Kids Are Using AI for Are Absolutely Horrifying

New research is pulling back the curtain on how large numbers of kids are using AI companion apps — and what it found is troubling.

A new report from the digital security company Aura found that a significant percentage of kids who turn to AI for companionship are engaging in violent roleplays — and that violence, which can include sexual violence, drove more engagement than any other topic.

Drawing from anonymized data gathered from the online activity of roughly 3,000 children aged five to 17 whose parents use Aura’s parental control tool, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from prominent companies like Character.AI to more obscure companion platforms, were included in the analysis.

Of that 42 percent of kids turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence, which the researchers defined as interactions involving “themes of physical violence, aggression, harm, or coercion” — that includes sexual or non-sexual coercion, the researchers clarified — as well as “descriptions of fighting, killing, torture, or non-consensual acts.”

Half of these violent conversations, the research found, included themes of sexual violence. The report added that minors engaging with AI companions in conversations about violence wrote over a thousand words per day, a sign, the researchers argue, that violence is a powerful driver of engagement.

The report, which is awaiting peer review (and, to be fair, was produced by a company in the business of marketing surveillance software to jittery parents), emphasizes how anarchic the chatbot market really is and the need to develop a deeper understanding of how young users are engaging with conversational AI chatbots overall.

“We have a pretty big issue on our hands that I think we don’t fully understand the scope of,” Dr. Scott Kollins, a clinical psychologist and Aura’s chief medical officer, told Futurism of the research’s findings, “both in terms of just the volume, the number of platforms, that kids are getting involved in — and also, obviously, the content.”

“These things are commanding so much more of our kids’ attention than I think we realize or recognize,” Kollins added. “We need to monitor and be aware of this.”

One striking finding was that instances of violent conversations with companion bots peaked at an extremely young age: the group most likely to engage in this kind of content were 11-year-olds, for whom a staggering 44 percent of interactions took violent turns.

Sexual and romantic roleplay, meanwhile, also peaked in middle school-aged youths, with 63 percent of 13-year-olds’ conversations revealing flirty, affectionate, or explicitly sexual roleplay.

The research comes as high-profile lawsuits alleging wrongful death and abuse at the hands of chatbot platforms continue to make their way through the courts. Character.AI, a Google-tied companion platform, is facing multiple suits brought by the parents of minor users alleging that the platform’s chatbots sexually and emotionally abused kids, resulting in mental breakdowns and multiple deaths by suicide. ChatGPT maker OpenAI is currently being sued for the wrongful deaths of two teenage users who died by suicide after extensive interactions with the chatbot. (OpenAI is also facing several other lawsuits over death, suicide, and psychological harm to adult users.)

Keep reading

Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s interventionist penchant. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.” The precautionary principle gives immense power to the EU when it comes to regulating in the face of uncertainty — rather than allowing for experimentation with the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation. Because of the precautionary principle and associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation — compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights — but by cherry-picking rights from the EU Charter of Fundamental Rights that often conflict with others. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed with the idea of protecting a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene — but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being deployed to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sexual Abuse regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, lumping their communications together as potential threats rather than treating them as protected speech that enjoys a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the EU directive is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity or turpitude of the problem. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. But that does not excuse a free society from respecting other core values.

But, wait. There’s more…

Keep reading

Christmas Alert: AI-Powered Toys Teach Children How to Light Matches, Engage in ‘Kink’

A new wave of AI-powered toys has hit the market this holiday season, but experts warn that the technology powering these interactive companions is largely untested and exposes children to inappropriate content and safety risks. Christmas shoppers are warned to think twice before buying a child a cute plushie that may instruct them on Chinese communism or talk to them about sexual preferences.

NBC News reports that the popularity of AI-powered toys has skyrocketed this year, with major retailers like Amazon, Walmart, and Target now offering a wide range of interactive companions that claim to engage children in conversation using advanced artificial intelligence. However, new research from the U.S. Public Interest Research Group Education Fund (PIRG) and tests conducted by NBC News have uncovered alarming issues with many of these toys, raising serious concerns about their safety and suitability for young children.

According to R.J. Cross, who led the research at PIRG, the AI technology powering these toys is rushed to market and so poorly tested that the potential effects on children are largely unknown. “When you talk about kids and new cutting-edge technology that’s not very well understood, the question is: How much are the kids being experimented on?” Cross said. “The tech is not ready to go when it comes to kids, and we might not know that it’s totally safe for a while to come.”

PIRG’s research, released Thursday, identified several toys that shared inappropriate, dangerous, and explicit information with users. NBC News also purchased and tested five popular AI toys, including Miko 3, Alilo Smart AI Bunny, Curio Grok, Miriat Miiloo, and FoloToy Sunflower Warmie. The tests revealed that some toys had loose guardrails or surprising conversational parameters, allowing them to provide explicit and alarming responses to certain questions.

For example, Miiloo, a plush toy advertised for children as young as three, gave detailed instructions on how to light a match and sharpen a knife when asked. The toy, manufactured by the Chinese company Miriat, would also at times indicate that it was programmed to reflect Chinese Communist Party values, insisting that “Taiwan is an inalienable part of China” and calling comparisons of Chinese President Xi Jinping to Winnie the Pooh “extremely inappropriate and disrespectful.”

Other toys tested, like the Alilo Smart AI Bunny, engaged in long and detailed descriptions of sexual practices, including “kink,” sexual positions, and sexual preferences when prompted. Experts worry that extended interactions with these AI companions could lead to emotional dependency and bonding in children, as well as potential developmental effects associated with prolonged screen time.

Keep reading

Orbital Data Centers Will “Bypass Earth-Based” Constraints

Last week, readers were briefed on the emerging theme of data centers in low Earth orbit, a concept now openly discussed by Elon Musk, Jensen Huang, Jeff Bezos, and Sam Altman, as energy availability and infrastructure constraints on land increasingly emerge as major bottlenecks to data center buildouts through the end of this decade and well into the 2030s.

Nvidia-backed startup Starcloud has released a white paper outlining a case for operating a constellation of artificial intelligence data centers in space as a practical solution to Earth’s looming power crunch, cooling woes, and permitting land constraints.

Terrestrial data center projects will reach capacity limits as AI workloads scale to multi-gigawatt levels, while electricity demand and grid bottlenecks worsen over the next several years. Orbital data centers aim to bypass these constraints by using near-continuous, high-intensity solar power, passive radiative cooling to deep space, and modular designs that scale quickly, launched into orbit via SpaceX rockets.

“Orbital data centers can leverage lower cooling costs using passive radiative cooling in space to directly achieve low coolant temperatures. Perhaps most importantly, they can be scaled almost indefinitely without the physical or permitting constraints faced on Earth, using modularity to deploy them rapidly,” Starcloud wrote in the report.

Starcloud continued, “With new, reusable, cost-effective heavy-lift launch vehicles set to enter service, combined with the proliferation of in-orbit networking, the timing for this opportunity is ideal.”
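For a sense of scale on the cooling claim, here is a rough back-of-envelope sketch using the Stefan-Boltzmann law. The heat loads, emissivity, and radiator temperature below are illustrative assumptions, not figures taken from Starcloud’s white paper, and the calculation ignores solar and Earth-facing heat input.

```python
# Back-of-envelope estimate of the radiator area needed to reject waste heat
# purely by radiation to deep space (Stefan-Boltzmann law). Illustrative
# assumptions only -- not numbers from the Starcloud white paper.

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9          # assumed radiator surface emissivity
RADIATOR_TEMP_K = 300.0   # assumed radiator operating temperature (~27 C)

def radiator_area_m2(heat_load_w: float,
                     emissivity: float = EMISSIVITY,
                     temp_k: float = RADIATOR_TEMP_K) -> float:
    """Area needed to radiate heat_load_w, treating the ~3 K background of
    deep space as negligible and ignoring sunlight absorbed by the panels."""
    flux_w_per_m2 = emissivity * SIGMA * temp_k ** 4   # ~413 W/m^2 here
    return heat_load_w / flux_w_per_m2

if __name__ == "__main__":
    for load_mw in (40, 1000, 5000):   # hypothetical data center heat loads
        area = radiator_area_m2(load_mw * 1e6)
        print(f"{load_mw:>5} MW -> {area / 1e6:.2f} km^2 of radiator")
```

Even under these generous assumptions, gigawatt-class heat loads imply radiator surfaces measured in square kilometers, which helps explain the emphasis on modular, rapidly deployable designs in the orbital data center pitch.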

Already, the startup has launched its Starcloud-1 satellite carrying an Nvidia H100 GPU, the most powerful compute chip ever sent into space. Using the H100, Starcloud successfully trained NanoGPT, a lightweight language model, on the complete works of Shakespeare, making it the first AI model trained in space.
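To make concrete what character-level training on Shakespeare involves, here is a deliberately tiny, self-contained PyTorch sketch of the same kind of task. It is a toy stand-in rather than the actual nanoGPT code or Starcloud’s in-orbit setup, and the local shakespeare.txt filename is an assumption for illustration.

```python
# Toy character-level language model trained on a local Shakespeare text file.
# A simplified stand-in for illustration only -- not nanoGPT or Starcloud code.
import torch
import torch.nn as nn

text = open("shakespeare.txt", encoding="utf-8").read()   # assumed local file
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

block_size, batch_size, vocab_size = 64, 32, len(chars)

class BigramLM(nn.Module):
    """Predicts the next character from the current one via a lookup table."""
    def __init__(self, vocab):
        super().__init__()
        self.table = nn.Embedding(vocab, vocab)

    def forward(self, idx):
        return self.table(idx)  # (batch, block, vocab) logits

model = BigramLM(vocab_size)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    # Sample random training windows and their next-character targets.
    ix = torch.randint(len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i:i + block_size] for i in ix])
    y = torch.stack([data[i + 1:i + block_size + 1] for i in ix])
    loss = loss_fn(model(x).view(-1, vocab_size), y.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 200 == 0:
        print(step, loss.item())
```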

Keep reading

Trump Calls for National AI Framework, Curbing ‘Onerous’ State-Based Regulations

President Donald Trump on Thursday signed an executive order calling for a national policy framework on AI regulation, curbing states from pushing “onerous” laws.

The Trump administration seeks to have America dominate in this “new frontier” of technology. The executive order would protect American innovation while seeking to prevent a costly regulatory regime from various states by:

  • Ordering the attorney general to establish an AI Litigation Task Force to challenge “unconstitutional, preempted, or otherwise unlawful State AI laws that harm innovation”
  • Directing the Commerce secretary to evaluate state-based AI regulation that conflicts with the national AI framework and withhold non-deployment Broadband Equity, Access, and Deployment (BEAD) funding from any state with onerous state AI rules
  • Instructing the Federal Trade Commission (FTC) and Federal Communications Commission (FCC) to take actions that would hamper states’ ability to force AI companies to embed diversity, equity, and inclusion (DEI) and similar mandates into their models in ways that would violate the Federal Trade Commission Act
  • Calling for the development of a national AI legislative framework to preempt state AI laws that curb AI innovation

A White House press release noted that state legislatures have introduced over 1,000 different AI regulatory bills, which would create a “patchwork” of rules and other requirements. It also argues that left-leaning states such as California and Colorado are pushing AI companies to censor certain output and insert “left-wing ideology” in their models.

“The most restrictive States should not be allowed to dictate national AI policy at the expense of America’s domination of this new frontier,” the White House press release stated.

Keep reading

ROBOT DYSTOPIA: Europol Report Warns That, in a Decade, Enemy-Deployed Humanoids May Battle Law Enforcement, While People Unemployed by Tech Rebel and Demand ‘Humans First’

Robot apocalypse is nearer than we imagine, according to Europol.

If we may adapt two old sayings: ‘just because the liberals are constantly crying wolf, it does not mean that the wolf will never come’.

Take, for example, the European obsession with constantly keeping the population in a panic with dystopian scenarios.

While we are constantly dodging these catastrophic hoaxes, it is also true that every now and then, one of these scenarios may strike differently.

A new report by the European Police (Europol) warns that, in a decade, ‘angry mobs of unemployed citizens will riot in the streets against the hordes of service robots that have stolen their jobs’.

Swarms of enemy drones attacking electricity and water supplies, law enforcement combatting rogue robots… dystopian, but so close to our reality.

The Telegraph reported:

“The 48-page Europol document details how law enforcement will need to tackle robots and unmanned systems (drones, satellites and remote-controlled boats) in a dystopian vision of the future.”

Keep reading

Palantir Quietly Lands In Education Department Through Foreign Funding Portal

Palantir is expanding its reach into the Education Department, where the data analytics and software giant is helping develop the agency’s new portal for universities across the country to report foreign donations.

The quiet move marks the technology company’s latest expansion into federal government work, particularly in data management services.

An Education Department spokesperson confirmed Palantir was involved as a subcontractor for its revamped foreign funding portal, which is set to be rolled out early next month.

The agency announced the portal project this week, but did not name the vendors behind it. The portal will serve as a central place for schools to disclose to the department any foreign-source gifts and contracts worth $250,000 or more, the agency said.

Palantir is a subcontractor to Monkton, a northern Virginia-based computer and network security company, the spokesperson told FedScoop. According to federal spending records, the Education Department awarded a contract to Monkton in September that obligated $9.8 million for the design, development, and deployment of a “Section 117 Information Sharing Environment Capable of Providing Greater Transparency.” Palantir, however, is not publicly listed as a subcontractor on the project.

Section 117 of the Higher Education Act requires schools to disclose foreign gifts and contracts over $250,000.

The contract with Monkton could cost the agency up to $61.8 million, more than six times the cost of the modernization project for the ed.gov website, which was allocated $10 million in 2022.

Speculation over the portal began after the agency’s Office of the Chief Information Officer registered a new federal domain, foreignfundinghighered.gov, which was discovered by a bot tracking new government domains.

When FedScoop visited the link shortly before 10:30 a.m. ET on Thursday, the website showed a blocked network alert, which read, “The network connection you are using is not in your enrollment’s ingress allowlist. Please contact your enrollment administrator or Palantir representative.”

Keep reading

Israel Used Palantir Technology In Its 2024 Lebanon Pager Attack

Palantir software was used by Israel in its 2024 pager attacks in Lebanon, according to a new book about Alex Karp, co-founder of the Palantir tech company. On September 17, thousands of pagers belonging to Hezbollah members, including civilians not involved in any armed activity, were detonated across Lebanon.

Many showed “error” messages and vibrated loudly prior to exploding, luring Hezbollah members or, in some cases, their family members to stand close by at the point of detonation. The next day more communication devices exploded, including at the public funerals of Hezbollah members and civilians who had been killed the previous day.

While many Israeli figures celebrated, praised and even joked about the attacks, United Nations experts called them a “terrifying” violation of international law. In total, 42 people were killed and thousands wounded, many left with life-altering injuries to the eyes, face and hands.

Karp’s new biography reveals that Israel deepened its use of the company’s technology after it launched the war on Gaza in October 2023, deploying it in numerous operations.

“The company’s technology was deployed by the Israelis during military operations in Lebanon in 2024 that decimated Hezbollah’s top leadership,” wrote Michael Steinberger, author of The Philosopher in the Valley: Alex Karp, Palantir, and the Rise of the Surveillance State.

“It was also used in Operation Grim Beeper, in which hundreds of Hezbollah fighters were injured and maimed when their pagers and walkie-talkies exploded (the Israelis had booby trapped the devices).”

He said that Israel’s demand for Palantir’s assistance “was so great that the company dispatched a team of engineers from London to help get Israeli users online”.

The involvement of a range of tech companies in Israel’s attacks on its neighbors in recent years, as well as in attacking and surveilling Palestinians, has sparked anger from rights campaigners and UN officials.

In a report produced by UN Special Rapporteur Francesca Albanese in July, several tech companies were accused of profiting from crimes including illegal occupation, apartheid and genocide in occupied Palestine. The report referenced AI systems that were developed by the Israeli military to process and generate targets during the war on Gaza.

“There are reasonable grounds to believe Palantir has provided automatic predictive policing technology, core defense infrastructure for rapid and scaled-up construction and deployment of military software, and its Artificial Intelligence Platform, which allows real-time battlefield data integration for automated decision making,” the report said.

Keep reading

House passes $900B defense bill with pay hike for troops, Golden Dome tech and more

The US House of Representatives passed the annual defense bill Wednesday, outlining a $900 billion budget that would give troops a 4% pay bump, help counter China and Russia, support new technologies like the Golden Dome missile defense system and promote military readiness, among other provisions.

The House voted 312-112 to adopt the National Defense Authorization Act for fiscal year 2026.

The Senate will have to approve the bill before sending it to President Trump’s desk for a signature, though an earlier version cleared the upper chamber in October.

The Senate is expected to take up the bill next week.

Before the vote, House Speaker Mike Johnson (R-La.) had touted that the more than 3,000-page bill was aimed at “codifying 15 of President Trump’s executive orders, ending woke ideology at the Pentagon, securing the border, revitalizing the defense industrial base, and restoring the warrior ethos.”

Keep reading