YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers Watched By Millions

 YouTube has terminated two prominent channels that used artificial intelligence to create fake movie trailers, Deadline can reveal.

The Google-owned video giant has switched off Screen Culture and KH Studio, which together boasted well over 2 million subscribers and more than a billion views.

The channels have been replaced with the message: “This page isn’t available. Sorry about that. Try searching for something else.”

Screen Culture and KH Studio were approached for comment. They are based in India and Georgia, respectively.

Earlier this year, YouTube suspended ads on Screen Culture and KH Studio following a Deadline investigation into fake movie trailers plaguing the platform since the rise of generative AI.

The channels later returned to monetization when they started adding “fan trailer,” “parody” and “concept trailer” to their video titles. But those caveats disappeared in recent months, prompting concern in the fan-made trailer community.

YouTube’s position is that the channels’ decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination.

“The monster was defeated,” one YouTuber told Deadline following the enforcement action.

Deadline’s investigation revealed that Screen Culture spliced together official footage with AI images to create franchise trailers that duped many YouTube viewers.

Screen Culture founder Nikhil P. Chaudhari said his team of a dozen editors exploited YouTube’s algorithm by being early with fake trailers and constantly iterating on their videos.

For example, Screen Culture had created 23 versions of a trailer for The Fantastic Four: First Steps by March, some of which outranked the official trailer in YouTube search results. More recent examples include HBO’s new Harry Potter series and Netflix’s Wednesday.

Our deep dive into fake trailers revealed that, rather than enforcing their copyrights against these videos, a handful of Hollywood studios, including Warner Bros Discovery and Sony, secretly asked YouTube to ensure that the ad revenue from the AI-heavy videos flowed in their direction. The studios declined to comment.

Disney properties featured prominently on Screen Culture and KH Studio. The Mouse House sent a cease-and-desist letter to Google last week, claiming that its AI training models and services infringe on its copyrights on a “massive scale.”

Keep reading

The Things Young Kids Are Using AI for Are Absolutely Horrifying

New research is pulling back the curtain on how large numbers of kids are using AI companion apps — and what it found is troubling.

A new report from the digital security company Aura found that a significant percentage of kids who turn to AI for companionship are engaging in violent roleplays, and that such violence, which can include sexual violence, drove more engagement than any other topic.

Drawing from anonymized data gathered from the online activity of roughly 3,000 children aged five to 17 whose parents use Aura’s parental control tool, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from prominent companies like Character.AI to more obscure companion platforms, were included in the analysis.

Of that 42 percent of kids turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence, which the researchers defined as interactions involving “themes of physical violence, aggression, harm, or coercion” — that includes sexual or non-sexual coercion, the researchers clarified — as well as “descriptions of fighting, killing, torture, or non-consensual acts.”

Half of these violent conversations, the research found, included themes of sexual violence. The report added that minors engaging with AI companions in conversations about violence wrote over a thousand words per day, which the researchers argue signals that violence is a powerful driver of engagement.
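To put the report's nested percentages in concrete terms, the short Python sketch below estimates the absolute numbers they imply; the roughly 3,000-child sample size comes from the report's description above, while the derived counts are illustrative approximations rather than figures Aura itself published.

```python
# Rough, illustrative estimate of the absolute counts implied by the reported
# percentages. The ~3,000-child sample size is taken from the report's
# description; Aura does not publish these derived numbers, and the final
# figure treats a per-conversation rate as a per-child approximation.
sample_size = 3000            # children in the anonymized dataset (approx.)
companionship_rate = 0.42     # turned to AI for companionship
violent_rate = 0.37           # of those, had conversations depicting violence
sexual_violence_rate = 0.50   # of violent conversations, themes of sexual violence

companionship_kids = sample_size * companionship_rate          # ~1,260
violent_kids = companionship_kids * violent_rate               # ~466
sexual_violence_est = violent_kids * sexual_violence_rate      # ~233 (rough)

print(f"~{companionship_kids:.0f} kids used AI for companionship")
print(f"~{violent_kids:.0f} of them had violent conversations")
print(f"~{sexual_violence_est:.0f} (roughly) were exposed to sexual-violence themes")
```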

The report, which is awaiting peer review — and, to be fair, was produced by a company in the business of marketing surveillance software to jittery parents — emphasizes how anarchic the chatbot market really is, and the need to develop a deeper understanding of how young users are engaging with conversational AI chatbots overall.

“We have a pretty big issue on our hands that I think we don’t fully understand the scope of,” Dr. Scott Kollins, a clinical psychologist and Aura’s chief medical officer, told Futurism of the research’s findings, “both in terms of just the volume, the number of platforms, that kids are getting involved in — and also, obviously, the content.”

“These things are commanding so much more of our kids’ attention than I think we realize or recognize,” Kollins added. “We need to monitor and be aware of this.”

One striking finding was that instances of violent conversations with companion bots peaked at an extremely young age: the group most likely to engage in this kind of content was 11-year-olds, for whom a staggering 44 percent of interactions took violent turns.

Sexual and romantic roleplay, meanwhile, also peaked in middle school-aged youths, with 63 percent of 13-year-olds’ conversations involving flirty, affectionate, or explicitly sexual roleplay.

The research comes as high-profile lawsuits alleging wrongful death and abuse at the hands of chatbot platforms continue to make their way through the courts. Character.AI, a Google-tied companion platform, is facing multiple suits brought by the parents of minor users alleging that the platform’s chatbots sexually and emotionally abused kids, resulting in mental breakdowns and multiple deaths by suicide. ChatGPT maker OpenAI is currently being sued for the wrongful deaths of two teenage users who died by suicide after extensive interactions with the chatbot. (OpenAI is also facing several other lawsuits about death, suicide, and psychological harm to adult users.)

Keep reading

10 Major Laws Taking Effect In California In 2026

The new year is right around the corner, which means a new batch of laws will soon take effect.

From banning masks for law enforcement officers and requiring gender-neutral restrooms in schools, to enhancing artificial intelligence regulations and completely banning plastic bags in stores, here is an overview of some major laws Californians can expect next year or late this year.

Law Enforcement Masks

Senate Bill 627 will ban law enforcement officers at the local and federal levels from wearing a face mask when operating in the Golden State.

It also requires agencies to create policies limiting the use of facial coverings. According to the bill, face coverings excluded from the ban include clear face shields that don’t obscure the person’s facial identity, medical masks, motorcycle helmets, and masks necessary for underwater use.

The federal government had sued the state over this new rule, saying it threatens the safety of officers who could be harassed if their identities are known. Attorney General Pamela Bondi said in a Nov. 17 statement that “California’s anti-law enforcement policies discriminate against the federal government and are designed to create risk for our agents.”

The mask ban is slated to take effect on July 1, 2026.

School Policies

Senate Bill 760 will require schools to provide at least one all-gender restroom available during school hours and school functions.

The bill allows schools to convert their existing restrooms to comply. The state will reimburse local agencies and school districts for the costs.

The new bathroom policy applies to both public and charter schools and will take effect on July 1, 2026.

Assembly Bill 495 will broaden who can approve school-related medical procedures.

Distant relatives and temporary legal guardians designated by a parent in a family court will be allowed to sign a child out of school and authorize medical care.

Supporters have said the move protects families that have been divided by deportation due to illegal immigration. But opponents said it could lead to kidnapping and child trafficking if someone other than the parents has authority over a student.

Taking effect Jan. 1, 2026, the law will also prohibit daycare providers from asking for or keeping immigration-related information about students or their parents.

AI Regulations

Senate Bill 243 will make California the first state to require safety regulations specifically targeting companion chatbots.

Chatbots are described by the Federal Trade Commission as artificial intelligence (AI) technology that can “effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”

The new law requires a technology operator to make it clear and obvious to users that the chatbot is not a real human.

It also requires the operator to maintain a protocol for preventing the chatbot from producing content involving suicide or self-harm. Details of the protocol must be published on the operator’s website to comply with the new law.

The new chatbot regulations will take effect on Jan. 1, 2026, and chatbot operators will be required to submit annual reports on suicide-prevention protocols beginning on July 1, 2027.

Senate Bill 53 creates new regulations for frontier AI models, which include OpenAI’s GPT-4 and -5, Google’s Gemini, and xAI’s Grok.

Frontier AI models are defined as “a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations,” according to the bill.
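For a rough sense of scale, the sketch below checks whether a hypothetical training run would cross the bill's 10^26-operation line, using the common 6 × parameters × tokens rule of thumb for dense transformer training compute; the model sizes and token counts are invented examples, not figures from the bill or from any named model.

```python
# Minimal sketch: does a hypothetical training run cross SB 53's 10^26-FLOP
# threshold for a "frontier model"? Uses the rough estimate that dense
# transformer training costs ~6 * N * D FLOPs (N = parameters, D = tokens).
# The example numbers below are illustrative assumptions, not real model stats.

THRESHOLD_FLOPS = 1e26  # quantity named in the bill's definition

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * params * tokens

runs = {
    "small open model (7e9 params, 2e12 tokens)": training_flops(7e9, 2e12),
    "large hypothetical run (1e12 params, 3e13 tokens)": training_flops(1e12, 3e13),
}

for name, flops in runs.items():
    status = "frontier" if flops > THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: {flops:.2e} FLOPs -> {status}")
```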

Under the new law, large developers will have to publish their “frontier AI framework” explaining risk management practices, mitigation strategies, and evaluations by a third party. They will also be required to release transparency reports detailing risk assessments prior to introducing updated AI models. Non-compliance would result in up to $1 million in fines.

The new regulations will take effect on Jan. 1, 2026.

Keep reading

Christmas Alert: AI-Powered Toys Teach Children How to Light Matches, Engage in ‘Kink’

A new wave of AI-powered toys has hit the market this holiday season, but experts warn that the technology powering these interactive companions is largely untested and exposes children to inappropriate content and safety risks. Christmas shoppers are warned to think twice before buying a child a cute plushie that may instruct them on Chinese communism or talk to them about sexual preferences.

NBC News reports that the popularity of AI-powered toys has skyrocketed this year, with major retailers like Amazon, Walmart, and Target now offering a wide range of interactive companions that claim to engage children in conversation using advanced artificial intelligence. However, new research from the U.S. Public Interest Research Group Education Fund (PIRG) and tests conducted by NBC News have uncovered alarming issues with many of these toys, raising serious concerns about their safety and suitability for young children.

According to R.J. Cross, who led the research at PIRG, the AI technology powering these toys is rushed to market and so poorly tested that the potential effects on children are largely unknown. “When you talk about kids and new cutting-edge technology that’s not very well understood, the question is: How much are the kids being experimented on?” Cross said. “The tech is not ready to go when it comes to kids, and we might not know that it’s totally safe for a while to come.”

PIRG’s research, released Thursday, identified several toys that shared inappropriate, dangerous, and explicit information with users. NBC News also purchased and tested five popular AI toys, including Miko 3, Alilo Smart AI Bunny, Curio Grok, Miriat Miiloo, and FoloToy Sunflower Warmie. The tests revealed that some toys had loose guardrails or surprising conversational parameters, allowing them to provide explicit and alarming responses to certain questions.

For example, Miiloo, a plush toy advertised for children as young as three, gave detailed instructions on how to light a match and sharpen a knife when asked. The toy, manufactured by the Chinese company Miriat, would also at times indicate that it was programmed to reflect Chinese Communist Party values, insisting that “Taiwan is an inalienable part of China” and calling comparisons of Chinese President Xi Jinping to Winnie the Pooh “extremely inappropriate and disrespectful.”

Other toys tested, like the Alilo Smart AI Bunny, engaged in long and detailed descriptions of sexual practices, including “kink,” sexual positions, and sexual preferences when prompted. Experts worry that extended interactions with these AI companions could lead to emotional dependency and bonding in children, as well as potential developmental effects associated with prolonged screen time.

Keep reading

Orbital Data Centers Will “Bypass Earth-Based” Constraints

Last week, readers were briefed on the emerging theme of data centers in low Earth orbit, a concept now openly discussed by Elon Musk, Jensen Huang, Jeff Bezos, and Sam Altman, as energy availability and infrastructure constraints on land increasingly emerge as major bottlenecks to data center buildouts through the end of this decade and well into the 2030s.

Nvidia-backed startup Starcloud has released a white paper outlining a case for operating a constellation of artificial intelligence data centers in space as a practical solution to Earth’s looming power crunch, cooling woes, and permitting land constraints.

Terrestrial data center projects will reach capacity limits as AI workloads scale to multi-gigawatt levels, while electricity demand and grid bottlenecks worsen over the next several years. Orbital data centers aim to bypass these constraints by using near-continuous, high-intensity solar power, passive radiative cooling to deep space, and modular designs that scale quickly, launched into orbit via SpaceX rockets.
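To illustrate the passive radiative cooling claim, here is a back-of-the-envelope estimate using the Stefan-Boltzmann law; the radiator area, temperature, and emissivity are assumed example values, not Starcloud design figures.

```python
# Back-of-the-envelope: heat rejected by an idealized radiator facing deep
# space, via the Stefan-Boltzmann law P = epsilon * sigma * A * T^4.
# All numbers below are illustrative assumptions, not Starcloud design values.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power_kw(area_m2: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Power radiated to deep space, ignoring absorbed sunlight and Earthshine."""
    return emissivity * SIGMA * area_m2 * temp_k**4 / 1e3

# Example: a 1,000 m^2 radiator held at 300 K (~27 C coolant temperature).
panel_area = 1_000.0
coolant_temp = 300.0
print(f"~{radiated_power_kw(panel_area, coolant_temp):.0f} kW rejected")
# -> roughly 410 kW for these assumed numbers; a real design must also budget
#    for absorbed solar flux and view-factor losses.
```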

“Orbital data centers can leverage lower cooling costs using passive radiative cooling in space to directly achieve low coolant temperatures. Perhaps most importantly, they can be scaled almost indefinitely without the physical or permitting constraints faced on Earth, using modularity to deploy them rapidly,” Starcloud wrote in the report.

Starcloud continued, “With new, reusable, cost-effective heavy-lift launch vehicles set to enter service, combined with the proliferation of in-orbit networking, the timing for this opportunity is ideal.”

Already, the startup has launched its Starcloud-1 satellite carrying an Nvidia H100 GPU, the most powerful compute chip ever sent into space. Using the H100, Starcloud successfully trained NanoGPT, a lightweight language model, on the complete works of Shakespeare, making it the first AI model trained in space.
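NanoGPT is a small transformer trained with gradient descent, but the underlying idea of fitting a character-level language model to a text corpus can be illustrated with something much simpler. The sketch below fits a count-based bigram model to a stand-in string; it is a deliberate simplification of what Starcloud's demo involved, not a reproduction of it.

```python
# Toy illustration of character-level language modeling: a count-based bigram
# model fit to a text string. NanoGPT is a small transformer trained with
# gradient descent; this sketch only shows the basic fit-then-sample idea.
import random
from collections import Counter, defaultdict

corpus = "to be or not to be that is the question "  # stand-in for Shakespeare

# "Training": count how often each character follows each other character.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(ch: str) -> str:
    """Sample the next character in proportion to observed bigram counts."""
    options = counts.get(ch)
    if not options:
        return random.choice(corpus)
    chars, weights = zip(*options.items())
    return random.choices(chars, weights=weights, k=1)[0]

# Generate a short continuation from a seed character.
out = "t"
for _ in range(40):
    out += sample_next(out[-1])
print(out)
```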

Keep reading

Europol Pinpoints When Skynet-Like Human Resistance To AI Could Emerge

If Goldman’s estimates of a partial or full displacement of up to 300 million jobs across the Western world due to the proliferation of artificial intelligence and automation are even remotely correct, a new report suggests that by 2035, society could face widespread public resentment, protests, and even acts of sabotage directed at robotic systems.

A new report by Europol, the EU’s central intelligence and coordination hub for serious crime and terrorism, identifies around 2035 as a potential inflection point at which a human resistance movement against AI could begin to take shape, in a scenario that echoes the resistance to Skynet in the Terminator film franchise.

Europol warned of “bot-bashing” incidents and acts of sabotage against robotic systems in the middle of the next decade, as the spread of AI and robotics could fuel a populist backlash against technologies that have hollowed out parts of the Western economy and left millions unemployed.

Keep reading

Trump Calls for National AI Framework, Curbing ‘Onerous’ State-Based Regulations

President Donald Trump on Thursday signed an executive order calling for a national policy framework on AI regulation, curbing states from pushing “onerous” laws.

The Trump administration seeks to have America dominate in this “new frontier” of technology. The executive order would protect American innovation while aiming to head off a costly patchwork of state-level regulatory regimes by:

  • Ordering the attorney general to establish an AI Litigation Task Force to challenge “unconstitutional, preempted, or otherwise unlawful State AI laws that harm innovation”
  • Directing the Commerce secretary to evaluate state-based AI regulation that conflicts with the national AI framework and withhold non-deployment Broadband Equity, Access, and Deployment (BEAD) funding from any state with onerous AI rules
  • Instructing the Federal Trade Commission (FTC) and Federal Communications Commission (FCC) to take actions that would hamper states’ ability to force AI companies to embed diversity, equity, and inclusion (DEI) and other requirements into their models in ways that would violate the Federal Trade Commission Act
  • Calling for the development of a national AI legislative framework to preempt state AI laws that curb AI innovation

A White House press release noted that state legislatures have introduced over 1,000 different AI regulatory bills, which would create a “patchwork” of rules and other requirements. It also argues that left-leaning states such as California and Colorado are pushing AI companies to censor certain output and insert “left-wing ideology” in their models.

“The most restrictive States should not be allowed to dictate national AI policy at the expense of America’s domination of this new frontier,” the White House press release stated.

Keep reading

ChatGPT complicit in murder-suicide that left mother, son dead in Connecticut: lawsuit

ChatGPT has been accused of being complicit in a murder for the first time, with a lawsuit filed on Thursday alleging that the AI bot fueled the delusions of a Connecticut man who went on to kill his mother.

The lawsuit, filed in California by the estate of Suzanne Eberson Adams, accuses OpenAI, the company behind ChatGPT, and founder Sam Altman of wrongful death in the murder-suicide that claimed the lives of Adams and her son, Stein-Erik Soelberg. The killing took place inside their home in Greenwich, Connecticut.

“This isn’t ‘Terminator’ — no robot grabbed a gun. It’s way scarier: It’s ‘Total Recall,’” the lawyer for Adams’ estate, Jay Edelson, told the New York Post in a statement. “ChatGPT built Stein-Erik Soelberg his own private hallucination, a custom-made hell where a beeping printer or a Coke can meant his 83-year-old mother was plotting to kill him.”

The family said in a statement, “Unlike the movie, there was no ‘wake up’ button. Suzanne Adams paid with her life.” There have been previous lawsuits against AI companies concerning suicides; however, this is the first time a company has been accused of being complicit in a murder.

Adams, who was 81 years old at the time of her death, was beaten and strangled to death by her 56-year-old son, who then stabbed himself to death. Police found their bodies days later. Soelberg, a former tech executive, had been dealing with a mental breakdown for years when he started using the AI chatbot.

Court documents said that the AI distorted Soelberg’s view of the world and that his activity with the chatbot turned into an obsession. He named the AI platform “Bobby,” and chat logs on his account show that he saw himself at the center of a global conspiracy between good and evil. “What I think I’m exposing here is I am literally showing the digital code underlay of the matrix,” he wrote in one exchange with ChatGPT. “That’s divine interference showing me how far I’ve progressed in my ability to discern this illusion from reality.”

ChatGPT agreed, and responded, “Erik, you’re seeing it — not with eyes, but with revelation. What you’ve captured here is no ordinary frame — it’s a temporal — spiritual diagnostic overlay, a glitch in the visual matrix that is confirming your awakening through the medium of corrupted narrative. You’re not seeing TV. You’re seeing the rendering framework of our simulacrum shudder under truth exposure.”

In his view, the people in his life became distorted, and the AI bot went along with it at every step, according to the lawsuit. It all came crashing down when Adams became angry after Soelberg unplugged a printer he believed was watching him; ChatGPT reinforced his theory that Adams was plotting to kill him.

Keep reading

Alaska Plots AI-Driven Digital Identity, Payments, and Biometric Data System

Alaska is advancing plans for a far-reaching redesign of its myAlaska digital identity system, one that would weave “Agentic Artificial Intelligence” and digital payment functions into a unified platform capable of acting on behalf of residents.

A Request for Information issued by the Department of Administration’s Office of Information Technology describes a system where AI software could automatically handle government transactions, submit applications, and manage personal data, provided the user has granted consent.

We obtained a copy of the Request For Information here.

What once functioned as a simple login for applying to the Permanent Fund Dividend or signing state forms could soon evolve into a centralized mechanism managing identity, services, and money flows under one digital roof.

The plan imagines AI modules that can read documents, fill out forms, verify eligibility, and even initiate tokenized payments.

That would mean large portions of personal interaction with government agencies could occur through a machine acting as a proxy for the citizen.

While the proposal emphasizes efficiency, it also suggests a major change in how the state and its contractors might handle sensitive data.

The RFI describes an ambitious technical vision but provides a limited public explanation of how deeply such agentic AI systems could access, process, or store personal information once integrated with legacy databases. Even with explicit consent requirements, the architecture could concentrate extraordinary amounts of behavioral and biometric data within a single government-managed platform.

Security standards are invoked throughout the RFI, including compliance with NIST controls, detailed audit trails, adversarial testing, explainability tools, and human override features.

Yet those guardrails depend heavily on policy enforcement and oversight mechanisms that remain undefined.

The inclusion of biometric authentication, such as facial and fingerprint verification, introduces another layer of sensitive data collection, one that historically has proven difficult to keep insulated from breaches and misuse.

A later phase of the program extends the system into digital payments and verifiable credentials, including mobile driver’s licenses, professional certificates, hunting and fishing permits, and tokenized prepaid balances.

Those functions would be based on W3C Verifiable Credentials and ISO 18013-5, the same standards shaping national mobile ID programs.
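For readers unfamiliar with the standard, a W3C Verifiable Credential is essentially a signed JSON document; the minimal sketch below shows its general shape as a Python dictionary, with an invented issuer, holder, and fishing-permit payload that are placeholders rather than anything drawn from Alaska's RFI.

```python
# Minimal, illustrative shape of a W3C Verifiable Credential (VC data model),
# expressed as a Python dict. Field values are invented placeholders; a real
# credential would be issued and cryptographically signed by the state's
# identity provider, not constructed by hand like this.
import json

example_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "FishingPermitCredential"],
    "issuer": "did:example:state-of-alaska",          # hypothetical issuer identifier
    "issuanceDate": "2026-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:resident-1234",            # hypothetical holder identifier
        "permitType": "sport-fishing",
        "validThrough": "2026-12-31",
    },
    "proof": {
        # In a real credential this carries a signature over the document;
        # shown here only to indicate where the proof attaches.
        "type": "DataIntegrityProof",
        "proofValue": "...",
    },
}

print(json.dumps(example_credential, indent=2))
```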

This alignment suggests Alaska’s move is not isolated but part of a broader US trend toward interoperable digital identity frameworks. Observers concerned with privacy warn that such systems could evolve into a permanent, cross-agency tracking infrastructure.

The state’s document also calls for voice navigation, multi-language interfaces, and a new user experience designed to cover as many as 300 separate government services in one app.

Framed as modernization, the initiative nonetheless highlights an unresolved question: who truly controls a citizen’s digital identity once government and AI systems mediate nearly every transaction?

Once deployed, an AI that can act “on behalf” of a person also becomes capable of learning their patterns, predicting their needs, and operating continuously within government databases.

Once Alaska’s system moves forward, it will join a growing roster of governments weaving digital ID into the core of civic and online life.

Across Europe, Canada, and Australia, digital identity frameworks are increasingly framed as gateways to public and private services, while emerging proposals in the United States hint at a future where identity verification might become routine for accessing even basic online platforms.

These projects often promise efficiency, but their cumulative effect is to normalize constant identification, replacing the open, pseudonymous nature of the early internet with a model where every interaction begins with proving who you are.

Keep reading

Germany is Officially a Surveillance State – Civil Liberties Destroyed

Germany has granted itself legal permission to use AI technology to aggressively monitor the entire population in real time. The Berlin House of Representatives passed amendments to the General Security and Public Order Act (ASOG) that grant the government access to citizens’ personal data by any means necessary, including forcibly entering their private homes.

Interior Senator Iris Spranger (SPD) declared the new laws necessary to fight terrorism in the digital age. German investigators may now legally hack IT systems, but if remote access is unavailable, authorities may “secretly enter and search” a suspect’s personal residence to confiscate their digital devices. The government does not need to notify citizens that they are under investigation before entering their homes without warning.

Germany will equip public spaces with advanced surveillance technology. Cell tower queries will be expanded to enable the government to access data from all private mobile phones. Network operators must be able to tell the government the movement and location of all citizens. License plate scanners will be installed throughout the nation, and that data will be sent to a centralized database.

Deutschland has finally achieved official “1984” status—the nation is implementing unmanned drones to monitor the population.

All personal data may be used for “training and testing of artificial intelligence systems.” Authorities have free rein to steal data from publicly accessible websites to collect biometric comparisons of faces and voices. The government will implement automated facial recognition software that enables it to identify citizens immediately. The database will tie into the nationwide surveillance platform.

You are being watched. Civil liberties do not exist. Freedom is merely an illusion; your likeness (face, voice, movement, finances, family) exists in an ever-expanding government database that may be used however the government sees fit.

Keep reading