Tokyo Court Ruling Against Cloudflare Sets “Dangerous Precedent” for Internet Infrastructure Liability

Cloudflare has been ordered by a Tokyo District Court to pay 500 million yen, about 3.2 million US dollars, after judges ruled the company liable for aiding copyright infringement.

The case, brought by a coalition of Japan’s largest manga publishers and reported by TorrentFreak, challenges the long-held understanding that network infrastructure providers are not responsible for what passes through their systems.

It also signals a growing international push to make companies like Cloudflare police online content, an approach that could redefine how the open internet operates.

The publishers, Shueisha, Kodansha, Kadokawa, and Shogakukan, argued that Cloudflare’s global network, which caches and accelerates websites, helped pirate manga sites distribute illegal copies of their work. They said Cloudflare’s failure to verify customer identities allowed those sites to hide “under circumstances where strong anonymity was secured,” a factor the court said contributed to its finding of liability.

Cloudflare said it will appeal, calling the ruling a threat to fairness and due process and warning that it could have broad implications for the future of internet infrastructure. The company argues that its conduct complies with global norms and that it has no direct control over the content its clients publish or distribute.

The legal fight between Cloudflare and Japan’s major publishers began in 2018. The publishers asked the Tokyo District Court to intervene, claiming Cloudflare’s technology enabled piracy sites to thrive. They wanted the company to sever ties with the offending domains.

In 2019, a partial settlement was reached. The deal, later disclosed, required Cloudflare to stop replicating content from sites only after Japanese courts officially declared them illegal.

That agreement quieted the conflict for a time, but it did not resolve the larger question of whether a network service should be required to decide which content is lawful.

By early 2022, the same publishers returned to court, alleging that Cloudflare had failed to take “necessary measures” against known infringing sites.

They filed a new claim targeting four specific works and sought around four million dollars in damages. They also asked for an order that would compel Cloudflare to terminate service for illegal sites.

Keep reading

ChatGPT’s Use Of Song Lyrics Violates Copyright, Munich Court Finds

  • Judges found GEMA’s claims valid, ordering OpenAI to cease reproduction and provide damages and disclosure.
  • The court said GPT-4 and GPT-4o “memorized” lyrics, amounting to reproduction under EU copyright rules.
  • The decision, not yet final, could set a major European precedent on AI training data.

Germany’s national music rights organization secured a partial but decisive win against OpenAI after a Munich court ruled that ChatGPT’s underlying models unlawfully reproduced copyrighted German song lyrics.

The ruling orders OpenAI to cease reproduction, disclose relevant training details, and compensate rights holders.

It is not yet final, and OpenAI may appeal.

If upheld, the decision could reshape how AI companies source and license creative material in Europe, as regulators weigh broader obligations for model transparency and training-data provenance.

The case marks the first time a European court has found that a large language model violated copyright by memorizing protected works.

In its decision, the 42nd Civil Chamber of the Munich I Regional Court said that GPT-4 and GPT-4o contained “reproducible” lyrics from nine well-known songs, including Kristina Bach’s “Atemlos” and Rolf Zuckowski’s “Wie schön, dass du geboren bist.”

The court held that such memorization constitutes a “fixation” of the original works in the model’s parameters, satisfying the legal definition of reproduction under Article 2 of the EU InfoSoc Directive and Germany’s Copyright Act.

“At least in individual cases, when prompted accordingly, the model produces an output whose content is at least partially identical to content from the earlier training dataset,” a translated copy of the written judgement provided by the Munich court to Decrypt reads.

The model “generates a sequence of tokens that appears statistically plausible because, for example, it was contained in the training process in a particularly stable or frequently recurring form,” the court wrote, adding that because this “token sequence appeared on a large number of publicly accessible websites” it meant that it was “included in the training dataset more than once.”

In the pleadings, GEMA argued that the model’s output lyrics were almost verbatim when prompted, proving that OpenAI’s systems had retained and reproduced the works.

OpenAI countered that its models do not store training data directly and that any output results from user prompts, not from deliberate copying.

The company also invoked text-and-data-mining exceptions, which allow temporary reproductions for analytical use.

“We disagree with the ruling and are considering next steps,” a spokesperson for OpenAI told Decrypt. “The decision is for a limited set of lyrics and does not impact the millions of people, businesses, and developers in Germany that use our technology every day.” 

Keep reading

Internet Archive’s legal fights are over, but its founder mourns what was lost

Last month, the Internet Archive’s Wayback Machine archived its trillionth webpage, and the nonprofit invited its more than 1,200 library partners and 800,000 daily users to join a celebration of the moment. To honor “three decades of safeguarding the world’s online heritage,” the city of San Francisco declared October 22 to be “Internet Archive Day.” The Archive was also recently designated a federal depository library by Sen. Alex Padilla (D-Calif.), who proclaimed the organization a “perfect fit” to expand “access to federal government publications amid an increasingly digital landscape.”

The Internet Archive might sound like a thriving organization, but it only recently emerged from years of bruising copyright battles that threatened to bankrupt the beloved library project. In the end, the fight led to more than 500,000 books being removed from the Archive’s “Open Library.”

“We survived,” Internet Archive founder Brewster Kahle told Ars. “But it wiped out the Library.”

An Internet Archive spokesperson confirmed to Ars that the archive currently faces no major lawsuits and no active threats to its collections. Kahle thinks “the world became stupider” when the Open Library was gutted—but he’s moving forward with new ideas.

History of the Internet Archive

Kahle has been striving since 1996 to transform the Internet Archive into a digital Library of Alexandria—but “with a better fire protection plan,” joked Kyle Courtney, a copyright lawyer and librarian who leads the nonprofit eBook Study Group, which helps states update laws to protect libraries.

When the Wayback Machine was born in 2001 as a way to take snapshots of the web, Kahle told The New York Times that building free archives was “worth it.” He was also excited that the Wayback Machine had drawn renewed media attention to libraries.

At the time, law professor Lawrence Lessig predicted that the Internet Archive would face copyright battles, but he also believed that the Wayback Machine would change the way the public understood copyright fights.

“We finally have a clear and tangible example of what’s at stake,” Lessig told the Times. He insisted that Kahle was “defining the public domain” online, which would allow Internet users to see “how easy and important” the Wayback Machine “would be in keeping us sane and honest about where we’ve been and where we’re going.”

Kahle suggested that IA’s legal battles weren’t with creators or publishers so much as with large media companies that he thinks aren’t “satisfied with the restriction you get from copyright.”

“They want that and more,” Kahle said, pointing to e-book licenses that expire as proof that libraries increasingly aren’t allowed to own their collections. He also suspects that such companies wanted the Wayback Machine dead—but the Wayback Machine has survived and proved itself to be a unique and useful resource.

The Internet Archive also began archiving—and then lending—e-books. For a decade, the Archive had loaned out individual e-books to one user at a time without triggering any lawsuits. That changed when IA decided to temporarily lift the cap on loans from its Open Library project to create a “National Emergency Library” as libraries across the world shut down during the early days of the COVID-19 pandemic. The project eventually grew to 1.4 million titles.

But lifting the lending restrictions also brought more scrutiny from copyright holders, who eventually sued the Archive. Litigation went on for years. In 2024, IA lost its final appeal in a lawsuit brought by book publishers over the Archive’s Open Library project, which used a novel e-book lending model to bypass publishers’ licensing fees and checkout limitations. Damages could have topped $400 million, but publishers ultimately announced a “confidential agreement on a monetary payment” that did not bankrupt the Archive.

Litigation has continued, though. More recently, the Archive settled another suit over its Great 78 Project after music publishers sought damages of up to $700 million. A settlement in that case, reached last month, was similarly confidential. In both cases, IA’s experts challenged publishers’ estimates of their losses as massively inflated.

For Internet Archive fans, a group that includes longtime Internet users, researchers, students, historians, lawyers, and the US government, the end of the lawsuits brought a sigh of relief. The Archive can continue—but it can’t run one of its major programs in the same way.

Keep reading

The Supreme Court Fight That Could Decide Who Gets to Stay Online

A lineup of civil liberties organizations, technology companies, and internet freedom advocates has filed briefs supporting Cox Communications in a closely watched Supreme Court case that could dramatically alter how internet service providers respond to copyright complaints.

The case, Cox Communications v. Sony Music Entertainment, asks whether ISPs must terminate internet service to users accused, but not convicted, of piracy.

The Supreme Court’s upcoming decision is one of the most consequential internet-related cases in years, not just for copyright law, but for the future of how people access the internet in the United States.

At stake is a fundamental question: can internet service providers be held liable and forced to cut off internet access to users simply because they’ve been accused of copyright infringement, without any judicial process or proof?

Those standing with Cox include the Electronic Frontier Foundation (EFF), American Civil Liberties Union (ACLU), Google, Amazon, Microsoft, Mozilla, Pinterest, and X.

Joshua Moon, Kiwi Farms founder and president of the U.S. Internet Preservation Society (USIPS), also filed a brief “in support of neither party” that nonetheless opposes internet cutoffs. He argues that contributory copyright infringement is a judge-made doctrine with no statutory basis, one that has become unworkable and unfair because courts have never clearly defined its “knowledge” standard.

According to the brief, this vagueness, combined with the rise of the DMCA’s notice-and-takedown regime, has chilled free speech and fair use, expanded copyright monopolies, and produced abusive practices.

Cox argues that the Fourth Circuit’s ruling, which held it liable for contributory infringement and ordered a new damages trial, created an untenable standard that would force ISPs to police user activity under threat of billion-dollar judgments.

In a joint amicus brief, the Electronic Frontier Foundation, American Library Association, and other groups sharply criticized the Fourth Circuit’s interpretation and urged the Court to reject its liability framework.

They argue that the ruling could lead to widespread loss of internet access based on unproven accusations, with disproportionate consequences for ordinary people.

The brief stresses the importance of online access in modern life, stating: “Internet access is essential to participation in economic, cultural, and social activity.”

The groups caution that adopting a liability standard based on mere knowledge, rather than intent or inducement, would cause serious collateral harm. The brief warns that imposing such rules on ISPs would inevitably result in users being disconnected because someone else in their household or workplace was accused of infringement.

They further argue that the Fourth Circuit’s decision improperly expands secondary copyright liability without congressional authorization, threatening to undermine constitutional protections tied to access, communication, and expression.

Even the US Department of Justice weighed in, siding with Cox on key legal questions. The DOJ told the Court that the Fourth Circuit’s approach “cannot be reconciled with this Court’s precedent” and warned against punishing providers who may simply be indifferent but not complicit.

In its brief, the DOJ stressed: “The evidence demonstrated at most that Cox was indifferent to its subscribers’ infringement, not that Cox intended to participate in that infringement or wished to bring it about.”

On the issue of willful infringement, the DOJ added that “willfulness in the civil context generally requires knowledge or reckless disregard of the fact that one’s own conduct is unlawful,” not just knowledge of someone else’s actions.

A coalition of major tech companies, Google, Amazon, Microsoft, Mozilla, and Pinterest, also submitted a unified brief opposing the lower court’s interpretation of contributory liability. They argue that the DMCA already outlines specific safe harbor rules, and the Fourth Circuit’s ruling improperly weaponizes the absence of safe harbor protection.

“The Fourth Circuit’s ruling erroneously turns Congress’s DMCA safe harbors into a liability-creating mechanism,” the companies stated.

They argued that liability should only attach to those engaged in “conscious, culpable conduct substantially assisting the primary wrongdoer.”

The brief makes clear that a finding of willfulness demands more than simple awareness: “Willfulness turns on the defendant’s mental state regarding its own conduct.”

USIPS criticized the legal foundation of the lower court’s ruling as illegitimate and warned that fear of liability is driving ISPs toward censorship.

Keep reading

Ninth Circuit Rules ISPs Can’t Be Forced to Unmask Users Under DMCA

The Ninth Circuit has ruled that internet service providers cannot be compelled to unmask users through subpoenas under the DMCA, reinforcing long-standing precedent that these subpoenas were never meant to apply to providers that simply offer access to the internet.

The decision blocks an increasingly common tactic copyright owners use to sidestep due process and extract user identities without judicial oversight.

We obtained a copy of the opinion for you here.

The case came about after a copyright owner targeted 29 Cox Communications subscribers accused of sharing the film Fall (2022) through BitTorrent. Rather than filing a lawsuit or seeking a judge’s permission, the copyright holder went straight to a court clerk to obtain a subpoena.

Although Cox was under no obligation to do so, the company notified the affected subscribers. Only one responded to object, triggering a legal dispute that has now resulted in a firm ruling from the Ninth Circuit.

The dispute turns on a simple but critical distinction: the DMCA outlines separate protections depending on the type of service provided.

Web hosts fall under Section 512(c) and can receive takedown notices for content they store, while internet access providers (IAPs) are covered under Section 512(a), which protects them from liability as long as they are acting as neutral conduits for internet traffic.

Keep reading

This Hollywood-Backed Bill Would Give Government Power To Block Websites

Lawmakers in Washington are once again attempting to give the United States a legal pathway to block websites, a power the federal government has never officially held on a broad scale.

The latest push comes in the form of the Block Bad Electronic Art and Recording Distributors Act, better known as “Block BEARD,” introduced in the Senate by Thom Tillis, Chris Coons, Marsha Blackburn, and Adam Schiff.

We obtained a copy of the bill for you here.

On its face, the bill targets foreign websites accused of piracy. But the mechanism it creates would establish something far more significant: a formal, court-approved process that could be used to make entire websites vanish from the American internet.

Under the proposal, copyright owners could go to federal court to have a site labeled a “foreign digital piracy site.” If successful, the court could then order US service providers to block access to that site.

The reach is broad. The term “service provider” here mirrors the broad definition in the DMCA, potentially covering everything from ISPs and search engines to social media platforms, and perhaps even VPNs.

Proponents say this is about protecting the entertainment industry. In reality, it’s about setting a precedent. Once the government has a tool to block certain sites, history shows the definition of “unacceptable” content can expand. Piracy today could easily become something else tomorrow.

The ramifications go beyond the music and movie business. If courts can order an ISP to make a site disappear from view, the same logic could eventually apply to other types of content deemed problematic.

And because the bill has no public transparency requirements, the public could be kept entirely in the dark about which sites are blocked, why they’re blocked, or how long the blocks remain in place.

Supporters in the entertainment industry, including the RIAA and Motion Picture Association, are openly cheering the bill, pointing to similar measures overseas they claim have worked without harming free speech.

But the US is not the same as other countries. The First Amendment’s protection of speech and access to information means this kind of censorship tool carries far more constitutional baggage here than it does elsewhere.

What Block BEARD really represents is a milestone. If passed, it would be the first time the US creates a standing legal process for cutting off access to entire websites at the network level.

Keep reading

Porn Studios File Copyright Lawsuit Against Meta Claiming Mass Download of XXX Movies to Train AI

Two major porn production companies have filed a copyright lawsuit against Mark Zuckerberg’s Meta, alleging unauthorized use of their videos to train AI models.

TorrentFreak reports that the adult film studios Strike 3 Holdings and Counterlife Media are taking aim at Meta with a copyright lawsuit. The companies, which produce popular adult brands like Vixen, Tushy, Blacked, and Deeper, claim that Meta illicitly downloaded at least 2,396 of their movies via BitTorrent since 2018 for the purpose of training its AI systems, including Meta Movie Gen and the LLaMA large language model.

Filed in a California federal court, the complaint alleges that Meta’s unauthorized use of the copyrighted adult films could ultimately result in AI models capable of creating similar “high-quality” porn content at a lower cost, potentially threatening the studios’ business. The plaintiffs argue that by training specifically on their works, “Meta’s AI Movie Gen may very well soon produce full length films with Plaintiffs’ identical style and quality, which other real world adult studios cannot replicate.”

The lawsuit also accuses Meta of not only downloading the copyrighted works without permission but also uploading them to third parties participating in the same BitTorrent swarms. This allegation is allegedly backed by data from the studios’ proprietary tracking software, VXN Scan. BitTorrent’s “tit for tat” algorithm rewards users for sharing content with others to increase download speeds, and the plaintiffs claim that Meta deliberately chose to continue sharing the pirated files to capitalize on faster downloads and infringe more content at a quicker pace.
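To see why continued seeding would speed up Meta’s alleged downloads, here is a minimal illustrative sketch of the tit-for-tat idea, not the actual BitTorrent client implementation: each peer preferentially uploads to (“unchokes”) the neighbors that are currently uploading to it the fastest, so a client that keeps sharing climbs its neighbors’ rankings and is rewarded with quicker downloads. Peer names, rates, and the slot count below are hypothetical.

```python
def select_unchoked(download_rates, slots=4):
    """Pick which peers to upload to: the `slots` peers currently
    giving us the best download rates (a simplified tit-for-tat)."""
    ranked = sorted(download_rates.items(), key=lambda kv: kv[1], reverse=True)
    return [peer for peer, _rate in ranked[:slots]]

# Hypothetical neighbors and their upload rates to us, in KB/s.
rates = {"peer_a": 120.0, "peer_b": 45.5, "peer_c": 300.2,
         "peer_d": 10.1, "peer_e": 88.7}

# The four fastest uploaders get our upload bandwidth in return.
print(select_unchoked(rates))  # → ['peer_c', 'peer_a', 'peer_e', 'peer_b']
```

Under this incentive structure, a downloader that refused to share would be choked by its peers and throttled; the plaintiffs’ claim is essentially that Meta kept uploading to stay on the favorable side of this exchange.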

Strike 3 and Counterlife Media discovered the alleged infringements after Meta’s BitTorrent activity was revealed in a separate lawsuit filed by book authors. In that case, Meta admitted to obtaining content from pirate sources. This revelation prompted the adult studios to search their archive of collected BitTorrent data for Meta-linked IP addresses, uncovering 47 addresses owned by the company that allegedly infringed their copyrights. The complaint provides a list of thousands of alleged infringements from these addresses as evidence. Strike 3 has filed many lawsuits in the past related to videos allegedly downloaded by BitTorrent pirates, leading one judge to label them as a “copyright troll.”

Keep reading

Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces

Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking until control over your image and representation is completely out of your hands.

To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens the legal ground to go after someone who uses their image without their consent.

This specifically covers deepfakes, those videos of a person’s face or body that have been digitally altered so they appear to be someone else.

The Scandinavian nation has put forth a proposal to amend its copyright laws so that everyone owns the rights to their own face, their own voice, and their body. Current laws aren’t quite up to snuff when it comes to protecting people from having their likenesses twisted and contorted.

Keep reading

5 Men Sentenced in America’s Largest Internet Piracy Case: DOJ

Final judgments for five men accused of operating “one of the largest illegal television streaming services” in the United States were issued on Monday, with the individuals having already been sentenced to probation and prison terms of up to 84 months, the Department of Justice (DOJ) said in a July 22 statement.

Defendants in the case operated Jetflicks, a subscription-based site that allowed users to stream or, at times, download copyrighted content, with the service never having secured permission from the copyright holders, according to the DOJ.

They used automation to scour sites hosting pirated content, downloaded it, processed and stored it, and eventually made it available “to tens of thousands of paid subscribers” across the United States, said the department. Episodes of popular shows were often made available for streaming or download a day after their original airing date.

“At one point, Jetflicks claimed to have 183,285 different television episodes, significantly more than Netflix, Hulu, Vudu, Amazon Prime, or any other licensed streaming service,” it said.

“This was the largest internet piracy case—as measured by the estimated total infringement amount and total number of infringements—ever to go to trial as well as the first illegal streaming case ever to go to trial.”

According to the DOJ, the defendants’ conduct harmed every major copyright owner of a television program in America, with owners losing millions of dollars.

Keep reading

Mark Zuckerberg’s Meta Notches Legal Win Against Authors in AI Copyright Case

Mark Zuckerberg’s Meta has prevailed in a copyright infringement lawsuit brought by authors who claimed the company violated their rights by using millions of copyrighted books to train its AI language model, Llama. Although the decision is a win for Meta and other AI giants, the judge stated the decision was more about the plaintiffs’ poor case than about Meta’s approach to AI training.

Bloomberg Law reports that a San Francisco federal court has ruled in favor of Mark Zuckerberg’s Meta in a lawsuit brought by a group of authors. The plaintiffs alleged that Meta had violated their copyrights by using millions of books to train its generative AI model, Llama, without obtaining permission.

Judge Vince Chhabria determined that Meta’s use of the copyrighted books for AI training falls under the fair use defense in copyright law. However, the judge cautioned that his opinion should not be interpreted as a blanket endorsement of Meta’s practices, stating that the ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

The judge’s decision appears to hinge on the authors’ failure to effectively argue their case, rather than a definitive legal interpretation of the fair use doctrine in the context of AI training. This suggests that future cases involving similar issues may yield different outcomes, depending on the strength of the arguments presented.

The lawsuit, which was closely watched by the tech industry and legal experts, is believed to be the first of its kind to challenge the use of copyrighted material for training AI models. As generative AI technologies continue to advance and become more prevalent, the question of how copyright law applies to the use of protected works in AI training is likely to remain a contentious issue.

A Meta spokesperson told Breitbart News, “We appreciate today’s decision from the Court. Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”

Keep reading