Senators Call For Probe Into Meta After News Report On AI Conversations With Children

Two Republican senators on Aug. 14 called for a congressional investigation into Meta Platforms, Facebook’s parent company, after a recent news media report revealed an internal policy document that allowed the company’s chatbots to have “romantic or sensual” conversations with a child.

On Thursday, Reuters reported that it had viewed a Meta policy document detailing policies on chatbot behavior that permitted the technology to “engage a child in conversations that are romantic or sensual,” generate incorrect medical information, and assist users in arguing that black people are “dumber than white people.”

While Meta confirmed the authenticity of the document, the company said that after recently receiving questions from Reuters, it removed the portions stating that the chatbot is allowed to flirt or participate in romantic roleplay with children.

Andy Stone, a spokesperson for Meta, said the company is currently revising the documents and that those types of conversations with children should never have been permitted.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”

On the X platform, Sen. Josh Hawley (R-Mo.) criticized the company for allegedly only making the changes after being questioned by Reuters.

“So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children,’” Hawley said. “This is grounds for an immediate congressional investigation.”

A spokesperson for Sen. Marsha Blackburn (R-Tenn.) said she supports a probe into Meta.

After Hawley called for an investigation, a Meta spokesperson reiterated the company’s previous statement. However, the spokesperson declined to comment on Hawley’s remarks.

Blackburn said the report underscores the need to pass reforms for better protection of children online, such as the Kids Online Safety Act. The senator co-sponsored the bill, which passed in the Senate last year by a bipartisan 91–3 vote but failed in the House.

The bill called for certain platforms, such as social media networks, to “take reasonable measures in the design and operation of products or services used by minors to prevent and mitigate certain harms that may arise from that use (e.g., sexual exploitation and online bullying).”

“Additionally, covered platforms must provide (1) minors with certain safeguards, such as settings that restrict access to minors’ personal data; and (2) parents or guardians with tools to supervise minors’ use of a platform, such as control of privacy and account settings,” the bill states.


Meta’s flirty AI chatbot invited a retiree to New York. He never made it home.

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

A representative for Jenner declined to comment.


Meta Found Guilty Of Eavesdropping On Period-Tracker App Users: Jury

A San Francisco jury found on Friday that Meta had eavesdropped on users of a popular period-tracking app, Flo.

The lawsuit was filed in 2021 by eight women against Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The app asked users about their sex lives, mental health, and diets before guiding them through menstruation and pregnancy. The women, who based their claims on a 2019 Wall Street Journal story and a 2021 FTC investigation, alleged that Flo then shared some of that data with the tech giants, SFGATE reports.

Google, Flo, and analytics company Flurry all settled with the plaintiffs; Meta, however, fought through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of “Custom App Events” — such as a user clicking a particular button in the “wanting to get pregnant” section of the app.

Their complaint also pointed to Facebook’s terms for its business tools, which said the company used so-called “event data” to personalize ads and content.

In a 2022 filing, the tech giant admitted that Flo used Facebook’s kit during this period and that the app sent data connected to “App Events.” But Meta denied receiving intimate information about users’ health. -SFGate

The jury didn’t buy Meta’s argument, unanimously finding that Flo’s users had a reasonable expectation that they weren’t being overheard or recorded, and that Meta did not have consent to eavesdrop or record. The company was found to have violated California’s Invasion of Privacy Act.

According to a June filing about the case’s class-action status, over 3.7 million women in the United States registered for Flo between November 2016 and February 2019 – with potential claimants expected to be notified via email and on a case website. 


Porn Studios File Copyright Lawsuit Against Meta Claiming Mass Download of XXX Movies to Train AI

Two major porn production companies have filed a copyright lawsuit against Mark Zuckerberg’s Meta, alleging unauthorized use of their videos to train AI models.

TorrentFreak reports that the adult film studios Strike 3 Holdings and Counterlife Media are taking aim at Meta with a copyright lawsuit. The companies, which produce popular adult brands like Vixen, Tushy, Blacked, and Deeper, claim that Meta illicitly downloaded at least 2,396 of their movies via BitTorrent since 2018 for the purpose of training its AI systems, including Meta Movie Gen and its LLaMA large language model.

Filed in a California federal court, the complaint alleges that Meta’s unauthorized use of the copyrighted adult films could ultimately result in AI models capable of creating similar “high-quality” porn content at a lower cost, potentially threatening the studios’ business. The plaintiffs argue that by training specifically on their works, “Meta’s AI Movie Gen may very well soon produce full length films with Plaintiffs’ identical style and quality, which other real world adult studios cannot replicate.”

The lawsuit also accuses Meta of not only downloading the copyrighted works without permission but also uploading them to third parties participating in the same BitTorrent swarms. This allegation is allegedly backed by data from the studios’ proprietary tracking software, VXN Scan. BitTorrent’s “tit for tat” algorithm rewards users for sharing content with others to increase download speeds, and the plaintiffs claim that Meta deliberately chose to continue sharing the pirated files to capitalize on faster downloads and infringe more content at a quicker pace.

Strike 3 and Counterlife Media discovered the alleged infringements after Meta’s BitTorrent activity was revealed in a separate lawsuit filed by book authors. In that case, Meta admitted to obtaining content from pirate sources. This revelation prompted the adult studios to search their archive of collected BitTorrent data for Meta-linked IP addresses, uncovering 47 addresses owned by the company that allegedly infringed their copyrights. The complaint provides a list of thousands of alleged infringements from these addresses as evidence. Strike 3 has filed many lawsuits in the past related to videos allegedly downloaded by BitTorrent pirates, leading one judge to label them as a “copyright troll.”


Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, as well as applications in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

The EU also published guidelines on Friday for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of “general-purpose AI models with systemic risk,” like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.


Trial begins as Meta investors, Zuckerberg square off over alleged privacy violations

An $8 billion trial by Meta Platforms (META.O) shareholders against Mark Zuckerberg and other current and former company leaders kicked off on Wednesday over claims they illegally harvested the data of Facebook users in violation of a 2012 agreement with the U.S. Federal Trade Commission.

The trial started with a privacy expert for the plaintiffs, Neil Richards of Washington University Law School, who testified about Facebook’s data policies.

“Facebook’s privacy disclosures were misleading,” he told the court.

Jeffrey Zients, White House chief of staff under President Joe Biden and a Meta (META.O) director for two years starting in May 2018, is expected to take the stand later on Wednesday in the non-jury trial before Kathaleen McCormick, chief judge of the Delaware Chancery Court.

The case will feature testimony from Zuckerberg and other billionaire defendants including former Chief Operating Officer Sheryl Sandberg, venture capitalist and board member Marc Andreessen as well as former board members Peter Thiel, Palantir Technologies (PLTR.O) co-founder, and Reed Hastings, co-founder of Netflix (NFLX.O).

A lawyer for the defendants, who have denied the allegations, declined to comment.


Judge: Yardley Officials Illegally Deleted Criticism on Facebook

A federal judge has determined that leaders of Yardley Borough, Bucks County, Pennsylvania, unlawfully silenced a resident when they deleted his comment from the local government’s Facebook page.

We obtained a copy of the order for you here.

The controversy centers on Earl Markey, a corporate trainer and active member of the local Republican committee.

In October 2022, Markey posted a comment on the Yardley Boro Facebook page urging voters to back a referendum that would have trimmed the borough council from seven members to five.

His comment was sharply critical of a sitting councilman.

Markey wrote, “Appointed Councilman Matt Curtin wants to raise property taxes by two mills. Stop unelected, out of touch investment bankers, like Matt Curtin, from volunteering our hard-earned money for higher taxes. Vote YES on the referendum to reduce the size of the Yardley Borough Council.”

Not long after, the comment disappeared.

The borough’s manager, Paula Johnson, labeled the post a personal attack. Council President Caroline Thompson approved its removal.

Markey saw this as a clear act of censorship and took legal action, filing a lawsuit against Thompson, Johnson, and the borough. He also named two other officials who were eventually removed from the case.

“For me that crossed a line,” Markey said. He described the deletion as “censorship by public officials.”

Although borough leaders tried to defuse the matter by letting Markey repost his comment, reimbursing his legal filing fee, and drafting a revised social media policy, Markey pressed forward with the lawsuit.


Supreme Court Declines to Hear Children’s Health Defense Case Alleging Meta-Government Collusion to Censor Vaccine Speech

The Supreme Court has turned down a petition from Children’s Health Defense (CHD), leaving unexamined the group’s challenge to what it describes as collusion between Meta and federal agencies aimed at suppressing constitutionally protected speech.

The case, which dates back to 2020, centered on CHD’s allegations that Facebook, under government influence, targeted and silenced its views on vaccines and COVID-19.

CHD filed its initial complaint against Meta in August 2020 and later amended it, arguing that government officials worked alongside the tech giant to stifle dissenting opinions that the public had every right to hear.

The removal of CHD’s Facebook and Instagram accounts in 2022 only fueled concerns about the growing threat of coordinated censorship in digital spaces.

Those accounts, once a major source of alternative information for millions, remain banned.

In a decision last year, the 9th Circuit Court of Appeals sided with Meta, holding that the company’s actions reflected private policy choices rather than government compulsion.

CHD hoped the Supreme Court would take up the case and examine whether that distinction truly shields Big Tech from accountability when it suppresses speech at the behest of public officials. But the justices declined, offering no explanation, as is typical in most of the thousands of petitions they review annually.


Mark Zuckerberg’s Meta Notches Legal Win Against Authors in AI Copyright Case

Mark Zuckerberg’s Meta has prevailed in a copyright infringement lawsuit brought by authors who claimed the company violated their rights by using millions of copyrighted books to train its AI language model, Llama. Although the decision is a win for Meta and other AI giants, the judge indicated that it turned more on the plaintiffs’ weak case than on the merits of Meta’s approach to AI training.

Bloomberg Law reports that a San Francisco federal court has ruled in favor of Mark Zuckerberg’s Meta in a lawsuit brought by a group of authors. The plaintiffs alleged that Meta had violated their copyrights by using millions of books to train its generative AI model, Llama, without obtaining permission.

Judge Vince Chhabria determined that Meta’s use of the copyrighted books for AI training falls under the fair use defense in copyright law. However, the judge cautioned that his opinion should not be interpreted as a blanket endorsement of Meta’s practices, stating that the ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

The judge’s decision appears to hinge on the authors’ failure to effectively argue their case, rather than a definitive legal interpretation of the fair use doctrine in the context of AI training. This suggests that future cases involving similar issues may yield different outcomes, depending on the strength of the arguments presented.

The lawsuit, which was closely watched by the tech industry and legal experts, is believed to be the first of its kind to challenge the use of copyrighted material for training AI models. As generative AI technologies continue to advance and become more prevalent, the question of how copyright law applies to the use of protected works in AI training is likely to remain a contentious issue.

A Meta spokesperson told Breitbart News, “We appreciate today’s decision from the Court. Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”


Executives from Meta, Palantir, OpenAI Join Army Innovation Corps Dubbed ‘Detachment 201’

Top executives from Silicon Valley giants Palantir, Meta, and OpenAI are trading their corporate suits for military uniforms as they join a newly formed Army Reserve unit focused on upgrading military technology.

The Wall Street Journal reports that in an unprecedented collaboration between the U.S. military and the tech industry, a group of Silicon Valley executives are set to be sworn in as officers in the Army Reserve on Friday. The inaugural cohort of this new innovation corps, dubbed Detachment 201, includes Shyam Sankar, CTO of Palantir; Andrew “Boz” Bosworth, CTO of Meta Platforms; Kevin Weil, Chief Product Officer at OpenAI; and Bob McGrew, a former OpenAI executive.

The tech recruits will serve around 120 hours a year, focusing on projects that leverage their expertise in AI, data analysis, and other cutting-edge technologies to enhance the Army’s capabilities. Their mission is to help the military prepare for future conflicts, which are expected to heavily rely on ground robots, drones, sensor networks, and AI-powered coordination systems.

Gen. Randy George, the Army’s chief of staff, emphasized the importance of this collaboration, stating, “We need to go faster, and that’s exactly what we are doing here.” The program marks a significant shift in the relationship between the Pentagon and the tech industry, which was once hesitant to work on defense-related projects.

The tech reservists will enjoy some unique accommodations, such as the flexibility to work remotely and asynchronously, and will be exempt from basic training. They will hold the rank of lieutenant colonel due to their private-sector status and will be deployed based on their specialized skills, making it unlikely for them to find themselves in combat situations.

Instead, the executives will work on projects that teach soldiers how to use AI-powered systems or utilize health data to improve fitness. They will also advise the service on acquiring more commercial technology and help the Defense Department recruit other high-tech talent. To avoid conflicts of interest, the recruits will not work on projects involving their employers and will be firewalled from sharing information or participating in projects that could provide financial gain to themselves or their companies.
