Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, as well as AI deployed in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules will affect providers of “general-purpose AI models with systemic risk,” such as OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.


Trial begins as Meta investors, Zuckerberg square off over alleged privacy violations

An $8 billion trial by Meta Platforms (META.O) shareholders against Mark Zuckerberg and other current and former company leaders kicked off on Wednesday over claims they illegally harvested the data of Facebook users in violation of a 2012 agreement with the U.S. Federal Trade Commission.

The trial started with a privacy expert for the plaintiffs, Neil Richards of Washington University Law School, who testified about Facebook’s data policies.

“Facebook’s privacy disclosures were misleading,” he told the court.

Jeffrey Zients, White House chief of staff under President Joe Biden and a Meta (META.O) director for two years starting in May 2018, is expected to take the stand later on Wednesday in the non-jury trial before Kathaleen McCormick, chancellor of Delaware’s Court of Chancery.

The case will feature testimony from Zuckerberg and other billionaire defendants, including former Chief Operating Officer Sheryl Sandberg, venture capitalist and board member Marc Andreessen, and former board members Peter Thiel, co-founder of Palantir Technologies (PLTR.O), and Reed Hastings, co-founder of Netflix (NFLX.O).

A lawyer for the defendants, who have denied the allegations, declined to comment.


Judge: Yardley Officials Illegally Deleted Criticism on Facebook

A federal judge has determined that leaders of Yardley Borough, Bucks County, Pennsylvania, unlawfully silenced a resident when they deleted his comment from the local government’s Facebook page.


The controversy centers on Earl Markey, a corporate trainer and active member of the local Republican committee.

In October 2022, Markey posted a comment on the Yardley Boro Facebook page urging voters to back a referendum that would have trimmed the borough council from seven members to five.

His comment was sharply critical of a sitting councilman.

Markey wrote, “Appointed Councilman Matt Curtin wants to raise property taxes by two mills. Stop unelected, out of touch investment bankers, like Matt Curtin, from volunteering our hard-earned money for higher taxes. Vote YES on the referendum to reduce the size of the Yardley Borough Council.”

Not long after, the comment disappeared.

The borough’s manager, Paula Johnson, labeled the post a personal attack. Council President Caroline Thompson approved its removal.

Markey saw this as a clear act of censorship and took legal action, filing a lawsuit against Thompson, Johnson, and the borough. He also named two other officials who were eventually removed from the case.

“For me that crossed a line,” Markey said. He described the deletion as “censorship by public officials.”

Although borough leaders tried to defuse the matter by letting Markey repost his comment, reimbursing his legal filing fee, and drafting a revised social media policy, he pressed forward with the lawsuit.


Supreme Court Declines to Hear Children’s Health Defense Case Alleging Meta-Government Collusion to Censor Vaccine Speech

The Supreme Court has turned down a petition from Children’s Health Defense (CHD), leaving unexamined the group’s challenge to what it describes as collusion between Meta and federal agencies aimed at suppressing constitutionally protected speech.

The case, which dates back to 2020, centered on CHD’s allegations that Facebook, under government influence, targeted and silenced its views on vaccines and COVID-19.

CHD filed its initial complaint against Meta in August 2020 and later amended it, arguing that government officials worked alongside the tech giant to stifle dissenting opinions that the public had every right to hear.

The removal of CHD’s Facebook and Instagram accounts in 2022 only fueled concerns about the growing threat of coordinated censorship in digital spaces.

Those accounts, once a major source of alternative information for millions, remain banned.

In a decision last year, the 9th Circuit Court of Appeals sided with Meta, holding that the company’s actions reflected private policy choices rather than government compulsion.

CHD hoped the Supreme Court would take up the case and examine whether that distinction truly shields Big Tech from accountability when it suppresses speech at the behest of public officials. But the justices declined, offering no explanation, as is typical for the vast majority of the thousands of petitions they review annually.


Mark Zuckerberg’s Meta Notches Legal Win Against Authors in AI Copyright Case

Mark Zuckerberg’s Meta has prevailed in a copyright infringement lawsuit brought by authors who claimed the company violated their rights by using millions of copyrighted books to train its AI language model, Llama. Although the outcome is a win for Meta and other AI giants, the judge stated the ruling was more about the plaintiffs’ poor case than about Meta’s approach to AI training.

Bloomberg Law reports that a San Francisco federal court has ruled in favor of Mark Zuckerberg’s Meta in a lawsuit brought by a group of authors. The plaintiffs alleged that Meta had violated their copyrights by using millions of books to train its generative AI model, Llama, without obtaining permission.

Judge Vince Chhabria determined that Meta’s use of the copyrighted books for AI training falls under the fair use defense in copyright law. However, the judge cautioned that his opinion should not be interpreted as a blanket endorsement of Meta’s practices, stating that the ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

The judge’s decision appears to hinge on the authors’ failure to effectively argue their case, rather than a definitive legal interpretation of the fair use doctrine in the context of AI training. This suggests that future cases involving similar issues may yield different outcomes, depending on the strength of the arguments presented.

The lawsuit, which was closely watched by the tech industry and legal experts, is believed to be the first of its kind to challenge the use of copyrighted material for training AI models. As generative AI technologies continue to advance and become more prevalent, the question of how copyright law applies to the use of protected works in AI training is likely to remain a contentious issue.

A Meta spokesperson told Breitbart News, “We appreciate today’s decision from the Court. Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”


Executives from Meta, Palantir, OpenAI Join Army Innovation Corps Dubbed ‘Detachment 201’

Top executives from Silicon Valley giants Palantir, Meta, and OpenAI are trading their corporate suits for military uniforms as they join a newly formed Army Reserve unit focused on upgrading military technology.

The Wall Street Journal reports that in an unprecedented collaboration between the U.S. military and the tech industry, a group of Silicon Valley executives are set to be sworn in as officers in the Army Reserve on Friday. The inaugural cohort of this new innovation corps, dubbed Detachment 201, includes Shyam Sankar, CTO of Palantir; Andrew “Boz” Bosworth, CTO of Meta Platforms; Kevin Weil, Chief Product Officer at OpenAI; and Bob McGrew, a former OpenAI executive.

The tech recruits will serve around 120 hours a year, focusing on projects that leverage their expertise in AI, data analysis, and other cutting-edge technologies to enhance the Army’s capabilities. Their mission is to help the military prepare for future conflicts, which are expected to heavily rely on ground robots, drones, sensor networks, and AI-powered coordination systems.

Gen. Randy George, the Army’s chief of staff, emphasized the importance of this collaboration, stating, “We need to go faster, and that’s exactly what we are doing here.” The program marks a significant shift in the relationship between the Pentagon and the tech industry, which was once hesitant to work on defense-related projects.

The tech reservists will enjoy some unique accommodations, such as the flexibility to work remotely and asynchronously, and will be exempt from basic training. They will hold the rank of lieutenant colonel due to their private-sector status and will be deployed based on their specialized skills, making it unlikely for them to find themselves in combat situations.

Instead, the executives will work on projects that teach soldiers how to use AI-powered systems or utilize health data to improve fitness. They will also advise the service on acquiring more commercial technology and help the Defense Department recruit other high-tech talent. To avoid conflicts of interest, the recruits will not work on projects involving their employers and will be firewalled from sharing information or participating in projects that could provide financial gain to themselves or their companies.


No, Mark Zuckerberg, AI ‘Friends’ Are Not Good for Mental Health

Think you could use a few more friends? Meta CEO Mark Zuckerberg says AI will do the trick. In a recent interview with podcaster Dwarkesh Patel, the Silicon Valley titan said the average American has fewer than three friends but a desire to have “something like fifteen.”

Zuckerberg thinks computer code will fill the gap: “The average person wants more connectivity, connection than they have,” he observed. “As the personalization loop kicks in, and the AI just starts to get to know you better, I think that will be just really compelling.” 

It’s interesting advice from a guy who heads up two of the largest platforms on the planet for bringing people together.

It’s also an admission from Zuckerberg that chatting with real people isn’t cutting it anymore.

His solution? More technology, not less. Meta has made billions of dollars monetizing our attention. Why not monetize our loneliness, too?  

Turns out it’s a bad time to tell us to make AI friends when we’re already struggling to navigate our digital lives. In 2023, US Surgeon General Vivek Murthy warned of an epidemic of loneliness and isolation.

“One in two adults in America is living with measurable levels of loneliness,” Murthy reported, “but the numbers are even higher among young people.” He pointed to social media and online interactions as a driving factor. 

And we’re not just lonely. Rates of depression and anxiety are on the rise, too, again particularly in our youth.

According to Centers for Disease Control and Prevention data published last month, the prevalence of depression among people age 12 and older jumped by roughly 60% in a decade, from 8.2% in 2013–2014 to 13.1% in 2021–2023.

Of course, Zuckerberg knew his products were negatively impacting young people years ago.

In 2021, The Wall Street Journal revealed that Facebook, which owns Instagram, had internal evidence showing Instagram use is linked with poorer mental health, particularly among young women.

Facebook buried its findings and failed to address the problem.

Zuckerberg doesn’t seem to understand that the struggle is real for millions of Americans who are finding it anything but easy to manage their well-being around constant online stimulation: “People are smart. They know what’s valuable in their lives,” Zuckerberg told Patel. “I think people have a good sense of what they want.”


Meta Is Accused of Flouting Privacy Rules With AI Training Data

Meta’s attempt to restart AI training using Europeans’ public social media activity has drawn renewed resistance, as the privacy rights organization noyb threatens fresh legal action. The group has formally challenged Meta’s latest move to mine user data, asserting the tech giant is sidestepping EU privacy obligations and advancing without regulatory clearance.

Following a halt in June 2024 prompted by regulatory concerns, Meta announced in April it would resume training its language models. This time, it intends to use public posts and user interactions, including with Meta AI, from adults across the European Union and European Economic Area.

The initial pause came after mounting pressure from the Irish Data Protection Commission and a wave of complaints submitted to authorities in various member states. According to Meta, a December opinion from the European Data Protection Board signaled that its approach satisfied legal standards.


Joe Rogan Guest Reveals Facebook’s Secret Experiment That Manipulated 700,000 Users Without Warning

Joe Rogan sat down with Harvard professor and mind control expert Rebecca Lemov, and it didn’t take long for the conversation to dive into one of his favorite topics: government interference in our digital lives.

Rogan opened the conversation by saying, “There are so many different kinds of mind control.”

“One of the things we’ve talked about a lot on this podcast is, that an enormous percentage of what you’re seeing on social media in terms of interactions and debate is not real. It’s not organic,” he explained.

“It’s state-run and state-funded, and it’s whether it’s foreign governments or our government or even corporations, you’re getting inorganic discourse that’s designed to form a narrative and which is a form of mind control,” he added.

Lemov picked up on that point and took it further. Even when people know something is fake, she explained, our brains still react as if it were real.

“Yeah. I mean, I think even on a basic level, people, it’s known and studies have shown that we respond as if it were organic and real,” she said.

“Even when somebody likes a post of yours, the response is the same as, like, in-person interaction,” she added.

It’s not just governments pulling the strings, she warned. The platforms themselves are designed to influence how we feel.

“I think at the root, there is a kind of way that, on an emotional level, it’s not just manipulation of ideas,” she said, “but there’s a kind of emotional engineering that’s built into the platforms and doesn’t even demand, you know, at first, government involvement.”


Meta’s Oversight Board Condemns Free Speech Reforms, Pushes for Continued Censorship Over “Human Rights” Concerns

Meta’s decision to abandon its controversial “fact-checking” program, welcomed by opponents of the multi-year Big Tech-government censorship collusion in the US, is drawing criticism from others.

Among them is Meta’s Oversight Board, which is also unhappy that the company is allowing more freedom to users of its platforms when it comes to discussing issues like gender identity and immigration.

Announcing the changes earlier this year, CEO Mark Zuckerberg admitted that the “moderation” system in place until that point had produced “too many mistakes and too much censorship.”

Content was flagged, downranked, or removed – and users penalized – if either censorship algorithms or third-party “fact-checkers” decided it contained “misinformation” or “hate speech,” and the criteria for these decisions were heavily biased in favor of the former administration’s agendas.

But looking back at the policy shift announced on January 7, the Board expressed its concern that Meta went about this “hastily, in a departure from the regular procedure, with no public information shared as to what, if any, prior human rights due diligence the company performed.”

The Board decided to put the emphasis on “human rights” rather than free speech, in particular the handling of topics related to LGB and transgender issues.

Meta’s policy now allows users to make allegations of mental illness or abnormality “when based on gender or sexual orientation.” The company essentially explains this as a way to allow the discourse about transgenderism and homosexuality that is already happening in society, in political and religious contexts.

But the Board thinks this may result in human rights violations and wants Meta to investigate whether that is happening, and to “update it” on the findings twice a year.

Just how restrictive the rules around these issues have been is illustrated in the two cases the Board was considering – attempts to ban videos expressing views about the participation of transgender persons in sports and their “access” to bathrooms.
