Judge: Yardley Officials Illegally Deleted Criticism on Facebook

A federal judge has determined that leaders of Yardley Borough, Bucks County, Pennsylvania, unlawfully silenced a resident when they deleted his comment from the local government’s Facebook page.

We obtained a copy of the judge’s order.

The controversy centers on Earl Markey, a corporate trainer and active member of the local Republican committee.

In October 2022, Markey posted a comment on the Yardley Boro Facebook page urging voters to back a referendum that would have trimmed the borough council from seven members to five.

His comment was sharply critical of a sitting councilman.

Markey wrote, “Appointed Councilman Matt Curtin wants to raise property taxes by two mills. Stop unelected, out of touch investment bankers, like Matt Curtin, from volunteering our hard-earned money for higher taxes. Vote YES on the referendum to reduce the size of the Yardley Borough Council.”

Not long after, the comment disappeared.

The borough’s manager, Paula Johnson, labeled the post a personal attack. Council President Caroline Thompson approved its removal.

Markey saw this as a clear act of censorship and took legal action, filing a lawsuit against Thompson, Johnson, and the borough. He also named two other officials who were eventually removed from the case.

“For me that crossed a line,” Markey said. He described the deletion as “censorship by public officials.”

Although borough leaders tried to defuse the matter by letting Markey repost his comment, reimbursing his legal filing fee, and drafting a revised social media policy, Markey pressed forward with the lawsuit.


Supreme Court Declines to Hear Children’s Health Defense Case Alleging Meta-Government Collusion to Censor Vaccine Speech

The Supreme Court has turned down a petition from Children’s Health Defense (CHD), leaving unexamined the group’s challenge to what it describes as collusion between Meta and federal agencies aimed at suppressing constitutionally protected speech.

The case, which dates back to 2020, centered on CHD’s allegations that Facebook, under government influence, targeted and silenced its views on vaccines and COVID-19.

CHD filed its initial complaint against Meta in August 2020 and later amended it, arguing that government officials worked alongside the tech giant to stifle dissenting opinions that the public had every right to hear.

The removal of CHD’s Facebook and Instagram accounts in 2022 only fueled concerns about the growing threat of coordinated censorship in digital spaces.

Those accounts, once a major source of alternative information for millions, remain banned.

In a decision last year, the 9th Circuit Court of Appeals sided with Meta, holding that the company’s actions reflected private policy choices rather than government compulsion.

CHD hoped the Supreme Court would take up the case and examine whether that distinction truly shields Big Tech from accountability when it suppresses speech at the behest of public officials. But the justices declined, offering no explanation, as is typical in most of the thousands of petitions they review annually.


Mark Zuckerberg’s Meta Notches Legal Win Against Authors in AI Copyright Case

Mark Zuckerberg’s Meta has prevailed in a copyright infringement lawsuit brought by authors who claimed the company violated their rights by using millions of copyrighted books to train its AI language model, Llama. Although the ruling is a win for Meta and other AI giants, the judge stressed that it reflected the weakness of the plaintiffs’ case more than an endorsement of Meta’s approach to AI training.

Bloomberg Law reports that a San Francisco federal court has ruled in favor of Mark Zuckerberg’s Meta in a lawsuit brought by a group of authors. The plaintiffs alleged that Meta had violated their copyrights by using millions of books to train its generative AI model, Llama, without obtaining permission.

Judge Vince Chhabria determined that Meta’s use of the copyrighted books for AI training falls under the fair use defense in copyright law. However, the judge cautioned that his opinion should not be interpreted as a blanket endorsement of Meta’s practices, stating that the ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

The judge’s decision appears to hinge on the authors’ failure to effectively argue their case, rather than a definitive legal interpretation of the fair use doctrine in the context of AI training. This suggests that future cases involving similar issues may yield different outcomes, depending on the strength of the arguments presented.

The lawsuit, which was closely watched by the tech industry and legal experts, is believed to be the first of its kind to challenge the use of copyrighted material for training AI models. As generative AI technologies continue to advance and become more prevalent, the question of how copyright law applies to the use of protected works in AI training is likely to remain a contentious issue.

A Meta spokesperson told Breitbart News, “We appreciate today’s decision from the Court. Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”


Executives from Meta, Palantir, OpenAI Join Army Innovation Corps Dubbed ‘Detachment 201’

Top executives from Silicon Valley giants Palantir, Meta, and OpenAI are trading their corporate suits for military uniforms as they join a newly formed Army Reserve unit focused on upgrading military technology.

The Wall Street Journal reports that in an unprecedented collaboration between the U.S. military and the tech industry, a group of Silicon Valley executives are set to be sworn in as officers in the Army Reserve on Friday. The inaugural cohort of this new innovation corps, dubbed Detachment 201, includes Shyam Sankar, CTO of Palantir; Andrew “Boz” Bosworth, CTO of Meta Platforms; Kevin Weil, Chief Product Officer at OpenAI; and Bob McGrew, a former OpenAI executive.

The tech recruits will serve around 120 hours a year, focusing on projects that leverage their expertise in AI, data analysis, and other cutting-edge technologies to enhance the Army’s capabilities. Their mission is to help the military prepare for future conflicts, which are expected to heavily rely on ground robots, drones, sensor networks, and AI-powered coordination systems.

Gen. Randy George, the Army’s chief of staff, emphasized the importance of this collaboration, stating, “We need to go faster, and that’s exactly what we are doing here.” The program marks a significant shift in the relationship between the Pentagon and the tech industry, which was once hesitant to work on defense-related projects.

The tech reservists will enjoy some unique accommodations, such as the flexibility to work remotely and asynchronously, and will be exempt from basic training. They will hold the rank of lieutenant colonel due to their private-sector status and will be deployed based on their specialized skills, making it unlikely for them to find themselves in combat situations.

Instead, the executives will work on projects that teach soldiers how to use AI-powered systems or utilize health data to improve fitness. They will also advise the service on acquiring more commercial technology and help the Defense Department recruit other high-tech talent. To avoid conflicts of interest, the recruits will not work on projects involving their employers and will be firewalled from sharing information or participating in projects that could provide financial gain to themselves or their companies.


No, Mark Zuckerberg, AI ‘Friends’ Are Not Good for Mental Health

Think you could use a few more friends? Meta CEO Mark Zuckerberg says AI will do the trick. In a recent interview with podcaster Dwarkesh Patel, the Silicon Valley titan said the average American has fewer than three friends but a desire to have “something like fifteen.”

Zuckerberg thinks computer code will fill the gap: “The average person wants more connectivity, connection than they have,” he observed. “As the personalization loop kicks in, and the AI just starts to get to know you better, I think that will be just really compelling.” 

It’s interesting advice from a guy who heads up two of the largest platforms on the planet for bringing people together.

It’s also an admission from Zuckerberg that chatting with real people isn’t cutting it anymore.

His solution? More technology, not less. Meta has made billions of dollars monetizing our attention. Why not monetize our loneliness, too?  

Turns out it’s a bad time to tell us to make AI friends when we’re already struggling to navigate our digital lives. In 2023, US Surgeon General Vivek Murthy warned of an epidemic of loneliness and isolation.

“One in two adults in America is living with measurable levels of loneliness,” Murthy reported, “but the numbers are even higher among young people.” He pointed to social media and online interactions as a driving factor. 

And we’re not just lonely. Rates of depression and anxiety are on the rise, too, again particularly in our youth.

According to Centers for Disease Control and Prevention data published last month, the prevalence of depression in people age 12 and older has risen by roughly 60 percent in a decade, jumping from 8.2% in 2013–2014 to 13.1% in 2021–2023.

Of course, Zuckerberg knew his products were negatively impacting young people years ago.

In 2021, The Wall Street Journal revealed that Facebook, which owns Instagram, had internal evidence showing Instagram use is linked with poorer mental health, particularly among young women.

Facebook buried its findings and failed to address the problem.

Zuckerberg doesn’t seem to understand that the struggle is real for millions of Americans who are finding it anything but easy to manage their well-being around constant online stimulation: “People are smart. They know what’s valuable in their lives,” Zuckerberg told Patel. “I think people have a good sense of what they want.”


Meta Is Accused of Flouting Privacy Rules With AI Training Data

Meta’s attempt to restart AI training using Europeans’ public social media activity has drawn renewed resistance, as the privacy rights organization noyb threatens fresh legal action. The group has formally challenged Meta’s latest move to mine user data, asserting the tech giant is sidestepping EU privacy obligations and advancing without regulatory clearance.

Following a halt in June 2024 prompted by regulatory concerns, Meta announced in April it would resume training its language models. This time, it intends to use public posts and user interactions, including with Meta AI, from adults across the European Union and European Economic Area.

The initial pause came after mounting pressure from the Irish Data Protection Commission and a wave of complaints submitted to authorities in various member states. According to Meta, a December opinion from the European Data Protection Board signaled that its approach satisfied legal standards.


Joe Rogan Guest Reveals Facebook’s Secret Experiment That Manipulated 700,000 Users Without Warning

Joe Rogan sat down with Harvard professor and mind control expert Rebecca Lemov, and it didn’t take long for the conversation to dive into one of his favorite topics: government interference in our digital lives.

Rogan opened the conversation by saying, “There are so many different kinds of mind control.”

“One of the things we’ve talked about a lot on this podcast is, that an enormous percentage of what you’re seeing on social media in terms of interactions and debate is not real. It’s not organic,” he explained.

“It’s state-run and state-funded, and it’s whether it’s foreign governments or our government or even corporations, you’re getting inorganic discourse that’s designed to form a narrative and which is a form of mind control,” he added.

Lemov picked up on that point and took it further. Even when people know something is fake, she explained, our brains still react as if it were real.

“Yeah. I mean, I think even on a basic level, people, it’s known and studies have shown that we respond as if it were organic and real,” she said.

“Even when somebody likes a post of yours, the response is the same as, like, in-person interaction,” she added.

It’s not just governments pulling the strings, she warned. The platforms themselves are designed to influence how we feel.

“I think at the root, there is a kind of way that, on an emotional level, it’s not just manipulation of ideas,” she said, “but there’s a kind of emotional engineering that’s built into the platforms and doesn’t even demand, you know, at first, government involvement.”


Meta’s Oversight Board Condemns Free Speech Reforms, Pushes for Continued Censorship Over “Human Rights” Concerns

Meta’s decision to abandon its controversial “fact-checking” program, welcomed by opponents of the multi-year Big Tech-government censorship collusion in the US, is drawing criticism from other quarters.

Among them is Meta’s Oversight Board, which is also unhappy that the company is allowing more freedom to users of its platforms when it comes to discussing issues like gender identity and immigration.

Announcing the changes earlier this year, CEO Mark Zuckerberg admitted that the “moderation” system in place until that point had produced “too many mistakes and too much censorship.”

Under that system, content was flagged, downranked, or removed, and users penalized along with it, whenever censorship algorithms or third-party “fact-checkers” decided it contained “misinformation” or “hate speech,” criteria that were heavily biased in favor of the former administration’s agenda.

But looking back at the policy shift announced on January 7, the Board expressed its concern that Meta went about this “hastily, in a departure from the regular procedure, with no public information shared as to what, if any, prior human rights due diligence the company performed.”

The Board decided to put the emphasis on “human rights” rather than free speech, in particular the handling of topics related to LGB and transgender issues.

Meta’s policy now allows users to make allegations about mental illness or abnormality “when based on gender or sexual orientation.” Meta explains this as a way to permit on its platforms the discourse about transgenderism and homosexuality that is already happening in society, in political and religious contexts.

But the Board thinks this may result in human rights violations and wants Meta to investigate whether that is happening, and to “update it” on the findings twice a year.

Just how restrictive the rules around these issues have been is illustrated in the two cases the Board was considering – attempts to ban videos expressing views about the participation of transgender persons in sports and their “access” to bathrooms.


Senate Probes Meta Over Alleged Censorship Tools and Data Sharing Ties with Chinese Communist Party, Whistleblower Testifies

The US Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism held a hearing on Wednesday about Meta’s alleged work to develop custom censorship tools for China’s Communist Party (CCP) and to share user data with China, allegations that Meta denies.

Whistleblower and former Facebook Director of Global Public Policy Sarah Wynn-Williams, who left the company in 2017, presented her testimony during the hearing, dubbed “A Time for Truth: Oversight of Meta’s Foreign Relations and Representations to the United States Congress.”

Senator Josh Hawley, who chaired the hearing, presented internal Facebook documents, previously shared with Congress by Wynn-Williams, that appeared to corroborate the whistleblower’s claims.

Wynn-Williams accused Meta executives of “repeatedly” undermining US national security and betraying American values as they allegedly set out to build “an $18 billion business in China” and work directly with the CCP, including censoring a Chinese dissident.

According to her testimony, Meta executives lied to employees, shareholders, Congress, and the American public about the giant’s dealings with China, dealings she dubbed “illegal and dangerous” and which date back to 2015.

One of Wynn-Williams’ allegations is that Facebook’s “moderation” tools for the CCP allowed those using them to censor access to content in entire regions, or on particular dates, such as the anniversary of the Tiananmen Square massacre.

Wynn-Williams also claimed that Meta was willing to allow access to user data, including that of Americans, as it built a physical pipeline between the US and China, which the latter country could have used to intercept information.

“The only reason China does not currently have access to US user data through this pipeline is because Congress stepped in,” she told the committee.


Sen. Hawley Considers Criminal Referral for Zuckerberg — Demands Testimony Under Oath After Whistleblower Exposes Meta’s Dangerous Collusion with Communist China

The globalist empire of Big Tech is being dragged into the light, and U.S. Senator Josh Hawley (R-MO) is leading the charge.

In a scathing letter sent Thursday, Hawley invited Meta CEO Mark Zuckerberg to testify before the Senate Judiciary Subcommittee on Crime and Counterterrorism, following explosive testimony from a former Meta executive turned whistleblower.

Sarah Wynn-Williams, who served as Facebook’s Director of Global Public Policy from 2011 to 2017, delivered a bombshell under oath: Meta didn’t just sell out American users — it surrendered to the Chinese Communist Party.

In her riveting testimony, Wynn-Williams exposed Meta’s secret project—code-named “Project Aldrin”—an initiative that allegedly handed China’s Communist regime access to sensitive U.S. artificial intelligence technologies.

Her most alarming claim? That Meta executives deliberately briefed Chinese officials on cutting-edge AI to give Beijing the upper hand over American companies.

“These briefings focused on critical emerging technologies, including artificial intelligence – explicit goal being to help China outcompete American companies,” said Wynn-Williams.

She warned that Meta’s LLaMA AI model is already being used by China in military applications and stated that the only reason the CCP doesn’t currently have unrestricted access to U.S. user data is because Congress intervened.

This is no longer just about privacy—it’s about national survival in the face of an aggressive foreign adversary, aided and abetted by America’s own Big Tech titans.
