Spain JAILS Seven Citizens For Calling Migrants ‘SCUM’ On Facebook

Spain’s Supreme Court has upheld prison sentences for seven individuals over Facebook comments criticizing unaccompanied foreign minors in the border enclave of Melilla, marking a chilling escalation in the far-left government’s war on free speech amid skyrocketing migrant-related crime.

The ruling, which imposes terms ranging from eight months to one year and ten months, stems from posts that prosecutors deemed to promote hostility toward the group of mostly North African migrants.

Charges were initially dropped, but an appeal led to convictions under Spain’s hate crime laws.

This case exemplifies the inverted priorities under Prime Minister Pedro Sánchez’s Socialist-led government, which has faced mounting criticism for prioritizing mass migration over native safety and free expression.

Just months ago, Alex Soros heaped praise on Sánchez for granting amnesty to up to 500,000 illegal migrants via royal decree, bypassing parliament entirely. Soros called it “real leadership,” urging more nations to follow suit in flooding their borders.

Keep reading

Meta’s Zuckerberg denies at LA trial that Instagram targets kids

Meta Platforms chief executive Mark Zuckerberg on Feb 18 repeatedly said during a landmark trial over youth social media addiction that the Facebook and Instagram operator does not allow kids under 13 on its platforms, despite being confronted with evidence suggesting they were a key demographic.

Mr Mark Lanier, a lawyer for a woman suing Instagram and Google’s YouTube for harming her mental health when she was a child, pressed Mr Zuckerberg over his statement to Congress in 2024 that users under 13 are not allowed on the platform.

Mr Lanier confronted Mr Zuckerberg with internal Meta documents.

The case involves a California woman who started using Instagram and YouTube as a child. She alleges the companies sought to profit by hooking kids on their services despite knowing social media could harm their mental health.

She alleges the apps fuelled her depression and suicidal thoughts and is seeking to hold the companies liable.

Meta and Google have denied the allegations and pointed to the safety features they have added to their platforms.

“If we want to win big with teens, we must bring them in as tweens,” read one internal Instagram presentation from 2018.

Keep reading

Meta Granted Patent That Allows Facebook Users to Post After They Die

A patent granted to Meta in December gives an inside look at how Facebook users may still be able to post from beyond the grave.

According to the patent, the large language model will be “simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

The AI model would essentially replicate the deceased person’s digital presence by analyzing data such as their writing style, tone, and expressed beliefs, allowing it to continue posting content and even commenting on friends’ posts in their voice.

It gets stranger.

The patent also references technology that would allow users to have phone calls, or even video calls, with the deceased.

Per Business Insider:

In the patent, Meta lays out why it thinks people might need this.

If you’re no longer posting online — whether that’s because you need a break from social media or … you … die — your followers’ user experience will be affected. In short, they’ll miss you.

“The impact on the users is much more severe and permanent if that user is deceased and can never return to the social networking platform,” the document says.

To fill that void, Meta would essentially create a digital clone of your social media presence, training a model on “user-specific” data — including historical platform activity, such as comments, likes, or content — to understand how you would (or rather, did) behave.

The patent was filed by Meta's CTO, Andrew Bosworth.

Despite being granted the patent, Meta has claimed it has no plans to implement the technology.

In recent years, the idea of using AI to bring someone back from the dead digitally has gained some momentum.

In 2025, the family of Christopher Pelkey, who was killed in a road rage incident in Arizona, used an AI-generated video of Pelkey as an impact statement in the courtroom.

Keep reading

Meta Considers Timed Face Recognition Launch to Exploit Distracted Society

Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.

It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.

The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.

Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.

Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.

Meta ran this experiment before and lost.

The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.

In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.

Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. More than $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.

Keep reading

Meta, TikTok, YouTube Face Trial Over Youth Addiction Claims

Three of the world’s biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta’s Instagram, ByteDance’s TikTok and Google’s YouTube — deliberately addict and harm children.

Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.

The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.

At the core of the case is a 19-year-old plaintiff identified only by the initials “KGM,” whose claims could determine how thousands of similar lawsuits against social media companies will play out.

She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits.

This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

Keep reading

A Man Bought Meta’s AI Glasses, And Ended Up Wandering The Desert Searching For Aliens To Abduct Him

At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”

It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.

“All the kids were out of the house, and it was like, ‘oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,” Daniel recounted, sounding melancholy. “It was a wonderful time.”

That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses, the AI-infused eyeglasses that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. Daniel says the glasses opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in him making dangerous journeys into the desert to await alien visitors and believing he was tasked with ushering forth a “new dawn” for humanity.

And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

“I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

In many ways, Daniel was Meta’s target customer. He was an experienced tech worker and AI enthusiast who had worked on machine learning projects in the past and had purchased the Meta glasses because he was intrigued by their AI features.

“I used Meta [AI] because they were integrated with these glasses,” said Daniel. “And I could wear glasses — which I wore all the time — and then I could speak to AI whenever I wanted to. I could talk to my ear.”

Today, however, as he continues to recover from his mental health breakdown, Daniel describes himself as a “shell” of who he “used to be.”

“My kids don’t talk to me because I got weird. They don’t know how to talk to me,” said the father of four. “I was a cook… I played the guitar. I love music. I love learning.”

But now, he says, he’s “just trying to survive day to day.”

According to Daniel and multiple family members, the 52-year-old had no history of mania or psychosis before encountering Meta AI. He’d struggled with alcoholism, but quit drinking in early 2023, months before he purchased the Meta smart glasses.

Keep reading

Meta Chose Revenue Over Policing Chinese Scam Ads, Documents Show

Meta knowingly tolerated large volumes of fraudulent advertising from China to protect billions of dollars in revenue, a Reuters investigation revealed this week. Internal documents show executives prioritized minimizing “revenue impact” over fully cracking down on scams, illegal gambling, pornography and other banned ads.

Although Meta platforms are blocked inside China, Chinese companies are allowed to advertise to users abroad, according to Reuters. That business grew rapidly, reaching more than $18 billion in revenue in 2024—about 11% of Meta’s global sales. Internal estimates showed roughly 19% of that revenue, more than $3 billion, came from prohibited or fraudulent ads.

Meta documents reviewed by Reuters describe China as the company’s top “Scam Exporting Nation,” responsible for roughly a quarter of scam ads worldwide. Victims ranged from U.S. and Canadian investors to consumers in Taiwan. An internal presentation warned, “We need to make significant investment to reduce growing harm.”

In 2024, Meta briefly did just that. A dedicated China-focused anti-fraud team cut problematic ads roughly in half, from 19% to 9% of China-related revenue. But after what one document described as an “Integrity Strategy pivot and follow-up from Zuck,” the team was asked to pause its work. Meta later disbanded the unit, lifted restrictions on new Chinese ad agencies, and shelved additional anti-scam measures.

Keep reading

Australia: Meta begins deactivating accounts ahead of under-16 social media age limit

Meta has begun removing social media accounts belonging to Australian children under 16 from its platforms: Instagram, Facebook and Threads.

The tech giant has started notifying users aged 13 to 15 that their accounts will be deactivated on December 4th. Starting December 10th, social media companies will face fines of up to A$49.5 million (US$33 million) if they fail to take steps to prevent children under 16 from holding accounts.

Australia’s eSafety Commissioner will send major platforms notices on December 11th demanding statistics on exactly how many accounts were removed from their sites. Additionally, monthly notices are planned for 2026.

An estimated 150,000 Facebook accounts and 325,000 Instagram accounts will be terminated.

“The government recognizes that age assurance may require several days or weeks to complete fairly and accurately,” Communications Minister Anika Wells said.

“However, if eSafety identifies systemic breaches of the law, the platforms will face fines,” she added.

Google sent out a notice on Wednesday stating that anyone in Australia under 16 would be signed out of YouTube on December 10th and would lose features available only to account holders, such as playlists.

Google states it determines YouTube users’ ages “based on personal data contained in associated Google accounts and other signals.”

“We have consistently said this rushed legislation misunderstands our platform, the way young Australians use it and, most importantly, it does not fulfill its promise to make kids safer online,” a Google statement said.

Users over 16 whose accounts are wrongly deactivated can verify their age with a government-issued ID or a video selfie, per Meta.

Platforms such as X and Reddit have contacted underage users, suggesting they download their posted pictures and freeze their accounts until they come of age.

The Australian government claims the ban will protect children from the harms of social media. However, critics say this decision may isolate certain groups who depend on the platforms for connection and push children to other, potentially more harmful corners of the internet.

Keep reading

Meta Pushes Canada for App Store Age Verification ID Laws

Meta is working to convince the Canadian government to introduce new laws that would make age verification mandatory at the app store level. The company has been lobbying Ottawa for months and says it has received positive feedback from officials drafting online safety legislation.

To support its push, Meta paid Counsel Public Affairs to poll Canadians on what kinds of digital safety measures they want for teens.

The poll found that 83 percent of parents favor requiring app stores to confirm users’ ages before app downloads.

Meta highlighted those results, saying “the Counsel data clearly indicates that parents are seeking consistent, age-appropriate standards that better protect teens and support parents online. And the most effective way to understand this is by obtaining parental approval and verifying age on the app store.”

Rachel Curran, Meta Canada’s director of public policy, described the idea as “by far the most effective, privacy-protective, efficient way to determine a user’s age.”

That phrase may sound privacy-conscious, but in practice, the plan would consolidate control over personal data inside a small circle of corporations such as Meta, Apple, and Google, while forcing users to identify themselves to access basic online services.

Google has criticized Meta’s proposal, calling it an attempt to avoid direct responsibility. “Time and time again, all over the world, you’ve seen them push forward proposals that would have app stores change their practices and do something new without any change by Meta,” said Kareem Ghanem, Google’s senior director of government affairs.

Behind these corporate disputes lies a much bigger question: should anyone be required to verify their identity in order to use the internet?

Keep reading

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

Keep reading