Musk found guilty of misleading investors

Tesla and SpaceX CEO Elon Musk has been found guilty in California of misleading investors during his $44 billion purchase of Twitter in 2022.

The class-action lawsuit, which had been filed shortly before Musk took control of the social media platform that he subsequently re-branded as X, focused on two tweets and comments made by the tech billionaire during a podcast in May 2022. Following those statements, including a post claiming that the Twitter deal was “temporarily on hold,” the company’s shares plunged by almost 10% in a single session.

The nine-member jury in San Francisco delivered its verdict on Friday, finding that the tweets misled shareholders, who sold Twitter shares at a lower price as a result of his announcements.

However, it also found that there was nothing wrong with what Musk said on the podcast and that he did not intentionally “scheme” to mislead the investors.

Keep reading

‘Roblox’ Programmer Faces 40 Child Porn Charges

Jamie Borne, a 30-year-old probationer, was arrested in New Orleans after probation officers discovered a child-sized sex doll and electronic devices suspected of containing child sexual abuse material during a routine visit to his home. Borne, who identified himself as a programmer for the Roblox gaming platform, was serving probation for a 2023 conviction involving smoke grenades and a firearm.

Louisiana Attorney General Liz Murrill confirmed that ICE Homeland Security Investigations (HSI) and the Louisiana State Police were involved in the arrest. During the visit on February 26, probation officers reportedly found the child-sized sex doll in Borne’s bedroom, along with child clothing, condoms, and electronic devices believed to contain material related to children under 13 years old.

Borne was booked on 40 counts of possession of child sexual abuse material and one count of possession, trafficking, or importing a child sex doll. He is being held on a bond of $50,000 per count. Louisiana state law stipulates severe penalties for these offenses, including hard labor sentences of up to 20 years without parole for possession of child sexual abuse material and additional penalties for trafficking or importing child sex dolls.

Roblox, the gaming platform where Borne worked, has faced scrutiny in recent years for its lack of age restrictions and alleged facilitation of predatory behavior. According to Louisiana Attorney General Murrill, Roblox allows access to millions of games, including some that allegedly feature explicit content. Murrill has filed a lawsuit against Roblox, citing its failure to implement proper safeguards for minors.

The platform reportedly has 151.5 million daily users, with a significant portion being minors. Twenty percent of the user base is under nine years old, and another 20 percent falls between the ages of nine and twelve. Critics argue that this demographic makeup makes it especially urgent for Roblox to address safety concerns.

Keep reading

EU Admits X’s Open Data Skews Disinformation Findings While Fining Platform for Restricting Researchers

The EU’s own diplomatic service has published a report admitting that X makes its data more accessible to researchers than other major platforms, and then used that admission to brand X the primary channel of “foreign information manipulation and interference” against the bloc.

The European External Action Service (EEAS) put this in writing. The media ran with the conclusion and buried the caveat.

The fourth annual FIMI Threats report, released this month, found that “88% of instances were concentrated on the platform X. The presence of CIB networks, the ease of creation of fabricated accounts, but also more straightforward access to data, explains this concentration. Most of the major social media platforms restrict access to data that would allow for assessing the magnitude of information manipulation activities.”

Keep reading

UK Regulator Ofcom Has Fined 4chan £520,000 Under a Law That Doesn’t Apply in the US

Ofcom has now fined 4chan £520,000 ($691,572) under the Online Safety Act. The platform hasn’t paid a penny and isn’t intending to. Its lawyer replied to the latest demand with a picture of a hamster.

That’s the state of UK online speech regulation in 2026: a regulator issuing fines to American websites, receiving rodent-themed correspondence in return, and collecting almost nothing.

The breakdown: £450,000 for failing to put age verification in place, £50,000 for failing to assess the risk of illegal material being published, and £20,000 for failing to set out in its terms of service how it protects users from criminal content. Ofcom says 4chan must comply by April 2 or face daily penalties on top.

But this confrontation and push for 4chan to start checking IDs didn’t start with a £520,000 fine. It started with an email sent across the Atlantic to a company that owes the UK government nothing.

4chan is an American platform. It’s registered in Delaware. Its servers are in the United States. It has no employees in Britain, no offices in Britain, no legal registration in Britain, and no business presence of any kind in Britain. It is, in every meaningful sense, none of Ofcom’s business.

And what good would the First Amendment be if it could be overridden by foreign demands?

When the Online Safety Act came into full force, Ofcom declared that any site with “links to the UK” had duties to protect UK users, regardless of where in the world it was based.

That phrase, “links to the UK,” is intentionally vague, allowing British authorities to demand compliance from virtually any website. Under that logic, any American platform that a British person can visit is subject to UK speech law. No presence required. No UK operations required. Ofcom thinks it has jurisdiction over planet Earth.

In April 2025, Ofcom sent a “legally binding information notice” to 4chan’s corporate services company, by email, demanding compliance with the Online Safety Act and threatening that failure could “constitute a criminal offence” resulting in a fine of £18 million or 10% of 4chan’s worldwide turnover, arrest, and imprisonment for up to two years.

The notice was sent to a company not authorized to accept service on 4chan’s behalf. No UK court had issued it. No treaty process had been followed. It was, legally speaking, a strongly worded email.

Preston Byrne, the attorney representing 4chan, described the regulator’s actions as “an illegal campaign of harassment” directed at American tech firms, and made clear his client would not comply: “4chan has broken no laws in the United States, my client will not pay any penalty.”

By June 2025, Ofcom had opened a formal investigation.

Byrne’s reply was characteristically direct: “Increasing the size of a censorship fine does not cure its legal invalidity in the United States.” He continued: “After an entire year of your agency’s spectacular failure to get the memo, my only suggestion is that you take a first-year course on U.S. constitutional law.”

In August 2025, 4chan and Kiwi Farms took the fight to the US federal courts. The lawsuit, filed in the US District Court for the District of Columbia, argues that the Online Safety Act is not only an unlawful extraterritorial power grab but a direct attack on foundational American liberties. The complaint states: “Where Americans are concerned, the Online Safety Act purports to legislate the Constitution out of existence.”

The platforms argue that Ofcom’s demands, including written “risk assessments,” content moderation systems, removal of speech deemed “illegal” by UK standards, and user identity verification, would require violating the First Amendment and Section 230 of the Communications Decency Act. Byrne told reporters: “American citizens do not surrender our constitutional rights just because Ofcom sends us an email.”

Keep reading

Social Media Panic Lands Joseph Gordon-Levitt a U.N. Gig

Joseph Gordon-Levitt has a new gig, but it’s not in Hollywood. On Tuesday, the actor was appointed as the United Nations’ (U.N.) first global advocate for human-centric digital governance. 

In this role, Gordon-Levitt will “strengthen public understanding of how digital technologies shape everyday life, rights and opportunities,” according to a U.N. press release. In other words, he will be one of the U.N.’s chief advocates for regulating social media platforms.

In a video explaining his jargon-filled title, Gordon-Levitt warned that social media is causing an “epidemic of mental health issues and loneliness,” and a “rise in polarization and extremism and authoritarianism.” He said “governments need to get in the game” and curb these “damaging side effects” from social media. 

This is not the first time Gordon-Levitt has advocated for crackdowns on online platforms. In February, Gordon-Levitt traveled to Capitol Hill, where he urged senators to pass the Sunset Section 230 Act. The bill, introduced by Sens. Lindsey Graham (R–S.C.) and Dick Durbin (D–Ill.), would repeal Section 230—the federal law that limits platforms’ liability for third party speech—two years after the date of enactment. 

The “first step” in combatting the negative influence of Big Tech is to “sunset Section 230,” he said. “I want to see this thing pass 100 to zero. There should be nobody voting to give any more impunity to these tech companies, nobody.”

After receiving backlash for these comments, including from journalist Taylor Lorenz, Gordon-Levitt clarified that he didn’t want to completely scrap Section 230; he only wanted to reform it. 

During his speech on Capitol Hill, Gordon-Levitt invoked his authority as a concerned father of three to push for more online safety regulations. But emotional pleas do not always make for good policy. In fact, protecting children online has motivated more than a dozen bills in the House alone, many of which would infringe on free speech and privacy. 

One of these bills, the Reducing Exploitative Social Media Exposure for Teens (RESET) Act, would ban anyone under the age of 16 from creating or maintaining social media accounts. Another, the App Store Accountability Act, would require age verification for access to app stores and parental consent for users under 18. Most notably, the controversial Kids Online Safety Act (KOSA) would require online platforms to enforce policies and procedures to “address” various “harms to minors.” Reason’s Elizabeth Nolan Brown notes that KOSA would compel platforms to “censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.”

What proponents of these bills often fail to recognize is the many benefits that social media can offer kids. In a 2022 Pew Research Center poll of teenagers, just 9 percent said that social media had a mostly negative effect on their lives. Citing the upsides of friendships and connections, 32 percent said social media had a mostly positive effect on them. Another study found that disconnection was a greater threat to adolescents’ self-esteem than heavy social media use, challenging the narrative that social media causes isolation.

Keep reading

Tehran Residents’ Social Media Offer Nightmarish Glimpse of Humanitarian Crisis

Residents of Iran’s sprawling capital, Tehran, woke up earlier this month to scenes they could have imagined only in horror films. Clouds of smoke shrouded the metropolis of 10 million, toxic rain mixed with oil poured from the sky, and the sun remained invisible past noon on March 8. Hours earlier, Israel had launched airstrikes on 30 oil facilities in Tehran and nearby regions, causing explosions that killed six people in the city of Karaj.

On social media, people in Iran who managed to bypass the ongoing internet blackout posted images of their cars being cloaked by thick layers of black toxicants that were the residues of an acid rain precipitated by the strikes the night before. Scientists have raised the alarm that long-term exposure to the compounds contained in this “black rain” may potentially increase the risk of cancer, cardiovascular conditions, and cognitive impairment among the inhabitants of Tehran.

When the United States and Israel launched their unprovoked and incoherent war on Iran on February 28, international observers decried the attack, questioning its legal basis in the absence of authorization by the UN Security Council and the U.S. Congress. Even some U.S. allies such as Italy’s Prime Minister Giorgia Meloni have criticized the war as being illegal.

Now, as the civilian death toll increases after more than two weeks of bombing, and as the destruction promised by U.S. Secretary of Defense Pete Hegseth materializes across Iranian cities, more Iranians are reflecting on why their lives unraveled and why the international community has failed them despite earlier declarations of support from influential U.S. and European politicians during the nationwide protests that rocked the country in December and January.

Keep reading

Brazil Launches Mandatory Age Verification Law for Online Platforms

Brazil’s Digital ECA (Estatuto da Criança e do Adolescente Digital) took effect today, March 17, requiring nearly every tech product accessible to children to clear a long list of compliance obligations.
Apps, operating systems, app stores, video games, social networks: all potentially covered, all facing fines of up to 50 million Brazilian reais (roughly US$9.44 million) or 10% of their Brazilian revenue for non-compliance.

As always, the framing is child protection. The infrastructure being built is a national age verification system woven into the fabric of internet access.

“Brazil has stepped forward as the first country in Latin America to pass a dedicated law to protect children’s online privacy and safety,” goes the official line.

Every major technology platform operating in Brazil must now determine how old its users are and restrict what they can see accordingly. The checkbox that said “I am over 18” is explicitly banned.

Article 37’s sole paragraph states that regulations “may not, under any circumstances, impose, authorize, or result in the implementation of mechanisms of massive, generic, or indiscriminate surveillance.”

Then Article 9 bans self-reported age. Article 12 demands “auditable” verification. The law prohibits the only mechanism that would make the law work.

Auditable, non-self-declaration age verification requires collecting something real about you.

The law permits a range of methods: government ID, biometric face scanning, behavioral pattern analysis that watches how you type and what you click, age inference from activity data, and educational history.

Every single one of these collects sensitive personal information and creates a record. There is no method on the approved list that doesn’t involve building exactly the kind of identity infrastructure Article 37 claims to forbid. The legislators either didn’t notice the contradiction or they noticed and didn’t care.

The obligation falls on platforms, not directly on every individual user. But the effect is the same. Platforms that want to comply need to verify who you are and how old you are before showing you restricted content. If you want to see it, you provide the data. If you don’t provide the data, you don’t get access.

Keep reading

Russia’s Rumored Telegram Block Appears Underway As Outage Reports Surge

Reports are flooding in from across Russia that Telegram is suddenly going dark, fueling speculation that the Kremlin may already be testing a nationwide block ahead of a rumored planned crackdown next month.

“Over the last 24 hours, Telegram has effectively stopped working through some providers if you are using Russian IP addresses,” tech sector observer Vladislav Voytenko told Kommersant FM on Monday. “As for using Telegram via mobile internet, you can basically forget about it,” he added.

Russia’s Main Radio Frequency Center, an arm of media watchdog Roskomnadzor, said a surge of complaints began appearing over the weekend, with at least one-third coming from Moscow, followed by St. Petersburg and other cities spread across the country’s vast 11 time zones.

Regional media has tracked user reports on outage monitors such as Downdetector and Sboi.rf, which show complaints spiking sharply over the weekend as the app began failing across multiple regions.

Some Russian users have described the platform as barely functioning “in any form.” They complain that the app won’t open, messages won’t send, and photos and videos won’t load.

Keep reading

Transgender Reddit Moderator Deletes Account After Being Exposed As Registered Child Sex Offender

Reddit users are expressing outrage after learning that a moderator involved in three of the largest trans-focused subreddits is a child sex offender, and may have received protection from another transgender moderator. Branden Michael Dunleavy, also known as Brynn or u/ranshin-da-anarchist, has deleted his Reddit account since the revelations.

Information about Dunleavy’s identity first began circulating on Reddit on Saturday after user u/Living-East-8486 posted a warning on his account claiming that u/Cedarwolf, the top moderator at r/MTF, “attempted to keep a convicted child predator on the r/MTF moderator team.”

In his post, u/Living-East-8486 described a dispute that had emerged on r/MTF involving u/Cedarwolf, hinting that the two had a falling out that ultimately prompted u/Living-East-8486 to come forward and clarify the allegations.

“The primary thing I’ve been pondering is how on earth I can actually bring this to light without making it seem like I’m just trying to get revenge. To be honest, I don’t think there is any foolproof way for me to do that. Frankly, I’m furious over this situation,” u/Living-East-8486 wrote. “This moderator’s primary reasoning for excluding me from the community was to ‘protect the children’ from my weird ass kink posts (which were always marked 18+). Then I find out that they are openly defending a child predator being on the mod team.”

u/Living-East-8486 included a number of screenshots from what appeared to be private Discord conversations in late 2025.

In one message, top r/MTF moderator u/Cedarwolf, under the handle Cedarpaw, expresses a reluctance to remove Dunleavy from the team because “the next few years are going to be very hard for trans folks,” and instead suggests Dunleavy change his username to better hide his identity.

u/Cedarwolf suggests Dunleavy can establish a fresh Reddit reputation under the “new” username by “spending a week” on other subreddits.

Keep reading

Florida Gives Tech Platforms Deadline for Age ID Checks

Florida’s attorney general has handed tech companies an ultimatum: build identity verification systems into your platforms by April 8, or his office starts filing lawsuits.

The deadline comes as a federal appeals court hears arguments this week on whether the state can legally force millions of users to prove who they are before accessing social media.

The law driving this, HB 3, bans anyone under 14 from social media entirely and requires parental consent for 14- and 15-year-olds. It also forces adult content sites to verify visitors are 18 or older.

Attorney General James Uthmeier gave tech companies 30 days to implement age restrictions and 60 days to deploy parental consent mechanisms. “It is the law of the land,” he said at an Orlando event on March 9. Non-compliance means litigation.

What Florida is actually mandating is a digital ID checkpoint at the entrance to the internet. The law doesn’t specify which verification methods qualify as “reasonable.” It doesn’t cap how long platforms can retain identity documents. It doesn’t limit what platforms can do with the surveillance infrastructure once it’s built. Florida gets the policy win.

Users hand over their documents. The data sits in corporate systems indefinitely, available for breaches, subpoenas, and purposes nobody has disclosed yet.

Uthmeier even named TikTok and Discord specifically. Discord’s attempt to introduce digital ID age verification has been met with considerable backlash, especially after a leak of over 70,000 government IDs. Uthmeier appears unconcerned.

NetChoice, co-plaintiff in the legal challenge, named this directly: the law creates a security risk by “mandating the surrender of sensitive information.” That’s the part Florida’s child-protection framing is designed to obscure. Every minor blocked from TikTok requires millions of adults to first prove they aren’t minors. The verification burden falls on everyone.

Keep reading