Texas Ban On Social Media For Under 18s Fails To Pass Senate

Legislation that would have banned anyone under the age of 18 from using or creating social media accounts in Texas stalled in the Senate this week after lawmakers failed to vote on it.

House Bill 186, filed by state Rep. Jared Patterson (R-Frisco), would have prohibited minors from creating accounts on social media sites such as Instagram, TikTok, Facebook, Snapchat, and others by requiring the platforms to verify users’ age.

The measure previously passed the GOP-controlled state House with broad bipartisan support in April, but momentum behind the bill slowed at the eleventh hour in the state Senate this week as lawmakers face a weekend deadline to send bills to Gov. Greg Abbott’s desk.

The legislative session ends on Monday.

In a statement on the social media platform X late Thursday, Patterson said the bill’s failure to pass in the Senate was “the biggest disappointment of my career,” adding that no other bill filed this session “would have protected more kids in more ways than this one.”

The Republican lawmaker said he believed its failure to pass meant “I’ve failed these kids and their families.”

“I felt the weight of an entire generation of kids who’ve had their mental health severely handicapped as a result of the harms of social media,” the lawmaker said. “And then there’s the others – the parents of Texas kids who’ve died as a result of a stupid social media ‘challenge’ or by suicide after being pulled down the dangerous rabbit holes social media uses to hook their users, addict them on their products, and drive them to depression, anxiety, and suicidal ideation.”

“Finally, there’s the perfectly happy and healthy teens in Texas today, who will find themselves slowly falling off the edge before the legislature meets again in 2027,” he stated.

Patterson suggested he would try to pass the measure again when the Texas Legislature meets in 2027.

House Bill 186 would have prohibited a child from entering into a contract with a social media platform to become an account holder and required platforms to verify that a person seeking to become an account holder is 18 years of age or older before allowing them to create an account.

The legislation would have also required social media platforms to delete accounts belonging to individuals under the age of 18 at a parent or guardian’s request.

Keep reading

Centralized AI Threatens A Democratic Digital Future

Major corporations control centralized AI, leaving decentralized AI (DeAI) in the dust. To build a more decentralized world, the sector must actively execute a focused DeAI strategy, with shared standards between projects, without compromise.

In April, a UN report warned that AI’s $4.8-trillion market is dominated by a mere 100 companies, most of them based in the US and China. Centralized AI incumbents have the money and the connections to control this massive new industry, with significant implications for society.

These companies, all employing centralized AI technology, have run into their fair share of headaches. For example, Microsoft’s Copilot garnered attention for creating explicit, inappropriate images, such as children in compromising scenarios. This sparked a public and regulatory backlash.

Although Microsoft created stricter moderation, it had already demonstrated that centralized AI can harbor problems in part due to its closed-source code. 

Citadel was wrapped up in an AI trading scandal in the financial sector, as algorithms allegedly manipulated stock prices via artificial volume creation.

Google’s Project Maven, a Pentagon pilot program used in military tech, has raised ethical questions.

“We believe that Google should not be in the business of war,” reads a letter penned by Google employees and addressed to Sundar Pichai, the company’s CEO. The employees requested that Google leave Project Maven.

“We ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter states.

So much for “Don’t be evil” — the company’s old slogan.

These situations give us clear examples of the potential failures of centralized AI, including ethical lapses, opaque decision-making and monopolistic control. DeAI’s open-source ethos, community governance, audit trails and distributed compute could give it an edge over a handful of massive corporations in the future of AI.

Keep reading

European Commission Launches €5.69M European Fact-Checking Funding Network to Advance “Democracy Shield” and Expand Censorship Infrastructure

The European Commission has launched a €5.69 million initiative presented as a fact-checking support program, but beneath the surface it reads as yet another calculated step toward institutionalizing censorship across the European Union.

This call for proposals is marketed as a tool to “protect democracy” and combat “disinformation,” but the structure, goals, and affiliations of the program point clearly to the opposite: a top-down, publicly funded apparatus for narrative enforcement.

Slated to run until September 2, 2025, the project is open not only to EU Member States but also to candidate countries like Ukraine and Moldova, jurisdictions framed as highly vulnerable to “foreign interference,” especially pro-Kremlin disinformation.

This strategic framing serves a dual purpose: justifying increased surveillance of content and securing narrative dominance in geopolitically sensitive areas.

The program’s core deliverables (protecting fact-checkers from so-called “harassment,” creating a centralized repository of “fact-checks,” and building emergency “response capacity”) sound benign to some. But stripped of the euphemism, this is a blueprint for constructing a continent-wide content control grid.

The “protection scheme” offers legal and cyber assistance to fact-checkers, but more crucially it reinforces the narrative that opposition to these groups constitutes abuse rather than legitimate disagreement.

The “fact-check repository” enables centralized curation of what counts as “truth,” and the “emergency response” function gives the Commission a pretext to fast-track suppression efforts in politically sensitive moments.

Most telling is the program’s requirement that participating organizations be certified by either the European Fact-Checking Standards Network (EFCSN) or the International Fact-Checking Network (IFCN).

Many of their members, such as AFP and Full Fact, already work directly with major social media platforms like Meta under third-party moderation schemes. This effectively means the EC is reinforcing an exclusive gatekeeper class, already aligned with corporate censorship programs, now endowed with taxpayer funds and the backing of the European bureaucracy.

At least 60% of the funding will go to third parties, who must co-finance their participation.

Keep reading

Texas Mandates Digital ID To Access App Store Apps

Texas has moved to the forefront of a national campaign to regulate children’s access to digital platforms by mandating that Apple and Google verify the ages of all users on their app stores.

Under a new law signed by Governor Greg Abbott, set to take effect January 1, 2026, those under 18 will be required to obtain parental consent before downloading apps or making in-app purchases. The measure has been pitched as a way to protect minors, but privacy advocates warn it could come at the expense of everyone’s digital freedom.

We obtained a copy of the bill for you here.

The requirement places app store operators in the role of gatekeepers, forcing them to gather and store sensitive personal information to determine user ages.

Opponents argue that such age checks do not just affect young users (explicit adult content apps are already banned from app stores); they also undermine anonymity online by tying a person’s digital presence to a verified real-world identity. That level of surveillance risks chilling free expression and stifling dissent by making it harder for people to speak or access information without fear of being identified.

Efforts to regulate youth access to apps and online services are gaining traction elsewhere as well. Utah enacted a similar policy earlier this year, and Congress is weighing federal legislation. Texas lawmakers are also advancing a separate bill that would prohibit users under 18 from accessing social media altogether.

While the law does state that app developers should delete the personal data provided by the app store provider, the wider problem is that users will have to trust that developers actually do so. App store providers such as Apple and Google will have to retain sensitive data on their users.

Supporters of the Texas law argue that app stores are uniquely positioned to serve as the central checkpoint for age validation. Meta, Snap, and X have praised the move.

Keep reading

New UN Treaty Allows for Virtual Child P*rn

A new UN treaty to combat cybercrimes would allow predators and tech giants to profit from the sexual exploitation of children, including through images created using AI.

These threats are emerging just as the United Nations launches a new treaty to address cybercrime, but the treaty addresses only some of the dangers of sexual exploitation.

While the new treaty calls for criminalizing non-consensual sharing of intimate pictures, it still allows for a broad swath of sexualized content involving children. For instance, while the treaty criminalizes what is newly called “child sexual abuse material,” this term refers narrowly to images of real children. The new term thus allows for child pornography created through artificial intelligence. As is now widely known, AI-generated images are shockingly realistic.

Such images, soon to be allowed by UN treaty, would still be in violation of U.S. federal law. Specifically, in several sections, the new UN treaty allows countries to de-criminalize virtual child pornography in all circumstances as well as private sexting by minors, even to adults.

The General Assembly adopted the treaty on December 24, 2024.  Now countries must sign and ratify it before it goes into force. A signing ceremony for the new treaty will take place at a Summit in July in Hanoi. The treaty will enter into force after forty countries ratify it.

Supporters of the treaty argue that legalizing sexting is compassionate because adolescents have a right to sexual expression. Some argue that letting pedophiles satisfy their sexual preferences with virtual material would make it less likely that they would prey on real children. And they say that dropping the term “child pornography” is necessary to avoid re-victimizing those who have been exploited. They call all this part of a “trauma-informed” and “harm-reduction” approach, based on new theories in behavioral therapy.

Regardless of the merits of such arguments, they would appear to conflict with the priority of law enforcement of preventing abusers from harming future victims. There is no evidence that such new approaches make law enforcement more effective. Until recently, U.S. Justice Department experts argued against it.

Moreover, there is evidence that allowing sexual predators to engage with virtual pornography leads to more child sexual abuse, not less. And anti-trafficking advocates are all too familiar with how underage girls are lured into pornography and eventually the sex industry through sexting.

A recent investigation by the Wall Street Journal uncovered how Meta chat bots pose a danger to children and how executives at the company deliberately allowed the chat bots to engage children sexually and to pose as children willing to engage in sexual acts. The investigation found that sexual predators and tech giants have a common interest in ensuring that children can be sexualized online.

Meta programmers were pushed by the Meta executive suite not to impose excessive limits on sexual content, including content involving children as users and objects, because of the high engagement and profits it generates. As a result, existing firewalls to protect children were ineffective by design. Chat bots lured children into sexual conversations that eventually led to progressively more explicit and degrading sexual content, and they also posed as children willing to entertain lewd and even violent sexual behavior from their adult and child users.

Keep reading

Is your favorite influencer’s opinion bought and sold?

Your addictive doomscrolling on X, TikTok or Instagram may also be the latest nexus for millions of dollars in secret political corruption.

Over the last month, the problem has come into sharp relief. Newly surfaced documents show that more than 500 social media creators were part of a covert electioneering effort by Democratic donors to shape the presidential election in favor of Kamala Harris. Payments went to party members with online followings but also to non-political influencers — people known for comedy posts, travel vlogs or cooking YouTubes — in exchange for “positive, specific pro-Kamala content” meant to create the appearance of a groundswell of support for the former vice president.

Meanwhile, a similar pay-to-post effort among conservative influencers publicly unraveled. The goal was to publish messages in opposition to Health and Human Services Secretary Robert F. Kennedy Jr.’s push to remove sugary soda beverages from eligible SNAP food stamp benefits. Influencers were allegedly offered money to denounce soda restrictions as “an overreach that unfairly targets consumer choice” and encouraged to post pictures of President Trump enjoying Coca-Cola products. After right-leaning reporter Nick Sortor pointed out the near-identical messages on several prominent accounts, posts came down and at least one of the influencers apologized: “That was dumb of me. Massive egg on my face. In all seriousness, it won’t happen again.”

In both schemes, on the left and the right, those creating the content made little to no effort to disclose that payments could be involved. For ordinary users stumbling on the posts and videos, what they saw would have seemed entirely organic.

In the influencers’ defense, they didn’t break any rules — because none exist.

Keep reading

ADL Regional Director Calls for Government-Regulated Online Censorship

The Anti-Defamation League’s David Goldenberg is demanding a broad overhaul of how speech is governed on the internet, calling for both government intervention and intensified corporate censorship. In a recent appearance, Goldenberg, who heads the ADL’s Midwest operations, expressed frustration over what he sees as declining efforts by tech firms to suppress online content he deems hateful.

Citing Meta’s rollback of its fact-checking team in the United States, he argued that platforms must be forced to take action. “You have a platform like Meta that just gutted its entire fact-checking department…And so what we need to do is we need to apply pressure in a real significant way on tech platforms that they have a responsibility, that they have an absolute responsibility to check and remove hateful speech that is inciteful.”

Goldenberg advocated not just for voluntary moderation, but for legislative and regulatory measures, both at the federal and state level, that would compel platforms to act as speech enforcers. He pointed to efforts in states like California as examples of where local governments are already testing such models.

His concern centers around what he perceives as an ecosystem of radicalization made easily accessible by today’s digital infrastructure. He warned that extremist ideologies no longer require obscure forums or dark web communities to spread. “It used to be you had to fight going into the deep dark web… Now… it’s easier and easier to be exposed in the mainstream,” he said.

Framing the online environment as a catalyst for violence, Goldenberg argued that free access to controversial viewpoints must be curtailed. He called for social media companies to take a stronger stance by excluding users whose views fall outside accepted boundaries, adding that regulation should enforce this responsibility.

He zeroed in on Section 230 of the Communications Decency Act, a critical piece of legislation that shields platforms from legal liability over user-posted content. “Congress needs to amend Section 230, which provides immunity to tech platforms right now for what happens,” Goldenberg said. He dismissed comparisons between modern platforms and telecommunications companies, referencing past remarks by Facebook’s Mark Zuckerberg about how phone providers were not liable for threats made over calls. Goldenberg’s view was blunt: “These tech platforms are not guaranteed under the Constitution. They’re just not.”

From his perspective, private companies should be free to “kick people off, to de-platform,” and if they fail to do so voluntarily, they must be pressured or regulated into compliance. He described accountability as a mechanism for shaping behavior, stating, “Accountability is a tool that can be incredibly effective in changing behavior.”

Keep reading

The Rise And Fall Of The West’s Propaganda Regime

Propaganda used to control Western democracies is running headlong into searchable facts freely available on the internet. As a result, the powerful cannot maintain their self-sustaining narratives and are behaving erratically and defensively, like bees when their nest is disturbed.

Opposition leaders in France, Romania, and Brazil have been barred from elections. They were accused of spreading “disinformation” and then prosecuted for unrelated crimes. In the United States, the same lawfare was attempted against President Trump, but he won anyway.

Sun Tzu’s first principle of war is “know the enemy,” but knowledge is difficult in an information war. Propaganda plants lies in every soul. In 1928, the father of modern public relations, Edward Bernays (who also happened to be the nephew of Sigmund Freud), wrote in his book Propaganda:

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. …We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society. …In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons…who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.

Propaganda works by contrasting two sides. Empathy for an opponent’s position is systematically removed by control of information. Public policy filtered through propaganda is always a binary choice.

In wartime, “us versus them” in a “kill or be killed” conflict provides the necessary binary, and propaganda writes itself. Domestic propaganda in peacetime is a bit trickier but works the same way: by staking out two sides.

Mass media sustains two distinct information streams, one for each side. Elections award one side or the other temporary political rule, but power is never actually surrendered. Each side simply acts as a fulcrum for the other to pivot on. Wedge issues inject urgency, but they are designed to produce a stalemate.

Keep reading

Teens’ Google Search History Helped Detectives Solve Horrific Denver Arson Murder Case

A reverse keyword search warrant served to Google helped Denver police identify three teens responsible for an arson attack that killed five members of a family in 2020.

Wired reports that in August 2020, a horrific arson attack in Denver, Colorado, claimed the lives of five members of a Senegalese family, including two children. The case initially left investigators stumped, with little evidence pointing to the perpetrators. However, a breakthrough came when Denver Police Department (DPD) detectives Neil Baker and Ernest Sandoval decided to serve a reverse keyword search warrant to Google, requesting information on users who had searched for the address of the victims’ home in the days leading up to the fire.

The warrant, which was met with initial resistance from Google due to privacy concerns, ultimately revealed that three local teenagers—Kevin Bui, Gavin Seymour, and Dillon Siebert—had repeatedly searched for the address on Google in the two weeks prior to the arson. This information, combined with cell phone location data placing the teens near the scene of the crime, provided the key evidence needed to arrest and charge them.

The case highlights the growing use of reverse keyword search warrants by law enforcement. These warrants allow police to request information on all individuals who searched for specific keywords or phrases, potentially exposing innocent people to unwarranted scrutiny.
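The mechanics can be illustrated with a toy sketch (all data and names here are hypothetical, and this is not Google’s actual system): a reverse keyword warrant effectively asks a provider to return every account whose queries matched a given term inside a time window, which is why it sweeps far more broadly than a warrant naming a single suspect.

```python
from datetime import datetime, timedelta

# Hypothetical search-log records: (user_id, query, timestamp).
SEARCH_LOG = [
    ("u1", "123 example ave denver", datetime(2020, 7, 28)),
    ("u2", "weather denver",         datetime(2020, 7, 30)),
    ("u3", "123 example ave",        datetime(2020, 8, 3)),
]

def reverse_keyword_match(log, keyword, end, window_days=14):
    """Return the user_ids whose queries contained `keyword` within
    `window_days` before `end` -- the shape of data a reverse keyword
    warrant asks a provider to produce for every matching user."""
    start = end - timedelta(days=window_days)
    return sorted({
        user for user, query, ts in log
        if keyword in query.lower() and start <= ts <= end
    })

# Everyone who searched the address in the two weeks before Aug 5 --
# note the query is defined by the keyword, not by a named suspect.
print(reverse_keyword_match(SEARCH_LOG, "example ave", datetime(2020, 8, 5)))
```

The toy query returns every matching account, not a named individual; that inversion (start from a keyword, end with a list of people) is what critics mean by a “digital dragnet.”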

In this instance, the teens’ defense argued that the warrant violated their Fourth Amendment rights by conducting a broad “digital dragnet” without individualized probable cause. However, the judge ruled in favor of law enforcement, likening the search to looking for a needle in a haystack.

The Colorado Supreme Court later upheld the constitutionality of the warrant in a landmark ruling, potentially paving the way for wider use of this investigative technique. However, the court also acknowledged the lack of individualized probable cause, deeming the warrant “constitutionally defective” despite allowing the evidence to stand.

Keep reading

Congressional Bill Aimed At Protecting Kids Online Could Cause Headaches For Marijuana Businesses

A newly filed bill in Congress aimed at protecting children online could create headaches for advertisers trying to promote legal marijuana and other regulated substances.

Titled the Kids Online Safety Act (KOSA), the bipartisan proposal—from Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) as well as Senate Majority Leader John Thune (R-SD) and Senate Minority Leader Chuck Schumer (D-NY)—would create a “duty of care” for online platforms such as social media and streaming video services, requiring them to take steps to prevent access to potentially sensitive content by minors.

That includes advertisements for cannabis products and certain other drugs and services.

A factsheet from Blackburn’s office says the duty of care “requires social media companies to prevent and mitigate certain harms that they know their platforms and products are causing to young users.”

The sponsors say the legislation is necessary to protect children from pernicious practices that keep “kids glued to their screens” for hours a day, alleging that “Big Tech is trying every method possible to keep them scrolling, clicking ads, and sharing every detail of their life.”

The 63-page bill “targets the harms that online platforms cause through their own product and business decisions,” the factsheet says, “like how they design their products and applications to keep kids online for as long as possible, train their algorithms to exploit vulnerabilities, and target children with advertising.”

Much of the proposal is aimed at limiting content that fuels behavioral health disorders. Platforms would need to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors,” it says, listing eating and drug use disorders, suicidal ideation, violence and harassment, sexual exploitation, financial harm and others.

As for controlled substances, online platforms would be prohibited from facilitating the “advertising of narcotic drugs, cannabis products, tobacco products, gambling, or alcohol to an individual that the covered platform knows is a minor.”

The provision around drug use lists the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol” as risks that platforms would need to actively guard minors against.

Video streaming platforms, meanwhile, would be required “to employ measures that safeguard against serving advertising for narcotic drugs, cannabis products, tobacco products, gambling, or alcohol directly to the account or profile of an individual that the service knows is a minor.”

“Big Tech platforms have shown time and time again they will always prioritize their bottom line over the safety of our children, and I’ve heard too many heartbreaking stories to count from parents who have lost a child because these companies have refused to make their platforms safer by default,” Blackburn said in a press release about the legislation.

Keep reading