Australia: Meta begins deactivating accounts ahead of 16-year-old minimum social media age limit

Meta has begun removing social media accounts belonging to Australian children under 16 years old from its platforms, Instagram, Facebook and Threads.

The tech giant has started notifying users aged 13 to 15 years old that their accounts would cease to exist on December 4th. Starting December 10th, social media companies will face fines up to A$49.5 million ($33million USD) should they fail to take steps to halt children under 16 years old from owning accounts.

Australian eSafety Commissioners will send major platforms notices on December 11th demanding statistics about exactly how many accounts were removed from their sites. Additionally, monthly notices are planned for 2026.

It is estimated that 150,000 Facebook accounts and 325,000 Instagram users will be terminated. 

“The government recognizes that age assurance may require several days or weeks to complete fairly and accurately,” Communications Minister Anika Wells reported.

“However, if eSafety identifies systemic breaches of the law, the platforms will face fines,” she added.

Google sent out a notice on Wednesday stating that anyone in Australia under 16 would be signed out of YouTube on December 10th and will lose features accessible available only to account holders, such as playlists.

Google states it determines YouTube users’ ages “based on personal data contained in associated Google accounts and other signals.”

“We have consistently said this rushed legislation misunderstands our platform, the way young Australians use it and, most importantly, it does not fulfill its promise to make kids safer online,” a Google statement reported.

Users over 16 years old who were wrongfully revoked account access have the option to verify their age through government-issued ID or a Video selfie, per Meta.

Platforms such as X and Reddit contacted underage users, suggesting that they download their posted pictures and freeze their accounts until they become of age.

The Australian government claims the ban will protect children from the harms of social media. However, critics say this decision may isolate certain groups who depend on the platforms for connection and push children to other, potentially more harmful corners of the internet.

Keep reading

YouTube says it will comply with Australia’s teen social media ban

Google’s YouTube shared a “disappointing update” to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their accounts within days.

The decision ends a stand-off between the internet giant and the Australian government which initially exempted YouTube from the age restriction, citing its use for educational purposes. Google (GOOGL.O) had said it was getting legal advice about how to respond to being included.

“Viewers must now be 16 or older to sign into YouTube,” the company said in a statement.

“This is a disappointing update to share. This law will not fulfill its promise to make kids safer online and will, in fact, make Australian kids less safe on YouTube.”

The Australian ban is being closely watched by other jurisdictions considering similar age-based measures, setting up a potential global precedent for how the mostly U.S. tech giants behind the biggest platforms balance child safety with access to digital services.

The Australian government says the measure responds to mounting evidence that platforms are failing to do enough to protect children from harmful content.

Keep reading

Influencer X accounts try to defend their US patriotism, despite having never set foot in the country

Last week social media platform X revealed the national origins of all its user accounts – divulging many top political voices on hot-button US issues are actually keyboard warriors based in Africa and Asia.

For many, such as fake Native American grievance accounts run from Bangladesh and Nigerians posing as Trump-loving Midwestern moms, their motivation is simple – trying to make money (usually from selling T-shirts).

For others it’s more complicated, such as Ian Miles Cheong, a Malaysian-born, Dubai-based writer and X celebrity with 1.2 million followers.

He’s built his brand on acerbic social criticism and championing the new right in US politics, but says it was all on his followers for assuming he was actually in the country.

The idea that you can’t have a say on anything regarding America just because you don’t live there is kind of silly because what happens in America happens everywhere else,” Cheong, 40, told The Post.

“On top of that, practically every country has a US military base at this point. It’s an empire, like it or not, and people are going to have opinions.”

Cheong became the target of attacks once it was revealed he is actually in Dubai.

“You’ve never set foot in America and yet you spend every day trying to influence our culture and politics. You talk about our country exclusively and never say a word about your own.

“If you don’t see why that might rub Americans the wrong way, I don’t know what to tell you,” one prominent American podcaster wrote to him.

Keep reading

Trump Slammed Biden’s $52 Billion CHIPS Act. Then He Used It To Buy a Federal Stake in Intel.

In March, President Donald Trump blasted the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act of 2022. He called it “a horrible, horrible thing.” Passed under President Joe Biden, the CHIPS Act was essentially a $52 billion industrial policy slush fund intended primarily to bolster domestic production of computer chips.

When the law passed in 2022, the Biden administration said it was a “smart investment” that would “strengthen American manufacturing, supply chains, and national security, and invest in research and development, science and technology” while bringing thousands of “good-paying manufacturing jobs back home.”

There was never much reason to believe in the previous administration’s industrial policy boosterism. Early grants largely went either to factories that were already in development and would have been built anyway or to facilities of questionable economic value that might not be completed even with the additional taxpayer funding.

So Trump was on solid ground when he told Congress, “You should get rid of the CHIPS Act, and whatever’s left over…you should use it to reduce debt, or any other reason you want to.” Yet in the months since, Trump has made use of CHIPS funding not to reduce the debt, but to pursue his own questionable industrial policy. His version is even less accountable and may well be even worse for taxpayers.

Among the recipients of CHIPS funding was computer chipmaker Intel, which was set to receive $11 billion to help fund the construction of semiconductor fabs in several states. By late summer, the company said it had already received more than $5 billion of the funds. But Intel struggled to fulfill those commitments, falling behind on factory construction in some places and laying off workers as it suffered from ongoing financial and managerial problems. By the middle of 2025, Intel looked very much like a failing business.

In theory, the CHIPS Act provided a mechanism for the federal government to retract the grant and get all or part of its money back should Intel fail to meet its obligations. It’s not clear whether the federal government would have exercised its option to take the money back, but it was an option—until Trump stepped in.

As the company flailed, Trump met with its CEO, Lip-Bu Tan. Trump first called for him to resign. Then in August, the Trump administration announced that the federal government would just take partial ownership of Intel. Essentially, the U.S. government would purchase a roughly 10 percent stake in the chipmaker, partially nationalizing the company. And funds from CHIPS would be used to do it.

Trump bragged about the deal, saying he planned to “do more of them.” The company’s stock price rose on the news, suggesting that investors liked it. But that’s probably because it was a good deal for the company, at taxpayer expense.

According to public financial filings, the federal government would disburse the remaining funds, about $6 billion, while clearing any obligations for the company to actually complete work on new domestic semiconductor fabs.

In exchange, the federal government would gain partial ownership—as well as all the financial risks stockholders usually have when they invest in companies. Those risks will now be borne by taxpayers. As Carnegie Endowment fellow Peter Harrell pointed out in a social media post, the move came with “a lot of downside risk.”

Fundamentally, Trump gave Intel a federal bailout, removing the company’s public obligations and accountability while loading more financial risk onto the public.

Keep reading

Armed Robber Targets Sam Altman’s Ex-Boyfriend’s House, Forces Transfer Of $11 Million In Crypto

A thief barged into a house owned by Lachy Groom – a wealthy tech investor who once dated OpenAI CEO Sam Altman, tied up a victim, and made off with $11 million in Crypto Saturday evening in San Francisco, the NY Post reports.

Dressed as a delivery worker, the armed robber rang the door at Groom’s $4.4 million home on Dorland Street while carrying a white box, asks for Joshua – who lives with Groom – while claiming to be a UPS driver. The victim answers the door and identifies himself as Joshua. 

The thief then asked for him to sign for the package – asking if he can borrow a pen. The suspect then followed Joshua inside when a loud bang can be heard

According to the report, the suspect pulled a gun, tied up the victim with duct tape, and then stole $11 million worth of Ethereum and Bitcoin (exact method unknown), in what is believed to have been a hit by an organized crime group that the suspect was part of.

The suspect then tortured the victim, beating him while he held a phone up on loudspeaker as foreign voices on the line repeated his personal information that they had obtained. The thief then poured liquid on the victim before the crypto wallets were emptied.

The whole thing took around 90 minutes. 

Homeowner Lachy Groom, 31, is a venture capitalist and the ex-boyfriend of Open AI’s Altman, 40, who dated the billionaire sometime before he got married in 2024, sources with knowledge of their relationship said. Groom bought the property from Altman’s brother in 2021 for $1.8 million, property records show. Details of their relationship have not previously been reported. Attempts to reach Groom were not returned.

The Post has learned Joshua is a fellow tech investor who lives with Groom at the 4-bedroom Dorland Street home. 

Altman and Groom have invested together in various companies. Groom, a native Australian, has founded four startups and sold three before he turned 18. 

Keep reading

Federal Uniformity Sounds Good – Until Big Tech Writes the Rules

Big Tech is jamming preemption of state AI laws into the National Defense Authorization Act (NDAA) at the last minute, but it is unclear if they are fully aware of what it would actually mean. If Congress prohibits states from implementing AI policies that would protect their citizens, it would have far-reaching consequences. True conservatives in Congress must uphold their pro-American values and refuse to support any preemption effort that would ultimately be a coronation of Big Tech as our country’s new rulers.

The United States is the dominant leader in AI on the global stage, and we are in a high-stakes race with adversaries – especially China – to maintain that advantage. We cannot afford to cut corners on oversight and safety while Big Tech develops AI systems at a rapid pace. States are best positioned to test thoughtful safeguards that address the most pressing concerns – from public safety to protecting children. The federal government, by contrast, is lagging behind.

States have been laboratories of democracy on every pressing issue of our time. The issue of AI should not be any different. The federal government is behind the states in terms of simply thinking about the ramifications of AI, and Congress should allow the states to try to find effective policy solutions that cater to our most imminent concerns.

Preemption is a clear violation of the principle of federalism inherent in the 10th Amendment to the Constitution.

Additionally, this provision is a blatant cover for Big Tech. It allows Big Tech to continue to exploit kids, creators, and conservatives. This provision will not empower small businesses and entrepreneurs in AI because they simply don’t have $40 billion in funding to put toward artificial general intelligence (AGI) development and $100 million bonuses to hand out to potential employees.

They are already shut out of the industry by people like OpenAI CEO Sam Altman, who popularized the “patchwork” characterization of state policies that is now being used in smaller circles in support of preemption.

If we intend to outpace China on AI, we must abandon misguided proposals that undermine federalism. The federal government should focus on enacting strong, strategic measures that protect our national security and prevent U.S. technologies and advanced chips from ending up in the wrong hands.

Keep reading

GrapheneOS Quits France, Citing Unsafe Climate for Open Source Tech

GrapheneOS, the privacy-focused Android operating system, has ended all operations in France, saying the country is no longer a safe place for open source privacy projects.

Although French users will still be able to install and use the software, the project is moving every related service, including its website, forums, and discussion servers, outside French territory.

Until now, GrapheneOS used OVH Bearharnois, a hosting provider based in France, for some of its infrastructure. That setup is being dismantled.

The Mastodon, Discourse, and Matrix servers will operate from Toronto on a mix of local and shared systems. These changes are designed to remove any dependency on French service providers.

The developers said their systems do not collect or retain confidential user data and that no critical security infrastructure was ever stored in France. Because of that, the migration will not affect features such as update verification, digital signature checks, or downgrade protection.

The decision also applies to travel and work policies. Team members have been told not to enter France, citing both personal safety concerns and the government’s endorsement of the European Union’s Chat Control proposal.

That measure would allow authorities to scan private communications for illegal material, something privacy developers see as incompatible with secure digital design.

Keep reading

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

Keep reading

Google and US government battle over the future of internet advertising

Google will confront the U.S. government’s latest attempt to topple its internet empire in federal court on Friday as a judge considers how to prevent the abusive tactics that culminated in parts of its digital ad network being branded as an illegal monopoly.

The courtroom showdown in Alexandria, Virginia, will pit lawyers from Google and the U.S. Department of Justice against each other in closing proceedings focused on the complex technology that distributes millions of digital ads across the internet each day.

After a lengthy trial last year, U.S. District Judge Leonie Brinkema ruled in April that pieces of Google’s ad technology had been rigged in a way that made it an illegal monopoly. That set up another 11-day trial earlier this fall to help Brinkema determine how to remedy its anti-competitive practices.

Friday’s closing arguments will give both Google and the Justice Department a final chance to sway Brinkema before she issues a ruling that probably won’t come until early next year.

The Justice Department wants Brinkema to force Google to sell some of the ad technology that it has spent nearly 20 years assembling, contending a breakup is the only way to rein in a company that the agency’s lawyers condemned as a “recidivist monopolist” in filings leading up to Friday’s hearing.

The condemnation refers not only to Google’s practices in digital advertising but also to the illegal monopoly that it unleashed through its dominant search engine. Federal prosecutors also sought a breakup in the search monopoly case, but the judge handling that issue rejected a proposal that would have required Google to sell its popular Chrome web browser.

Although Google is still being ordered to make reforms that it’s resisting, the outcome in the search monopoly case has been widely seen as a proverbial slap on the wrist. The belief that Google got off easy in the search case is the main reason the market value of its parent company Alphabet surged by about $950 billion, or 37%, to nearly $3.5 trillion since U.S. District Judge Amit Mehta’s decision came out in early September.

That setback hasn’t discouraged the Justice Department from arguing for a breakup of an ad tech system that handles 55 million requests per second, according to estimates provided by Google in court filings.

The huge volume of digital ads priced and distributed through Google’s technology is one of the main reasons that the company’s lawyers contend it would be too risky to force a dismantling of the intricate system.

“This is technology that absolutely has to keep working for consumers,” Google argues in documents leading up to Friday’s hearing. The company’s lawyers blasted the Justice Department’s proposal as a package of “legally unprecedented and unsupported divestitures.”

Besides arguing that its own proposed changes will bring more price transparency and foster more competition, Google is also citing market upheaval triggered by artificial intelligence as another reason for the judge to proceed cautiously with her decision.

In his decision in the search monopoly case, Mehta reasoned that AI was already posing more competition to Google.

Keep reading

Microsoft Warns Experimental Windows 11 AI Could Install Malware on Your Computer

Microsoft said in an update on Nov. 17 that Windows 11 users who utilize “agentic features” from its AI services should be cautious because the AI agents could potentially download and install malware.

In an alert, Microsoft warned that its AI models could “occasionally hallucinate” and introduce “novel security risks” such as malware because large language models, a type of AI that processes data and generates human-like text, are susceptible to cyberattacks.

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA),” the warning stated. A prompt injection attack is a type of cyberattack where an attacker crafts an input to trick the AI into performing malicious actions.

Microsoft added that in the case of Windows 11’s “experimental” AI services, “malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

The AI features are turned off by default and operate only after the user opts into them, the company said.

The agentic AI setting “can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” Microsoft said of the AI services.

Keep reading