Australia Wants To Force Big Tech to Pay Legacy Media

The Australian government wants to take 2.25% of Meta, Google, and TikTok’s local revenue and hand it to legacy news publishers. The platforms can avoid the bill by signing commercial deals with those same publishers. Either way, money moves from the companies people actually use to read and share information, into the bank accounts of the established media class.

The draft legislation is called the News Bargaining Incentive. The word “incentive” is an odd choice. A levy you can only escape by paying a private third party is a tax with extra steps, and the third party has been chosen for you. Australian Community Media, Nine Entertainment, News Corp Australia, and the public broadcaster ABC sit at the front of the queue.

Communications Minister Anika Wells announced the plan in Sydney on Tuesday. “People are increasingly getting their news directly from Facebook, from TikTok and from Google, and we believe it’s only fair that large digital platforms contribute to the hard work of journalism that enriches their feeds and that drives their revenue,” she said.

The idea treats the act of users sharing links as a form of theft from publishers, rather than what it actually is, which is people choosing to talk about the news on the platforms where they spend their time.

Keep reading

They’re All Ears: Apple’s Plan to Read Your Mind

We’ve handed over our location, our browsing history, our voice, our face, and our purchasing habits. In exchange, we’ve gotten convenience. Now Apple wants the one thing each of us might have thought was still ours—the electrical activity of our brain. And this time, they’re not even asking. What are we talking about here?

In January 2023, Apple quietly filed patent US20230225659A1 with the U.S. Patent and Trademark Office. The filing describes a wearable electronic device—an earbud—equipped with multiple electrodes embedded directly into the ear tip and housing. These electrodes aren’t for audio. They are not there to improve our sound quality. No indeed. Instead, they are there to read our brain—using the same EEG technology doctors use to monitor neurological activity in clinical settings. And because every ear canal is shaped differently, Apple’s patent describes a machine-learning model that figures out which electrode combinations work best for each person’s specific anatomy, then keeps refining that over time. The result is a readout that is accurate, continuous, and tailored to each of us personally. The digital signal is then transmitted wirelessly to our phone—and, per the patent’s own language, to a server, where it can be stored as “historic data” accessible by “another person given permission.”

Read that sentence again.

What EEG Actually Reveals
This is not science fiction, and it is worth understanding what EEG data actually captures—because it is a lot more than Apple’s marketing department will ever tell you. Brain waves are not background noise. They are a direct readout of our inner life. The alpha, beta, delta, theta, and gamma frequencies each correspond to distinct mental states—relaxation, intense focus, deep sleep, creativity, active learning. Together they paint an individual portrait of our mind that is more revealing than anything we have ever typed into a search bar or whispered to a smart speaker. These frequencies, as Loyola University researchers have noted, are also the same signals measured in polygraph tests—the ones used to determine whether someone is lying. They can reveal our stress levels, our concentration, our emotional state, and potentially flag neurological conditions that have not yet been diagnosed. As one researcher at the Neurorights Foundation put it in a Science Friday interview, neural circuits in the brain create our thoughts, emotions, memories, decision-making, and our very sense of self.
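
To make the band-to-state mapping above concrete, here is a minimal sketch in Python. The cutoff frequencies are the commonly cited textbook values (exact boundaries vary slightly between sources), and the function is our own illustration—nothing here comes from Apple’s patent.

```python
# Standard EEG frequency bands and the mental states they are
# commonly associated with. Boundaries (in Hz) vary slightly
# across the literature; these are typical textbook cutoffs.
EEG_BANDS = [
    ("delta", 0.5, 4.0, "deep sleep"),
    ("theta", 4.0, 8.0, "drowsiness, creativity"),
    ("alpha", 8.0, 13.0, "relaxed wakefulness"),
    ("beta", 13.0, 30.0, "intense focus"),
    ("gamma", 30.0, 100.0, "active learning, high-level processing"),
]

def classify_frequency(hz: float) -> str:
    """Return the band name for a dominant EEG frequency in Hz."""
    for name, low, high, _state in EEG_BANDS:
        if low <= hz < high:
            return name
    return "out of range"
```

A 10 Hz rhythm, for example, falls in the alpha band—the signature of relaxed wakefulness—which is exactly the kind of inference a continuous in-ear stream would make possible at all hours.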

Apple wants that data streaming off our ears into their servers.

Are There Any Upsides?
Fair is fair—applications for in-ear EEG technology are being floated, and it’s worth addressing them. As Neurofounders reports, startups like NextSense are already developing in-ear EEG devices to improve clinical sleep staging. Detecting seizure disorders from continuous passive monitoring is another possibility. Early signals for degenerative diseases like Alzheimer’s may surface in EEG data years before symptoms appear. And researchers have argued that natural-environment EEG collection—on the couch, at work, during real life rather than inside a sterile lab—would produce more accurate data on attention and cognitive states than anything gathered under clinical conditions.

These applications sound compelling on the surface. But step back for a second. Americans are not sleeping poorly because they lack a brain-monitoring device. They are sleeping poorly because they are overprescribed, overstimulated, and undernourished—and the same medical system profiting from that reality is not exactly rushing to fix it. Handing our neural data to Apple is not a solution to a pharmaceutical-created problem. It is just a new layer of surveillance dressed up as wellness. The idea that we should surrender the electrical activity of our brains as the price of entry for better sleep tracking should raise more than a few eyebrows.

Who Gets the Data?
Here is where things get serious. A 2024 Neurorights Foundation report pulled back the curtain on 30 companies already selling consumer neurotechnology devices. What they found should stop you cold. Twenty-nine of the thirty companies claimed unlimited rights to their users’ neural data. Most had quietly written third-party data sharing directly into their terms—buried in the kind of legal language nobody reads until it’s too late. Fewer than half even encrypt the data or de-identify users. There is no federal law in the United States governing how neural data collected by consumer devices can be used or sold. A handful of states—Colorado, California, Illinois—have moved to address this, but protections remain patchwork at best.

As a published paper in PMC bluntly put it, bulk sales of neural data by tech giants to third parties may already be occurring with minimal accountability. Data brokers could soon be cataloging individual “brain fingerprints” on a mass scale—data as uniquely identifying as a fingerprint, and infinitely more revealing.

Apple has its own history of data breaches. As Pearl Cohen’s legal analysts note, the patent describes data transmission to external servers accessible by parties beyond the user. The company that couldn’t keep our FaceID data secure wants a continuous stream of our brain’s electrical activity.

Keep reading

Google Partners with the Pentagon to Sell Your Data

There has always been this convenient belief that Big Tech operates independently from government, as if the data you store, search, and upload exists in some neutral corporate space, but that illusion is breaking down rapidly as the lines between Silicon Valley and Washington disappear in real time.

Google has now entered into a classified agreement with the Pentagon allowing its artificial intelligence systems to be used for “any lawful government purpose,” which is a phrase that sounds benign until you understand what it actually means in practice.

This is not a narrow contract tied to a single project. It opens the door for integration into mission planning, intelligence analysis, and even weapons targeting systems operating on classified networks, and once those systems are embedded, the distinction between commercial technology and state infrastructure effectively disappears.

At the same time, Google does not retain control over how that technology is ultimately used, because under the terms being reported, the company has no ability to veto lawful government operations, meaning once access is granted, the downstream application is no longer in its hands. And remember: Google has been collecting data on everyone and everything for decades—Google Maps, Google Search, Google Photos, Google Drive, Gmail, and more.

This is where the narrative people have been told begins to collapse, because for years the assumption was that your data sat within a corporate ecosystem governed by terms of service and internal policies, yet what is now being constructed is something entirely different, a shared infrastructure where private data, artificial intelligence, and state power intersect.

Keep reading

DOJ Sues Big Tech Giant Cloudera for Blocking American Workers in Favor of Foreign Visa Holders, Same Company Sued Trump in 2017 Over Refugee Ban

The Trump Department of Justice has filed a federal lawsuit against Cloudera Inc., a major Silicon Valley tech company, for deliberately discriminating against qualified American workers in favor of foreign visa holders for high-paying tech positions.

The Civil Rights Division’s lawsuit accuses Cloudera of violating the Immigration and Nationality Act by creating a sham, separate hiring process designed to deter and exclude American citizens while fast-tracking foreign workers on temporary visas.

“Employers cannot use the PERM sponsorship process as a backdoor for discriminating against U.S. workers,” Assistant Attorney General Harmeet K. Dhillon of the Justice Department’s Civil Rights Division said. “The Division will not hesitate to sue companies who intentionally deter U.S. workers from applying to American jobs.”

According to the DOJ, Cloudera set up a dedicated email address for job applications that was deliberately configured not to accept emails from outside the company.

American workers who followed the company’s posted instructions received automatic bounce-back messages stating that their applications could not be processed.

The DOJ said in a press release:

The complaint alleges Cloudera intentionally created a separate recruitment and hiring process to deter U.S. workers from applying, and also did not consider them, for lucrative technology jobs that the company earmarked for people with temporary employment visas. Cloudera created an email account that did not allow external emails, but still instructed applicants to use that unworkable email address to apply for jobs. The Division received a charge of employment discrimination from one U.S. worker who tried to apply using the email account Cloudera set up, but received a bounce back notification. When sponsoring current employees under the permanent labor certification program (PERM), Cloudera purposely failed to recruit U.S. workers in good faith.

At the same time, Cloudera was actively sponsoring foreign workers for permanent residency through the Department of Labor’s PERM program, a process that legally requires employers to make a genuine good-faith effort to recruit and hire qualified American workers first.

Keep reading

US Supreme Court weighs claims Cisco aided Chinese human rights abuses

The U.S. Supreme Court confronted a case on Tuesday with broad implications for human rights litigation in American courts, a long-running lawsuit brought by members of the Falun Gong spiritual movement who have accused Cisco Systems of facilitating religious persecution in China.

The justices heard arguments in Cisco’s appeal of a lower court’s 2023 ruling that breathed new life into the 2011 lawsuit, brought under the Alien Tort Statute of 1789, that accused the company of knowingly developing technology that allowed China’s government to surveil and persecute Falun Gong members.

The court has a 6-3 conservative majority, and some of its conservative justices signaled agreement with the stance taken by Kannon Shanmugam, the lawyer for Cisco, during the arguments.

San Jose, California-based Cisco urged the Supreme Court to further limit the scope of the Alien Tort Statute, which lets non-U.S. citizens seek damages in American courts for violations of international law. The court in a series of decisions since 2013 has restricted the law’s reach, making it more difficult to hold U.S. corporations legally liable for human rights abuses.

President Donald Trump’s administration sided with Cisco in the case.

Paul Hoffman, a lawyer for the Falun Gong plaintiffs, argued strenuously against Cisco’s views.

“Under Cisco’s theory, even the corporate actors who provided the poison gas for Nazi crematoria would not be liable” under the Alien Tort Statute, Hoffman told the justices.

Keep reading

Meta Inks Deal For Solar Power At Night, Beamed From Space

The race to secure electricity for AI models has reached new heights: Meta has signed an agreement with the startup Overview Energy that could see a thousand satellites beam infrared light to solar farms that power data centers at night.

In 2024, Meta’s data centers used more than 18,000 gigawatt-hours of electricity — roughly enough to power more than 1.7 million American homes for a year — and its need for compute power is only increasing. The company has committed to building 30 gigawatts of renewable power sources, with a focus on industrial-scale solar power plants.
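
The household comparison above checks out with a little arithmetic, assuming the commonly cited average of roughly 10,500 kWh of electricity per U.S. home per year (the exact figure varies by year and source):

```python
# Sanity check: 18,000 GWh of annual data-center electricity versus
# the average U.S. home's yearly usage (~10,500 kWh is the commonly
# cited figure; the exact average varies by year).
meta_usage_kwh = 18_000 * 1_000_000   # 18,000 GWh converted to kWh
avg_home_kwh_per_year = 10_500        # assumed U.S. household average

homes_powered = meta_usage_kwh / avg_home_kwh_per_year
print(f"{homes_powered / 1e6:.2f} million homes")  # roughly 1.7 million
```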

Typically, data centers turning to solar power must either invest in battery storage or rely on other generation sources to operate at night.

Overview Energy, a four-year-old, Ashburn, Virginia, outfit that emerged from stealth in December, has a different solution: The company is developing spacecraft that collect plentiful solar power in space. It then plans to convert that energy to near-infrared light and beam it at sufficiently large solar farms — on the order of hundreds of megawatts — which can convert that light to electricity.

By using a wide, infrared beam to power existing terrestrial solar infrastructure, Overview thinks it can sidestep the technological challenges and safety and regulatory issues that bedevil plans to transmit power to Earth through high-power lasers or microwave beams. CEO Marc Berte says you’ll be able to stare right into his satellite’s beam with no ill effects.

The technology would increase the return on investment from building solar farms and reduce reliance on fossil fuels — if it can be deployed at scale.

Overview says it has already demonstrated power transmission to the ground from an aircraft, and is planning to launch a satellite to low Earth orbit in January 2028 to perform its first power transmission from space.

Keep reading

Harmeet Dhillon Announces DOJ’s Big Win Defending xAI from Colorado DEI Law

In an interview on Breitbart News Saturday, Assistant Attorney General for Civil Rights Harmeet Dhillon announced a major win for American artificial intelligence (AI) dominance after her department intervened in a lawsuit challenging a new Colorado law that prohibits “algorithmic discrimination.”

Speaking with Breitbart News political editor Bradley Jaye, Dhillon revealed details on the Department of Justice’s (DOJ) recent success at getting the state of Colorado to agree not to enforce SB24-205, which requires AI developers and deployers to satisfy certain disclosure, reporting, and prevention requirements when creating algorithm products designed for services like mortgage lending, student admissions, and job-candidate selection. 

The bill’s text included an explicit carveout for discriminatory algorithms designed to advance “diversity” or “redress historic discrimination,” and AI company xAI filed a lawsuit against the statute on April 9, alleging it is unconstitutional.

Marking the first time that the DOJ has intervened in a case challenging state regulations on AI, Dhillon’s team joined the case on behalf of xAI on Friday. Together, they argued that “embedding AI with state-mandated discrimination is a recipe for disaster.”

Emphasizing that the Civil Rights Division at the DOJ is meant to “protect American citizens, and even American companies, from discrimination on the basis of impermissible racial, gender, et cetera criteria,” Dhillon told Jaye that Colorado had attempted to require companies and municipalities to “look at outcomes and then racially balance and adjust their algorithms to produce outcomes that reflect the demographic population.”

“This is not required by law. In fact, it’s prohibited by federal law,” she stated. “And you know, worse, the statute actually carved out if people or companies are doing discrimination to remedy past discrimination, that’s okay. All of this is just nonsense, and it stifles innovation, and it’s illegal under the equal protection clause of the Fourteenth Amendment.”

In addition to violating the Fourteenth Amendment, Dhillon noted that xAI also has First Amendment arguments against the bill, “because, effectively, the state is compelling it to utter certain speech in furtherance of these DEI goals.”

“We’re not arguing that because the government doesn’t have that obligation, but we’re stepping in to protect American citizens and American companies,” she explained, before revealing the success of her efforts on Friday. 

“We had a great result yesterday,” Dhillon announced, recounting how Colorado “agreed to not enforce the law against xAI” within just a couple of hours of the DOJ intervening. 

“And by the evening, before we went to bed, we had Colorado agree to not enforce it against anybody until they send it back to the legislature to fix it,” she explained. “So it’s pretty much a total win for American consumers and companies, and the first instance of the United States Department of Justice stepping in on an AI case to really protect this innovation and protect Americans from discrimination by AI algorithmic manipulation.”

Highlighting why civil rights work should be “important” to people on the right side of the political aisle, Dhillon told Jaye that conservatives “have come to look at civil rights as something that’s been weaponized against Americans, but civil rights are for all Americans.”

“So what we’re doing in the Department of Justice Civil Rights Division is exactly that — we’re standing up for all Americans, like in this xAI case.”

Keep reading

China Blocks Meta’s $2 Billion Acquisition of AI Startup Manus

The Chinese government has officially blocked Meta’s planned $2 billion acquisition of Manus, a Chinese-founded AI startup, marking a significant escalation in the ongoing technological rivalry between the United States and China.

CNBC reports that China’s National Development and Reform Commission (NDRC) foreign investment review arm issued a decision on Monday to block the sale of Manus to Mark Zuckerberg’s Meta. The regulatory body ordered all parties involved in the transaction to unwind the acquisition, effectively terminating the deal that was announced in late December.

Manus emerged as a prominent player in the AI sector when it launched in March of last year with an AI agent designed to autonomously perform complex tasks. These capabilities include writing research reports, preparing presentation slides, and building websites. The launch garnered significant attention from Chinese state media, which celebrated it as the country’s latest breakthrough AI product. This recognition came on the heels of DeepSeek’s AI model launch, which had previously caused substantial fluctuations in major United States technology stocks.

Early versions of Manus were developed by Beijing Butterfly Effect Technology, a Chinese startup founded in 2022, according to the Wall Street Journal. Following its launch, the AI company made a strategic decision to relocate its headquarters and top engineers from Beijing to Singapore. This move aligned with a broader trend among Chinese AI firms seeking to navigate the complex geopolitical landscape between the United States and China. By establishing operations in Singapore, these companies believe they can circumvent some of the tensions between the two superpowers while gaining access to Western AI models and potential investors.

According to the Financial Times, the NDRC had initially approved Manus’ relocation to Singapore. However, complications arose when Meta and the startup failed to inform Chinese authorities before finalizing their acquisition agreement in December. This appears to have triggered the subsequent regulatory scrutiny and ultimate rejection of the deal.

The Chinese government’s response to the Meta-Manus transaction was swift and decisive. In January, mere days after the two companies publicly announced the acquisition, Chinese officials launched an investigation into potential national security concerns and possible export control violations. The probe intensified last month when the NDRC reportedly summoned the startup’s co-founders, Xiao Hong and Ji Yichao, to meet with its officials to discuss the acquisition details. Both co-founders were subsequently instructed not to leave China until the regulatory review concluded.

In a statement to Breitbart News, a Meta spokesperson wrote: “The transaction complied fully with applicable law. We anticipate an appropriate resolution to the inquiry.”

This regulatory intervention occurs against a backdrop of heightened tensions between Washington and Beijing over advanced AI technologies. The timing is particularly notable as it comes just weeks before President Donald Trump is scheduled to visit Beijing for a summit meeting with Chinese President Xi Jinping. The upcoming meeting takes place amid an ongoing trade war and escalating geopolitical tensions between the world’s two largest economies, with artificial intelligence emerging as a central battleground.

Keep reading

Silicon Valley has forgotten what normal people want

One of the most mortifying things about knowing a lot of techies is listening to them tell me excitedly about some very important discovery that they believe they have made. Recently, I ran into an acquaintance of mine, who began talking my ear off about an amazing discovery he’d made with LLMs. Knowledge, it turns out, is structured into language! You could put one word into ChatGPT and it might understand what you wanted, or make up a word and see if it understood what you meant! These amazing new tools have revealed that the English corpus contains so much about its speakers!

He concluded that LLMs are a discovery on par with writing.

Regular humans hit on this idea about a century ago; my most generous interpretation of what he was telling me was that he’d hit on a kind of naive, confused version of Structuralism; Saussure via a game of telephone. (There has been recent work on a similar point, which argues that one needs to understand LLMs via literary theory, but it starts with Saussure.) I tried to get out of the conversation as quickly as I could, not least because he seemed frustrated that I didn’t see things exactly as he did — a new behavior and likely a symptom of LLM overuse.

Not every discovery that’s new to you is actually new. For instance, there’s Elon Musk marvelling at the complexity of hands; I could point to a variety of disciplines for which this is 101-level stuff: artists, who have to figure out how to draw them; surgeons, who have to figure out how to operate on them; musicians and magicians, who rely on extremely fine motor skill to produce their work; neuroscientists and psychologists, who doubtless encountered the cortical homunculus early in their careers. Or Palmer Luckey claiming that “no one has done a postmortem” on the One Laptop Per Child computing project — because he didn’t know there’s a whole book about it called The Charisma Machine.

At its most absurd, one is reminded of Juicero, a company that sold a $400 juicer that did the same work as squeezing its proprietary juice packs with one’s bare hands.

Look, discovering something that’s new to you is exciting — ask anyone who listened to me yell about the joys of European (higher-fat) butter — but you can’t take for granted that something that’s new to you is new to everyone. These things have in common a certain incuriosity that I have found endemic among a certain kind of tech enthusiast, particularly the ones who are most interested in startups and entrepreneurship. Perhaps they have been so siloed that they did not realize their “discovery” was well-known elsewhere, or perhaps their self-conception is that they are the smartest, and if they don’t know something, no one knows it.

Keep reading

Apple, Google Caught ‘Helping Users’ Find Apps That Can Deepfake Nude Pictures of Real People, and Worse, Kids Are Vulnerable Too

Tech companies Apple and Google were found to have been leading users — specifically children — to apps that could effectively pornify images through artificial intelligence.

Last Wednesday, 9to5Mac reported the findings from January published by the Tech Transparency Project, which concluded both the Apple App Store and Google Play “are helping users to find apps that create deepfake nude images of women.”

The stores were even found promoting these apps and autocompleting search results for them.

About 40 percent of the top 10 apps appearing in searches for “nudify,” “undress,” and “deepnude” could “render women nude or scantily clad.”

These are apps where users can take two different images — one normal and one sexually explicit — and generate an image where components of both are used, sexualizing the person from the normal one.

9to5Mac reached out to the developer of one of these apps, and was told they “had no idea it was capable of producing such extreme content.”

On Thursday, Apple responded to the outlet, saying the apps were not allowed on its store because its review guidelines prohibit sexual content.

The company said it has removed 15 apps, with others receiving notice that they will be removed if they remain in violation.

In January, California Democratic Gov. Gavin Newsom went after social media platform X with a similar allegation.

“xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile,” he said.

“I am calling on the Attorney General to immediately investigate the company and hold xAI accountable.”

Keep reading