Trial begins as Meta investors, Zuckerberg square off over alleged privacy violations

An $8 billion trial by Meta Platforms (META.O) shareholders against Mark Zuckerberg and other current and former company leaders kicked off on Wednesday over claims they illegally harvested the data of Facebook users in violation of a 2012 agreement with the U.S. Federal Trade Commission.

The trial started with a privacy expert for the plaintiffs, Neil Richards of Washington University Law School, who testified about Facebook’s data policies.

“Facebook’s privacy disclosures were misleading,” he told the court.

Jeffrey Zients, White House chief of staff under President Joe Biden and a Meta (META.O) director for two years starting in May 2018, is expected to take the stand later on Wednesday in the non-jury trial before Kathaleen McCormick, chief judge of the Delaware Chancery Court.

The case will feature testimony from Zuckerberg and other billionaire defendants, including former Chief Operating Officer Sheryl Sandberg, venture capitalist and board member Marc Andreessen, and former board members Peter Thiel, co-founder of Palantir Technologies (PLTR.O), and Reed Hastings, co-founder of Netflix (NFLX.O).

A lawyer for the defendants, who have denied the allegations, declined to comment.


Pentagon Awards Contracts To 4 Artificial Intelligence Developers

The U.S. Department of Defense announced on July 14 that it has awarded contracts to four U.S.-based artificial intelligence (AI) developers to address national security challenges.

Anthropic, Google, OpenAI, and xAI will each receive a contracting award with a ceiling of $200 million, according to a statement shared by the Chief Digital and Artificial Intelligence Office.

The office said these four companies would help “develop agentic AI workflows across a variety of mission areas.”

“Agentic AI” refers to systems designed to operate with minimal human input.

Formed in 2021, the Chief Digital and Artificial Intelligence Office is responsible for speeding up the military’s adoption of AI systems.

OpenAI was the first of the four contract awardees to announce its contract with the Chief Digital and Artificial Intelligence Office.

In June, the company unveiled “OpenAI for Government” and said its first partnership under the new initiative would help the Pentagon office identify and prototype new AI tools for administrative operations.

Anthropic has developed the Claude family of AI chatbots.

In June, Anthropic announced the development of custom “Claude Gov” models intended for national security clients.

The company said agencies operating at the highest level of the U.S. national security sector are already using these AI models.

Formed by billionaire entrepreneur Elon Musk in 2023, xAI serves as a parent to X Corp., which operates the social media platform X. Among its services, xAI has developed the Grok AI chatbot.

On July 14, xAI announced “Grok for Government” and confirmed that the service holds contracts with the Department of Defense and the U.S. General Services Administration.

Google Public Sector Vice President Jim Kelly said in a July 14 blog post that the new AI announcement with the Department of Defense would build on a long-standing partnership between Google and the U.S. military.

Kelly said his company would give the military access to its Cloud Tensor Processing Units, which power Google’s current AI applications.


Google finds custom backdoor being installed on SonicWall network devices

Researchers from the Google Threat Intelligence Group said that hackers are compromising SonicWall Secure Mobile Access (SMA) appliances, which sit at the edge of enterprise networks and manage and secure access by mobile devices.

The targeted devices are end of life, meaning they no longer receive regular updates for stability and security. Despite that status, many organizations continue to rely on them. That has made them prime targets for UNC6148, the name Google has given to the otherwise unidentified hacking group.

“GTIG recommends that all organizations with SMA appliances perform analysis to determine if they have been compromised,” a report published Wednesday said, using the abbreviation for Google Threat Intelligence Group. “Organizations should acquire disk images for forensic analysis to avoid interference from the rootkit anti-forensic capabilities. Organizations may need to engage with SonicWall to capture disk images from physical appliances.”

Lacking specifics

Many key details remain unknown. For one thing, the attacks exploit leaked local administrator credentials on the targeted devices, and so far, no one knows how those credentials were obtained. Nor is it known which vulnerabilities UNC6148 is exploiting, or precisely what the attackers do after they take control of a device.

The lack of details is largely the result of Overstep, the custom backdoor malware UNC6148 installs after initially compromising the devices. Overstep allows the attackers to selectively remove log entries, a technique that is hindering forensic investigation. Wednesday’s report also posits that the attackers may be armed with a zero-day exploit, meaning one that targets a vulnerability not yet publicly known. Possible vulnerabilities UNC6148 may be exploiting include:

  • CVE-2021-20038: An unauthenticated remote code execution made possible by a memory corruption vulnerability.
  • CVE-2024-38475: An unauthenticated path traversal vulnerability in Apache HTTP Server, which is present in the SMA 100. It can be exploited to extract two separate SQLite databases that store user account credentials, session tokens, and seed values for generating one-time passwords.
  • CVE-2021-20035: An authenticated remote code execution vulnerability. Security firm Arctic Wolf and SonicWall reported in April that this vulnerability was under active exploitation.
  • CVE-2021-20039: An authenticated remote code execution vulnerability. There have been reports that this vulnerability was under active exploitation to install ransomware in 2024.
  • CVE-2025-32819: An authenticated file deletion vulnerability that can be exploited to cause a targeted device to revert the built-in administrator account to its default password, allowing attackers to gain administrator access.
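Because Overstep can selectively delete log entries, defenders generally cannot trust a live appliance’s own logs, which is why GTIG recommends analyzing captured disk images instead. As a purely illustrative sketch (not GTIG’s method; the log format, field widths, and one-hour threshold are assumptions for the example), one crude triage heuristic is to flag unusually long silences in an otherwise busy log:

```python
from datetime import datetime

def find_log_gaps(lines, fmt="%Y-%m-%d %H:%M:%S", max_gap_seconds=3600):
    """Flag suspicious gaps between consecutive timestamped log entries.

    A rough heuristic only: because Overstep removes entries selectively,
    a long silence in an otherwise busy log *may* hint at tampering.
    Assumes each line begins with a 19-character timestamp in `fmt`.
    """
    times = []
    for line in lines:
        try:
            times.append(datetime.strptime(line[:19], fmt))
        except ValueError:
            continue  # skip lines without a parseable timestamp
    gaps = []
    for prev, cur in zip(times, times[1:]):
        delta = (cur - prev).total_seconds()
        if delta > max_gap_seconds:
            gaps.append((prev, cur, delta))
    return gaps

# Hypothetical log excerpt: the multi-hour silence gets flagged.
sample = [
    "2025-07-16 09:00:00 sshd: session opened",
    "2025-07-16 09:05:00 httpd: GET /cgi-bin/management",
    "2025-07-16 13:30:00 sshd: session closed",
]
print(find_log_gaps(sample))
```

A gap is only circumstantial evidence, of course; quiet periods are normal on some appliances, which is why a full disk image and professional forensic review remain the recommended path.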


Robots gain human trust with the right kind of eye contact

As humans and robots interact more, eye contact plays a powerful role. It’s more than a gesture – it shapes how we connect and communicate. And as we age, science may offer new ways to keep those social instincts sharp.

A new study reveals something surprising: not just eye contact, but how and when we make it shapes communication. This discovery matters for humans, robots, and the spaces where both interact.

The research was led by Dr. Nathan Caruana, a cognitive neuroscientist in the HAVIC Lab at Flinders University.

Dr. Caruana’s team observed how people communicated during tasks with a virtual partner. The goal was to uncover patterns in gaze that signaled help or intent.

The researchers asked 137 participants to perform a block-building task with a virtual partner. Throughout the task, the experts watched eye movements closely. They studied not just where people looked, but why, when, and in what order.


U.S. schools use illegal surveillance tools to monitor students’ digital behavior without consent or knowledge

A newly published peer-reviewed study reveals that a growing number of U.S. schools are using government-funded online surveillance tools to monitor students’ digital behavior—often without their knowledge or consent—and warns that such practices may have serious consequences for children’s development and well-being.

  • 24/7 Student Surveillance Raises Privacy and Health Concerns: A peer-reviewed study found that 12 out of 14 school surveillance companies monitored students’ social media, emails, and online activity around the clock, often without clear consent from parents or students, potentially harming children’s learning, mental health, and social development.
  • Heavy Reliance on AI and Lack of Human Oversight: Most companies used AI to flag student behavior, but fewer than half had human reviewers. Researchers warned this could lead to false positives and discriminatory outcomes, particularly for marginalized students, due to algorithmic bias and lack of transparency.
  • Federal Funds Fuel Poorly Regulated Surveillance Tools: Many schools use federal education grants to fund these surveillance tools, despite limited evidence that they improve student safety. Researchers called for better oversight and questioned whether this is an appropriate use of government resources.
  • Parents Left in the Dark and Policymakers Urged to Act: The study highlighted that parents often don’t know their children are being monitored and may not have opt-out options. Authors recommended federal legislation to improve transparency, address AI bias, and require parental consent for off-campus monitoring.


Who’s Programming the AI, Mr. President?

President Trump’s new legislative centerpiece—the One Big Beautiful Bill—has a promising name and some compelling goals: reduce government bloat, streamline agencies, and modernize operations with cutting-edge technology.

But there’s a problem. A big one.

No one will tell us who’s programming the AI.

This sweeping bill includes a 10-year moratorium on any state or local government regulating artificial intelligence. According to The Washington Post and AP, more than 60 existing state-level laws will be overridden if this provision passes. All regulatory authority over AI—including systems that will be used in law enforcement, healthcare, defense, and finance—will be centralized in the federal government for a decade.

Even worse? The bill empowers the Department of Commerce to deploy “commercial AI” across virtually every federal agency—from the IRS to Homeland Security—according to Indian Express and The Verge.

And yet, no one in the White House or Congress has revealed who is writing the AI code, what datasets it’s trained on, whether it can be independently audited, or whether it’s bound by the U.S. Constitution.

This isn’t just a transparency issue. This is a constitutional crisis in the making.

To be clear, President Trump’s instincts here may be sound. We’ve long needed to shrink the federal leviathan and replace unconstitutional bureaucracies with systems that serve the people—not special interests.

But good intentions won’t protect us from unseen programmers, black-box algorithms, and unaccountable automation.

This bill mandates AI integration across government “to improve efficiency and security.” But efficiency isn’t liberty. Security isn’t sovereignty. And no AI—no matter how “smart”—should be allowed to rewrite, ignore, or reinterpret constitutional rights.

According to Business Insider, the AI moratorium’s stated goal is to “foster innovation” and avoid a “fragmented regulatory landscape.” In reality, it strips states of their authority to protect their citizens from deepfakes, algorithmic bias, digital censorship, and mass surveillance.

This is not governance. This is outsourced tyranny, hidden under the guise of modernization.


Drone Attack Shuts Down Oil Field Run by US Company in Iraqi Kurdistan

A drone attack in Iraqi Kurdistan on Tuesday suspended operations at an oil field operated by a US company, marking the latest in a series of attacks in the region.

HKN Energy, the US firm operating the Sarsang oil field, reported an explosion at 7:00 am local time, followed by a fire. “Operations at the affected facility have been suspended until the site is secured,” the company said.

Workers at the oil field told Rudaw that it was targeted by a drone, and the Kurdistan Regional Government (KRG) denounced the attack as “an act of terrorism against the Kurdistan Region’s vital economic infrastructure.” The US Embassy in Iraq also denounced the attack.

A day earlier, two drones targeted a different oil field in the area, and another was intercepted at the Erbil airport, which houses US troops. The airport has come under attack several times in recent weeks, and so far, there have been no casualties.

No group has taken responsibility for the spate of drone attacks. The KRG has blamed the Popular Mobilization Forces (PMF), a coalition of Iraqi Shia militias that are part of the Iraqi government’s security forces, but Baghdad has denied the accusation.

PMF-affiliated militias have been responsible for previous drone and rocket attacks on US bases in Iraq and Syria, including the more than 100 attacks that occurred in the wake of the October 7, 2023, Hamas attack on Israel, as groups were targeting the US over its support for Israel’s genocidal war in Gaza. The attacks were claimed by a PMF-affiliated group that calls itself the Islamic Resistance in Iraq.

Those attacks culminated in the January 2024 attack on Tower 22, a secretive US base in Jordan near the Syrian border, which killed three US Army Reserve soldiers and wounded dozens of National Guard members. The US launched major airstrikes against the PMF in response, killing 40 people, and assassinated a high-level commander in Kataib Hezbollah, one of the main Iran-aligned militias.


Trump heads to Pittsburgh to announce $70 billion AI, energy investment with Sen. Dave McCormick

President Donald Trump and Sen. Dave McCormick (R-PA) will team up in Pittsburgh on Tuesday to announce a $70 billion investment in AI and energy for the state. The push will involve thousands of new jobs for Pennsylvanians.

The announcement planned for Tuesday will come during the inaugural Pennsylvania Energy and Innovation Summit in Pittsburgh, and will “bring together the world’s top leaders in energy and AI, alongside the biggest global investors, labor and trades leaders, and government officials, to showcase Pennsylvania’s incredible potential to power the AI revolution,” a press release from McCormick’s office stated.

McCormick will be joined by Trump, leaders in energy, and others at Carnegie Mellon University for the announcement. The $70 billion is the “largest investment commitment in these industries in terms of dollars for the state and jobs created in the history of the Commonwealth,” McCormick said, per Axios.

“Anticipated investments include new data centers, new power generation and grid infrastructure to meet surging data center demand, along with AI training programs and apprenticeships for businesses,” a preview of McCormick’s speech adds.

The COO of Blackstone, an alternative asset manager, is expected to announce a $25 billion investment in the infrastructure needed for AI, including energy and data centers. The investment is expected to spur 6,000 annual construction jobs and 3,000 permanent jobs.


The ‘Economy of Genocide’ Report: A Reckoning Beyond Rhetoric

Francesca Albanese, the United Nations Special Rapporteur on the situation of human rights in occupied Palestine, stands as a testament to the notion of speaking truth to power. This “power” is not solely embodied by Israel or even the United States, but by an international community whose collective relevance has tragically failed to stem the ongoing genocide in Gaza.

Her latest report, ‘From Economy of Occupation to Economy of Genocide,’ submitted to the UN Human Rights Council on July 3, marks a seismic intervention. It unflinchingly names and implicates companies that have not only allowed Israel to sustain its war and genocide against Palestinians, but also confronts those who have remained silent in the face of this unfolding horror.

Albanese’s ‘Economy of Genocide’ is far more than an academic exercise or a mere moral statement in a world whose collective conscience is being brutally tested in Gaza. The report is significant for multiple, interlocking reasons. Crucially, it offers practical pathways to accountability that transcend mere diplomatic and legal rhetoric. It also presents a novel approach to international law, positioning it not as a delicate political balancing act, but as a potent tool to confront complicity in war crimes and expose the profound failures of existing international mechanisms in Gaza.

Two contexts are vital to understanding the significance of this report, considered a searing indictment of direct corporate involvement not only in the ongoing Israeli genocide in Gaza, but in Israel’s overall settler-colonial project.

First, in February 2020, following years of delay, the UN Human Rights Council (UNHRC) released a database listing 112 companies involved in business activities within illegal Israeli settlements in occupied Palestine. The database exposed several corporate giants – including Airbnb, Booking.com, Motorola Solutions, JCB, and Expedia – for helping Israel maintain its military occupation and apartheid.

This event was particularly earth-shattering, considering the United Nations’ consistent failure to rein in Israel or to hold accountable those who sustain its war crimes in Palestine. The database was an important step that allowed civil society to mobilize around a specific set of priorities, thus pressuring corporations and individual governments to take morally guided positions. The effectiveness of that strategy was clear from the exaggerated and angry reactions of the US and Israel: the US said it was an attempt by “the discredited” Council “to fuel economic retaliation,” while Israel called it a “shameful capitulation” to pressure.

The Israeli genocide in Gaza, starting on October 7, 2023, however, served as a stark reminder of the utter failure of all existing UN mechanisms to achieve even the most modest expectations of feeding a starving population during a time of genocide. Tellingly, this was the same conclusion offered by UN Secretary-General António Guterres, who, in September 2024, stated that the world had “failed the people of Gaza.”

This failure continued for many more months and was highlighted in the UN’s inability to even manage the aid distribution in the Strip, entrusting the job to the so-called Gaza Humanitarian Foundation, a mercenary-run violent apparatus that has killed and wounded thousands of Palestinians. Albanese herself, of course, had already reached a similar conclusion when, in November 2023, she confronted the international community for “epically failing” to stop the war and to end the “senseless slaughtering of innocent civilians.”

Albanese’s new report goes a step further, this time appealing to the whole of humanity to take a moral stance and to confront those who made the genocide possible. “Commercial endeavors enabling and profiting from the obliteration of innocent people’s lives must cease,” the report declares, pointedly demanding that “corporate entities must refuse to be complicit in human rights violations and international crimes or be held to account.”

According to the report, the complicit entities fall into several categories: arms manufacturers, tech firms, building and construction companies, extractive and service industries, banks, pension funds, insurers, universities, and charities.

These include Lockheed Martin, Microsoft, Amazon, Palantir, IBM, and even Danish shipping giant Maersk, among nearly 1,000 other firms. It was their collective technological know-how, machinery, and data collection that allowed Israel to kill, to date, over 57,000 and wound over 134,000 in Gaza, let alone maintain the apartheid regime in the West Bank.


The Pentagon’s about to start using xAI’s Grok — and other federal agencies could be next

Elon Musk’s xAI is launching a new government-facing service. Its first client happens to be the largest employer on Earth.

The Department of Defense will pay up to $200 million for “Grok for Government,” a new collection of AI products geared toward use by federal, local, and state governments.

The department has also awarded similar contracts to Anthropic, Google, and OpenAI, which launched its own government-facing initiative last month.

“The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries,” Doug Matty, Chief Digital and AI Officer of the Department of Defense, said in a statement.

xAI said its government-facing products would include models designed specifically for national security purposes and eventually for use in classified environments.

The company also said those products would be available for purchase via the General Services Administration, opening the door for other federal agencies to use them.

The announcement comes less than a week after Grok went on an antisemitic rant on X. The company later apologized for the chatbot’s “horrific behavior,” though workers at the company erupted in anger internally over the incident.
