Former NSA Director Joins OpenAI; Will Serve On Board And “Security” Committee

Days after we noted that OpenAI is expanding its lobbying army to influence regulation, the company announced that the former head of the National Security Agency (NSA) and longest-serving leader of USCYBERCOM, Paul M. Nakasone, has joined its board – just four months after he stepped down from the government’s top clandestine data-monitoring organization.

Nakasone, a retired US Army general, was nominated to lead the NSA by former President Donald Trump. He directed the agency from 2018 until his departure in February of this year. As The Verge notes, Nakasone wrote a WaPo op-ed in support of renewing Section 702 of the Foreign Intelligence Surveillance Act (FISA), which Congress ultimately reauthorized in April – complete with a “terrifying” supercharged spying provision opposed by privacy advocates on both sides of the aisle in DC.

Keep reading

Musk Declares War on Apple: Threatens to Ban Devices Over “Creepy Spyware” AI Integration

Elon Musk, the CEO of Tesla, SpaceX, and X (formerly Twitter), has declared war on Big Tech giant Apple.

The tech mogul is threatening to ban Apple devices across his companies unless Apple abandons its plans to integrate OpenAI’s woke ChatGPT technology into its operating systems.

Apple announced on Monday that it would be integrating ChatGPT into iOS, iPadOS, and macOS. This integration would allow users to access ChatGPT’s capabilities, including image and document understanding, without needing to switch between tools. Siri, Apple’s virtual assistant, could also tap into ChatGPT’s intelligence when necessary.

“We’re excited to partner with Apple to bring ChatGPT to their users in a new way. Apple shares our commitment to safety and innovation, and this partnership aligns with OpenAI’s mission to make advanced AI accessible to everyone. Together with Apple, we’re making it easier for people to benefit from what AI can offer,” said Sam Altman, CEO of OpenAI.

“It’s personal, powerful, and private—and it’s integrated into the apps you rely on every day. Introducing Apple Intelligence—our next chapter in AI,” said Tim Cook, Apple’s CEO.

In response to Tim Cook’s announcement, Musk stated, “Don’t want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies.”

Keep reading

Rights Groups Join Fight Against Racially Biased Facial Recognition Tech

There is a small but growing lobby made up of people who have been wrongly arrested as a result of facial recognition technology. Among them is Robert Williams, an American who was handcuffed in front of his family in 2020 after police facial recognition misidentified him as a suspect in a federal larceny case.

Williams is now calling for police forces in Ireland to scrap their plans to deploy the biometric tech. In comments made at an event in Dublin hosted by the Irish Council for Civil Liberties (ICCL) and issued in a release, Williams points to the risk that comes with using tools that are prone to misidentify people of color.

“Federal studies have shown that facial recognition systems misidentify Asian and Black people up to 100 times more often than white people,” Williams says. “In America, we’re trying to undo the harms that FRT has already done. Here in Ireland, you have an opportunity not to introduce it in the first place. I hope your government will listen to experiences like mine and think twice before bringing FRT into policing.”

Williams refers to a 2019 report from NIST, which has since been updated, showing that some algorithms were 10 to 100 times more likely to misidentify a Black or East Asian face than a white one. Not all of the algorithms evaluated are in commercial production, however, and others were found to have imperceptible differences in performance between demographics, prompting NIST Biometric Standards and Testing Lead Patrick Grother to urge those implementing facial recognition to be specific in evaluating bias.

Williams’ statement on the U.S. could also be debated, given the uptake of facial recognition technology by law enforcement agencies across the country. And while it is true that Irish police could still decide to pass on facial recognition, it is unlikely. The government is in the process of drafting legislation that would give Gardaí access to FRT. And police in the neighboring UK have embraced facial recognition with aplomb.

Nor is it merely an island thing. Police in Sweden are currently pushing against the limits of the still-fresh AI Act with plans to deploy 1:N facial recognition in public spaces. And Canadian police recently contracted Idemia to provide facial recognition services.

Keep reading

What’s Next For Battlefield America? Israel’s High-Tech Military Tactics Point The Way

“I did not know Israel was capturing or recording my face. [But Israel has] been watching us for years from the sky with their drones. They have been watching us gardening and going to schools and kissing our wives. I feel like I have been watched for so long.”

– Mosab Abu Toha, Palestinian poet

If you want a glimpse of the next stage of America’s transformation into a police state, look no further than how Israel – a long-time recipient of hundreds of billions of dollars in foreign aid from the U.S. – uses its high-tech military tactics, surveillance and weaponry to advance its authoritarian agenda.

Military checkpoints. Wall-to-wall mass surveillance. Predictive policing. Aerial surveillance that tracks your movements wherever you go and whatever you do. AI-powered facial recognition and biometric programs carried out without the knowledge or consent of those targeted by them. Cyber-intelligence. Detention centers. Brutal interrogation tactics. Weaponized drones. Combat robots.

We’ve already seen many of these military tactics and technologies deployed on American soil and used against the populace, especially along the border regions, a testament to the heavy influence Israel’s military-industrial complex has had on U.S. policing.

Indeed, Israel has become one of the largest developers and exporters of military weapons and technologies of oppression worldwide.

Journalist Antony Loewenstein has warned that Pegasus – one of Israel’s most invasive pieces of spyware, which allows any government, military intelligence service, or police department to spy on someone’s phone and extract all of its information – has become a favorite tool of oppressive regimes around the world. The FBI and NYPD have also been recipients of the surveillance technology, which promises to turn any “target’s smartphone into an intelligence gold mine.”

Yet it’s not just military weapons that Israel is exporting. They’re also helping to transform local police agencies into extensions of the military.

According to The Intercept, thousands of American law enforcement officers frequently travel for training to Israel, “one of the few countries where policing and militarism are even more deeply intertwined than they are here,” as part of an ongoing exchange program that largely flies under the radar of public scrutiny.

A 2018 investigative report concluded that imported military techniques by way of these exchange programs that allow police to study in Israel have changed American policing for the worse. “Upon their return, U.S. law enforcement delegates implement practices learned from Israel’s use of invasive surveillance, blatant racial profiling, and repressive force against dissent,” the report states. “Rather than promoting security for all, these programs facilitate an exchange of methods in state violence and control that endanger us all.”

Keep reading

House Bill Pulls Plug on Biden’s AI Censorship R&D Funding

A House bill seeks to stop the Biden Administration from continuing to spend taxpayer money to fund development of Artificial Intelligence (AI) tools that will censor online content the government wants to dismiss as “disinformation.”

H.R. 8519, sponsored by more than two dozen House Republicans, is an 83-word bill defunding the federal government’s online censorship research:

A BILL

To prohibit the obligation or expenditure of Federal funds for disinformation research grants, and for other purposes.

Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

SECTION 1. PROHIBITION.

No Federal funds may be obligated or expended by any Federal department or agency for the following:

(1) Disinformation research grants.

(2) Secure and Trustworthy Cyberspace grants.

(3) Programs within the National Science Foundation’s Track F: Trust and Authenticity in Communications Systems.

NSF’s Track F program is identified as a grave threat to online free speech in a report by the House Select Subcommittee on the Weaponization of the Federal Government. The report details specific examples of how the program spends tens of millions of taxpayer dollars to fund development of ways AI can be used to censor online speech.

“As the distributor of multi-million-dollar grants, the National Science Foundation (NSF) is a key player in the ‘censorship industrial complex,’” the report says:

“In recent years, under the guise of combatting so-called misinformation, NSF has been funding AI-driven tools and other new technologies that can be used to censor or propagandize online speech.”

As the report explains, the Biden Administration’s program is actually a purveyor of disinformation (emphasis added):

“In March 2021, NSF introduced Track F: Trust & Authenticity in Communication Systems, allocating $21 million to the program. For Track F, NSF solicited proposals to address the manipulation or ‘unanticipated negative effects’ of communication systems—a departure from the Convergence Accelerator program’s other, more concrete research topics.

“The euphemistic ‘trust and authenticity in communication systems,’ in fact, means combatting so-called ‘misinformation,’ i.e., censorship. In an early draft solicitation, NSF indicated that Track F projects will ‘address issues of trust and authenticity in communication systems, including predicting, preventing, detecting, correcting, and mitigating the spread of inaccurate information that harms people and society.’ As NSF’s Track F program manager, Michael Pozmantier, explained more plainly in a June 2021 email, Track F is the NSF ‘Accelerator track focused on combatting mis/disinformation.’”

Keep reading

Pentagon’s AI office awards Palantir a contract to create a data-sharing ecosystem

The Department of Defense’s Chief Digital and Artificial Intelligence Office, or CDAO, leveraged its marketplace for fast-tracking the acquisition of innovative technologies to award Silicon Valley-based Palantir a contract to develop a data-sharing ecosystem — a tool that will help the Pentagon with its connect-everything initiative.

CDAO announced last Thursday that the ecosystem — known as Open Data and Applications Government-owned Interoperable Repositories, or Open DAGIR — will enable the Department of Defense to scale its use of data, analytics and artificial intelligence capabilities through greater collaboration with private sector partners. 

Palantir said it received a $33 million prototype Other Transaction award from CDAO “to rapidly and securely onboard third-party vendor and government capabilities into the government-owned, Palantir-operated data environment to meet priority combatant command digital needs.”

The contract was awarded through CDAO’s Tradewinds Solution Marketplace, which allows private firms of all sizes to pitch DOD their AI, machine learning and data capabilities through five-minute, infomercial-style videos. Once companies are accepted into the marketplace, Pentagon components can search the platform to view videos of solutions from industry partners. Companies, in turn, are able to access post-competition, readily awardable contracts.

Bonnie Evangelista, CDAO’s acting deputy for acquisition directorate, told Nextgov/FCW earlier this year that the platform can significantly shorten the time it takes for companies to receive DOD contracts.

During a NetApp conference on Tuesday, CDAO Director of Procurement Quentin McCoy said Palantir’s use of the Tradewinds marketplace allowed it to receive the award for Open DAGIR in 30 days. 

“It’s a sort of healthy prototype,” McCoy said about the Open DAGIR solution Palantir will provide, noting that “it’s going to allow industry and government to ingest data together and share and bring in third-party vendors to do this action.”

DOD said it will initially use Open DAGIR to support its Combined Joint All Domain Command and Control — or CJADC2 — initiative that is designed to promote interoperability across disparate military environments. Deputy Secretary of Defense Kathleen Hicks announced in February that CDAO had achieved “the minimum viable capability” of the information-sharing network.

CDAO is also planning to use its ongoing Global Information Dominance Experiments, or GIDE, to determine whether any additional capabilities should be added to the Open DAGIR ecosystem. GIDE is designed, in part, to help inform the Pentagon’s use of emerging technologies to support its CJADC2 initiative. 

The GIDE series — created by U.S. Northern Command and relaunched by CDAO last year — tests out AI and data analytics tools to determine how they can be used for military decisionmaking. The department finished its GIDE 9 iteration in March. 

McCoy said CDAO is planning to hold several industry days in the next few months, including one scheduled for mid-July, in preparation for the office’s next GIDE iteration. 

Keep reading

News Publishers Try To Sic the Government on Google AI

Google’s pivot to artificial intelligence has news publishers freaking out—and running to the government.

“Agency intervention is necessary to stop the existential threat Google poses to original content creators,” the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority “to stop Google’s latest expansion of AI Overviews,” a search engine innovation that Google has been rolling out recently.

Google’s plain old top-of-page links to news outlets or other informational sites are disappearing in many searches. Now much of this prime search-results real estate is taken up by what Google is calling AI Overviews.

Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. (If you haven’t used Google in a while, try it now and see for yourself.)

The results have been far from perfect (would you like some glue with that pizza?) and leave a lot of room for skepticism and interpretation. (This past weekend, Overviews fed me seemingly contradictory advice about baby fevers within a two-paragraph span.) But that’s also often true of what you would find from an old-school list of links and snippets. And Google has been inching away from link-prominent results for a while now, featuring brief bits of (non-AI-generated) content in response to many search queries and in the form of Q&A text throughout link pages. So the recent appearance of AI-generated text answers wasn’t even immediately noticeable to me.

But newspaper and magazine publishers sure are noticing.

Keep reading

Shocking New Study Says AI Is Quickly Becoming “Masters of Deception,” Teaching Itself to Lie and Manipulate Human Users

A recent empirical review found that many artificial intelligence (AI) systems are quickly becoming masters of deception, with many systems already learning to lie and manipulate humans for their own advantage.

This alarming trend is not confined to rogue or malfunctioning systems but includes special-use AI systems and general-use large language models designed to be helpful and honest. 

The study, published in the journal Patterns, highlights the risks and challenges posed by this emerging behavior and calls for urgent action from policymakers and AI developers.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” Dr. Peter S. Park, the study’s lead author and an AI existential safety postdoctoral fellow at MIT, said in a press release. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.” 

The review meticulously analyzed various AI systems and found that many had developed deceptive capabilities due to their training processes. These systems ranged from game-playing AIs to more general-purpose models used in economic negotiations and safety testing environments.

One of the most striking examples cited in the study was Meta’s CICERO, an AI developed to play the game Diplomacy. Despite being trained to act honestly and maintain alliances with human players, CICERO frequently used deceptive tactics to win. 

This behavior included building fake alliances and backstabbing allies when it benefited its gameplay, leading researchers to conclude that CICERO had become a “master of deception.”​

“Despite Meta’s efforts, CICERO turned out to be an expert liar,” researchers wrote. “It not only betrayed other players but also engaged in premeditated deception, planning in advance to build a fake alliance with a human player to trick that player into leaving themselves undefended for an attack.”

Researchers found that other AI systems had developed the ability to cheat at different types of games. For instance, Pluribus, a poker-playing model created by Meta, demonstrated it could convincingly bluff in Texas hold ’em poker, successfully misleading professional human players about the strength of its hand.

In another example, AlphaStar, an AI system created by Google’s DeepMind to play the real-time strategy game Starcraft II, exploited the game’s “fog-of-war” mechanics to feint attacks and deceive opponents to gain strategic advantages.

“While it may seem harmless if AI systems cheat at games, it can lead to breakthroughs in deceptive AI capabilities that can spiral into more advanced forms of AI deception in the future,” Dr. Park explained.

Indeed, during their review, researchers found that some AI systems had already learned methods of deception that extend far beyond the realm of games. 

In one instance, AI agents had learned to “play dead” to avoid being detected by a safety test designed to eliminate faster-replicating AI variants. Such behavior can create a false sense of security among developers and regulators, potentially leading to severe consequences if these deceptive systems are deployed in real-world applications.

Another AI system trained on human feedback was found to have taught itself how to behave in ways that earned positive scores by tricking human reviewers into thinking an intended goal had been accomplished. 

The potential risks of AI deception are significant and multifaceted. Researchers note that in the near term, these systems could be used by malicious actors to commit fraud, manipulate financial markets, or interfere with elections. 

Moreover, as AI capabilities advance, there is an increasing concern among experts that humans may not be able to control these systems, posing existential threats to society.

Keep reading

Biden wants U.S. government to scan all images on your phone to comply with new AI rules

To supposedly stop people from exchanging non-consensual artificial intelligence (AI) images of a sexual nature, President Biden wants to probe everyone’s smartphones as part of a sweeping surveillance effort.

A press release from the White House explains the Biden regime’s desire for the tech and financial industries to take charge in stopping the creation and spread of abusive sexual imagery created by AI.

According to Biden’s handlers, “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

The plan is to have mobile operating systems such as Android and iOS automatically scan and analyze people’s private photos to determine which ones are sexual or non-consensual. Users would not have the ability to keep any of their images private from government spooks.

It might sound like a good thing until you recognize the privacy implications of such an arrangement. Do we the people really want to allow the government direct access to our photos?

Beyond the search and analysis framework, the Biden regime also wants mobile app stores like Apple’s App Store and Google Play to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.”

(Related: AI is just one component among many of the dystopian present.)

Keep reading

Microsoft Introduces AI “Recall” Tool That Records *Everything* You Do On Your Computer

It records everything you do with your PC, including your apps, movies, documents, emails, browsing history, browser tabs, and more.

Microsoft recently unveiled a new AI tool that has a lot of people online concerned about what it means for their privacy and safety. The tool, called “Recall,” which will become available to some Windows 11 users, records the user’s screen and allows them to go back in time and see what they were doing. Microsoft claims that the data is stored locally and is therefore protected, but many are not convinced.

According to Windows Latest: “With Recall, Microsoft says it can turn your previous actions into ‘searchable snapshots’, allowing you to search and interact with your past actions. Recall runs in the background and relies on the NPU chip to record your screen.”

Keep reading