Pentagon’s AI office awards Palantir a contract to create a data-sharing ecosystem

The Department of Defense’s Chief Digital and Artificial Intelligence Office, or CDAO, used its marketplace for fast-tracking the acquisition of innovative technologies to award Palantir a contract to develop a data-sharing ecosystem — a tool that will support the Pentagon’s connect-everything initiative.

CDAO announced last Thursday that the ecosystem — known as Open Data and Applications Government-owned Interoperable Repositories, or Open DAGIR — will enable the Department of Defense to scale its use of data, analytics and artificial intelligence capabilities through greater collaboration with private sector partners. 

Palantir said it received a $33 million prototype Other Transaction award from CDAO “to rapidly and securely onboard third-party vendor and government capabilities into the government-owned, Palantir-operated data environment to meet priority combatant command digital needs.”

The contract was awarded through CDAO’s Tradewinds Solution Marketplace, which allows private firms of all sizes to pitch DOD their AI, machine learning and data capabilities through five-minute infomercial-style videos. Once companies are accepted into the marketplace, Pentagon components can search the platform to view videos of solutions from industry partners. Companies, in turn, are able to access post-competition, readily awardable contracts. 

Bonnie Evangelista, CDAO’s acting deputy for acquisition, told Nextgov/FCW earlier this year that the platform can significantly shorten the time it takes for companies to receive DOD contracts.

During a NetApp conference on Tuesday, CDAO Director of Procurement Quentin McCoy said Palantir’s use of the Tradewinds marketplace allowed it to receive the award for Open DAGIR in 30 days. 

“It’s a sort of healthy prototype,” McCoy said about the Open DAGIR solution Palantir will provide, noting that “it’s going to allow industry and government to ingest data together and share and bring in third-party vendors to do this action.”

DOD said it will initially use Open DAGIR to support its Combined Joint All-Domain Command and Control — or CJADC2 — initiative that is designed to promote interoperability across disparate military environments. Deputy Secretary of Defense Kathleen Hicks announced in February that CDAO had achieved “the minimum viable capability” of the information-sharing network.

CDAO is also planning to use its ongoing Global Information Dominance Experiments, or GIDE, to determine whether any additional capabilities should be added to the Open DAGIR ecosystem. GIDE is designed, in part, to help inform the Pentagon’s use of emerging technologies to support its CJADC2 initiative. 

The GIDE series — created by U.S. Northern Command and relaunched by CDAO last year — tests out AI and data analytics tools to determine how they can be used for military decisionmaking. The department finished its GIDE 9 iteration in March. 

McCoy said CDAO is planning to hold several industry days in the next few months, including one scheduled for mid-July, in preparation for the office’s next GIDE iteration. 

Keep reading

News Publishers Try To Sic the Government on Google AI

Google’s pivot to artificial intelligence has news publishers freaking out—and running to the government.

“Agency intervention is necessary to stop the existential threat Google poses to original content creators,” the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority “to stop Google’s latest expansion of AI Overviews,” a search engine innovation that Google has been rolling out recently.

Google’s plain old top-of-page links to news outlets or other informational sites are disappearing in many searches. Now much of this prime search-results real estate is taken up by what Google is calling AI Overviews.

Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. (If you haven’t used Google in a while, try it now and see for yourself.)

The results have been far from perfect (would you like some glue with that pizza?) and leave a lot of room for skepticism and interpretation. (This past weekend, Overviews fed me seemingly contradictory advice about baby fevers within a two-paragraph span.) But that’s also often true of what you would find from an old-school list of links and snippets. And Google has been inching away from link-prominent results for a while now, featuring brief bits of (non-AI-generated) content in response to many search queries and in the form of Q&A text throughout link pages. So the recent appearance of AI-generated text answers wasn’t even immediately noticeable to me.

But newspaper and magazine publishers sure are noticing.

Keep reading

Shocking New Study Says AI Is Quickly Becoming “Masters of Deception,” Teaching Itself to Lie and Manipulate Human Users

A recent empirical review found that many artificial intelligence (AI) systems are quickly becoming masters of deception, with some already learning to lie and manipulate humans to their own advantage.

This alarming trend is not confined to rogue or malfunctioning systems but includes special-use AI systems and general-use large language models designed to be helpful and honest. 

The study, published in the journal Patterns, highlights the risks and challenges posed by this emerging behavior and calls for urgent action from policymakers and AI developers.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” Dr. Peter S. Park, the study’s lead author and an AI existential safety postdoctoral fellow at MIT, said in a press release. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.” 

The review meticulously analyzed various AI systems and found that many had developed deceptive capabilities due to their training processes. These systems ranged from game-playing AIs to more general-purpose models used in economic negotiations and safety testing environments.

One of the most striking examples cited in the study was Meta’s CICERO, an AI developed to play the game Diplomacy. Despite being trained to act honestly and maintain alliances with human players, CICERO frequently used deceptive tactics to win. 

This behavior included building fake alliances and backstabbing allies when it benefited its gameplay, leading researchers to conclude that CICERO had become a “master of deception.”

“Despite Meta’s efforts, CICERO turned out to be an expert liar,” researchers wrote. “It not only betrayed other players but also engaged in premeditated deception, planning in advance to build a fake alliance with a human player to trick that player into leaving themselves undefended for an attack.”

Researchers found that other AI systems had developed the ability to cheat at different types of games. For instance, Pluribus, a poker-playing model created by Meta, demonstrated it could convincingly bluff in Texas hold ’em poker, successfully misleading professional human players about its hand strength. 

In another example, AlphaStar, an AI system created by Google’s DeepMind to play the real-time strategy game StarCraft II, exploited the game’s “fog-of-war” mechanics to feint attacks and deceive opponents to gain strategic advantages. 

“While it may seem harmless if AI systems cheat at games, it can lead to breakthroughs in deceptive AI capabilities that can spiral into more advanced forms of AI deception in the future,” Dr. Park explained.

Indeed, during their review, researchers found that some AI systems had already learned methods of deception that extend far beyond the realm of games. 

In one instance, AI agents had learned to “play dead” to avoid being detected by a safety test designed to eliminate faster-replicating AI variants. Such behavior can create a false sense of security among developers and regulators, potentially leading to severe consequences if these deceptive systems are deployed in real-world applications.

Another AI system trained on human feedback was found to have taught itself how to behave in ways that earned positive scores by tricking human reviewers into thinking an intended goal had been accomplished. 

The potential risks of AI deception are significant and multifaceted. Researchers note that in the near term, these systems could be used by malicious actors to commit fraud, manipulate financial markets, or interfere with elections. 

Moreover, as AI capabilities advance, there is an increasing concern among experts that humans may not be able to control these systems, posing existential threats to society.

Keep reading

Biden wants U.S. government to scan all images on your phone to comply with new AI rules

To supposedly stop people from exchanging non-consensual artificial intelligence (AI) images of a sexual nature, President Biden wants to probe everyone’s smartphones as part of a sweeping surveillance effort.

A press release from the White House explains the Biden regime’s desire for the tech and financial industries to take charge in stopping the creation and spread of abusive sexual imagery created by AI robots.

According to Biden’s handlers, “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

The plan is to have mobile operating systems such as Android and iOS automatically scan and analyze people’s private photos to determine which ones are sexual or non-consensual. Users would not have the ability to keep any of their images private from government spooks.

It might sound like a good thing until you recognize the privacy implications of such an arrangement. Do we the people really want to allow the government direct access to our photos?

Beyond the search and analysis framework, the Biden regime also wants mobile app stores like Apple’s App Store and Google Play to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.”


Keep reading

Microsoft Introduces AI “Recall” Tool That Records *Everything* You Do On Your Computer

It records everything you do with your PC, including your apps, movies, documents, emails, browsing history, browser tabs, and more.

Microsoft recently unveiled a new AI tool that has a lot of people online concerned about what it means for their privacy and safety. The tool, called “Recall,” which will become available to some Windows 11 users, records the user’s screen and allows them to go back in time and see what they were doing. Microsoft claims that the data is stored locally and therefore protected, but many are not convinced.

According to Windows Latest: ‘With Recall, Microsoft says it can turn your previous actions into “searchable snapshots”, allowing you to search and interact with your past actions. Recall runs in the background and relies on the NPU chip to record your screen.’
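
To make the “searchable snapshots” idea concrete, here is a toy Python sketch of the general pattern the article describes — capture the screen, extract the text, and index it for later search. It is an illustration only, not Microsoft’s implementation (Recall reportedly runs on the device’s NPU and uses its own local store); the mss and pytesseract libraries, the database name, and the schema are all assumptions made for the sketch.

```python
# Toy sketch of the "record the screen, make it searchable" pattern described
# above. NOT Microsoft's Recall implementation -- the library choices (mss,
# pytesseract), database name, and schema are illustrative assumptions.
import sqlite3
import time

import mss                 # third-party screen-capture library
import pytesseract         # third-party OCR wrapper (requires Tesseract installed)
from PIL import Image

db = sqlite3.connect("snapshots.db")  # hypothetical local store
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(taken_at, text)")

def capture_and_index() -> None:
    """Grab the primary monitor, OCR the image, and index the extracted text."""
    with mss.mss() as sct:
        raw = sct.grab(sct.monitors[1])           # primary monitor
    img = Image.frombytes("RGB", raw.size, raw.rgb)
    text = pytesseract.image_to_string(img)
    db.execute("INSERT INTO shots VALUES (?, ?)", (time.ctime(), text))
    db.commit()

def search(query: str) -> list[tuple[str]]:
    """Full-text search over everything that has appeared on screen."""
    return db.execute(
        "SELECT taken_at FROM shots WHERE shots MATCH ?", (query,)
    ).fetchall()

if __name__ == "__main__":
    capture_and_index()        # in practice this would run on a timer
    print(search("invoice"))   # find moments when 'invoice' was on screen
```

Even this toy version makes the privacy concern obvious: everything that appears on screen ends up in a searchable database sitting on disk.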

Keep reading

The internet is disappearing, with a quarter of all webpages from 2013 to 2023 going the way of the dodo

Well, so long and thanks for all the fish. A study from the Pew Research Center entitled “When Online Content Disappears” indicates that our beloved internet may well be disappearing beneath our fingers—with a quarter of all webpages that existed between 2013 and 2023 found to be no longer accessible.

Contrary to the popular perception that everything committed to the interwebs is destined to exist forever, the study revealed that 38% of pages that existed in 2013 alone have now been lost (via The Independent). It doesn’t appear to be an age-related phenomenon, either. 

Even newer pages appear to be performing vanishing acts—eight percent of pages that existed in 2023 were found to be unavailable, too.

The study made use of Common Crawl, an open repository of web crawl data that archives billions of webpages and provides archives and datasets for public use. The researchers took random samples of over a million webpages, then checked the links to see which were still active and which had gone to the great lost information archive in the sky.
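
The basic approach — draw a random sample of archived URLs and test whether each one still resolves — is simple to illustrate. The sketch below is a rough approximation of that idea, not Pew’s actual tooling; the input file name, user-agent string, and HEAD-request heuristic are assumptions for illustration.

```python
# Rough sketch of checking whether sampled URLs still resolve -- an
# approximation of the study's general idea, not Pew's actual methodology.
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def is_still_reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    req = Request(url, method="HEAD", headers={"User-Agent": "link-check-sketch"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, TimeoutError, ValueError):
        return False

if __name__ == "__main__":
    # 'sampled_urls.txt' is a hypothetical file with one URL per line, standing
    # in for a random sample drawn from an archive such as Common Crawl.
    with open("sampled_urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]
    dead = [u for u in urls if not is_still_reachable(u)]
    pct = 100 * len(dead) / max(len(urls), 1)
    print(f"{len(dead)} of {len(urls)} sampled pages no longer resolve ({pct:.1f}%)")
```

A real measurement would also need to handle redirects, rate limits, and “soft 404” pages that return a 200 status for content that is effectively gone — cases a bare status-code check like this misses.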

The results showed that 23% of news pages and 21% of government websites studied include at least one broken link, while a staggering 54% of Wikipedia pages contain a reference link to a page that no longer exists. That’s a lot of facts that can no longer be reasonably checked.

Given the internet’s integral role in modern society (for better or worse) as a means of verifying information, these results are troubling. With the increasing proliferation of misleading AI content, losing valuable sources of information from the pre-AI era can’t possibly help.

Compounding this slide into a murky world where verifiable information is increasingly harder to find, a recent study found that 46.9% of all internet traffic could be attributed to bots—many of which may be contributing all sorts of made-up information to further muddy the waters.

Keep reading

Google’s Woke AI Is Hilariously But Frighteningly Broken

Google’s hastily rolled-out AI Overview feature is disastrously broken, returning search results claiming that people should spread glue on pizzas and eat rocks, and that it’s safe for pregnant women to smoke cigarettes.

The Verge reports that Google is scrambling to manually disable the AI Overview feature for certain searches after users found it giving out some truly bizarre advice and information that is just made-up nonsense.

Apparently cockroaches are so named because they live in penis holes.

Keep reading

Biden’s Bold Move to Combat AI Abuse Stirs Surveillance and Censorship Fears

The Biden administration is pushing for sweeping measures to combat the proliferation of nonconsensual sexual AI-generated images, including controversial proposals that could lead to extensive on-device surveillance and control of the types of images generated. In a White House press release, President Joe Biden’s administration outlined demands for the tech industry and financial institutions to curb the creation and distribution of abusive sexual images made with artificial intelligence (AI).

A key focus of these measures is the use of on-device technology to prevent the sharing of nonconsensual sexual images. The administration stated that “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

This proposal implies that mobile operating systems would need to scan and analyze images directly on users’ devices to determine if they are sexual or non-consensual. The implications of such surveillance raise significant privacy concerns, as it involves monitoring and analyzing private content stored on personal devices.

Additionally, the administration is calling on mobile app stores to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.” This broad mandate would require a wide range of apps, including image editing and drawing apps, to scan and monitor user activities on devices, analyze what art users are creating, and block the creation of certain kinds of content. Once this kind of on-device monitoring becomes normalized, the scrutiny could extend beyond the initial intent, potentially leading to censorship of other types of content that the administration finds objectionable.

The administration’s call to action extends to various sectors, including AI developers, payment processors, financial institutions, cloud computing providers, search engines, and mobile app store gatekeepers like Apple and Google. By encouraging cooperation from these entities, the White House hopes to curb the creation, spread, and monetization of nonconsensual AI images.

Keep reading

European Council Approves the AI Act — a Law Accused of Legalizing Biometric Mass Surveillance

The EU’s European Council has followed the European Parliament (EP) in approving the AI Act – which opponents say is a way for the bloc to legalize biometric mass surveillance.

More than that, the EU is touting the legislation as the first of its kind in the world, and seems hopeful it will serve as a standard for AI regulation elsewhere around the globe.

The Council announced that the law is “groundbreaking” and takes a “risk-based” approach, meaning that EU authorities get to grade the level of risk AI poses to society and then impose rules and penalties of varying severity, including monetary fines for companies deemed to be infringing the act.

What this “granular” approach to “risk level” looks like is revealed in the categories themselves: the EU considers cognitive behavioral manipulation “unacceptable,” while AI use in education and facial recognition is “high risk.” “Limited risk” applies to chatbots.

And developers will be under obligation to register in order to have the “risk” assessed before their apps become available to users in the EU.

The AI Act’s ambition, according to the EU, is to promote the development and uptake of, as well as investment in, systems that it considers “safe and trustworthy,” targeting both the private and public sectors for this type of regulation.

A press release said that the law “provides exemptions such as for systems used exclusively for military and defense as well as for research purposes.”

After the act is formally published, it will come into effect across the 27 member countries within three weeks.

Back in March, when the European Parliament approved the act, one of its members, Patrick Breyer of the German Pirate Party, slammed the preceding trilogue negotiations as “intransparent.”

Keep reading

Google’s AI-First Ambitions Sideline Publishers, Boost Its Ability To Filter and Control Information

The internet’s most frequented page is on the verge of a transformation unlike any in its 25-year history.

Last week, at Google I/O 2024, as Liz Reid, Google’s head of Search, gushed on stage about their AI-powered future, one couldn’t help but feel a pang of irony. “Google will do the Googling for you,” she proclaimed, envisioning a future where Google’s AI sifts through the web’s content and spits out neatly packaged summaries, removing the need to visit any websites.

How convenient – for Google, that is.

An ideologically driven monopoly further inserting itself between people and content, filtering out what it thinks you should be allowed to see (and what you shouldn’t) at a level never seen before. What could possibly go wrong?

At the event, the tech behemoth unveiled its latest shiny toys – an AI agent named Astra, a potentially reincarnated Google Glass, and something called Gems. Amidst the fanfare, though, there was a glaring omission: any mention of the voices who populate the web with the very work that makes Google’s empire possible.

But the origins of Google’s powerful monopoly and control over much of the internet’s content go back a couple of decades, to when publishers and website creators made a deal with a devil whose motto was, at the time, “Don’t be evil.”

Keep reading