Open Source Tools for Fighting Disinformation

Deepfakes and disinformation have the ability to move financial markets, influence public opinion, and scam businesses and individuals out of millions of dollars. The Semantic Forensics program (SemaFor) is a DARPA-funded initiative to create comprehensive forensic technologies that help mitigate online threats perpetrated via synthetic and manipulated media. Over the last eight years, Kitware has helped DARPA create a powerful set of tools to analyze whether media has been artificially generated or manipulated. Kitware and DARPA are now bringing those tools out of the lab to defend digital authenticity in the real world.

Kitware has a history of building image and video forensics algorithms that defend against disinformation by detecting a wide range of manipulations, beginning with DARPA’s Media Forensics (MediFor) program. Building on this foundation, our team expanded its focus to include multimodal analysis of text, audio, and video under the SemaFor program. For more on Kitware’s contributions to SemaFor, check out the “Voices from DARPA” podcast episode “Demystifying Deepfakes,” featuring Arslan Basharat, assistant director of computer vision at Kitware.
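
Kitware’s production detectors are not described in this excerpt, but the general idea behind one family of manipulation checks can be sketched. Below is a deliberately minimal copy-move forgery heuristic, a standard textbook technique rather than Kitware’s actual algorithm, with all names invented for illustration: it hashes fixed-size pixel blocks and flags identical blocks that appear at more than one location, the signature of a region cloned within the same image.

```python
from collections import defaultdict

def copy_move_suspects(pixels, block=4):
    """Flag identical non-overlapping blocks that appear at different
    positions -- a simple copy-move forgery heuristic. `pixels` is a
    2D list of grayscale values."""
    h, w = len(pixels), len(pixels[0])
    seen = defaultdict(list)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # The block's raw contents serve as its hash key.
            key = tuple(pixels[y + dy][x + dx]
                        for dy in range(block) for dx in range(block))
            seen[key].append((y, x))
    # Blocks whose exact contents recur elsewhere are suspects.
    return [locs for locs in seen.values() if len(locs) > 1]

# A tiny 8x8 "image": the top-left 4x4 patch is pasted bottom-right,
# simulating a cloned region.
img = [[(y * 8 + x) % 97 for x in range(8)] for y in range(8)]
for dy in range(4):
    for dx in range(4):
        img[4 + dy][4 + dx] = img[dy][dx]

suspects = copy_move_suspects(img)  # the cloned pair is flagged
```

Real forensic systems use perceptually robust block features rather than exact pixel matches, so they survive recompression and resizing, but the flag-the-duplicates logic is the same.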

Keep reading

Vitalik Buterin Says OpenAI’s GPT-4 Has Passed The ‘Turing Test’

OpenAI’s GPT-4, a generative artificial intelligence (AI) model, has passed the Turing test, according to Ethereum co-founder Vitalik Buterin. 

The Turing test is a nebulous benchmark for AI systems purported to measure how human-like a conversational model is. It is named for famed mathematician Alan Turing, who proposed the test in 1950.

According to Turing, at the time, an AI system capable of generating text that fools humans into thinking they’re having a conversation with another human would demonstrate the capacity for “thought.”

Nearly 75 years later, the person largely credited with conceiving the world’s second most popular cryptocurrency has interpreted recent preprint research out of the University of California San Diego as indicating that a production model has finally passed the Turing test.

Keep reading

How Much Online Content Will Be Replaced by Artificial Intelligence?

In 2017, Stephen Hawking warned Wired magazine that artificial intelligence could one day outperform humans. “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.” AI has already been incorporated into many aspects of our lives, from medical tests and procedures to gaming and predictive text – but it is now becoming more common to see AI-generated content on the internet, including news sites.

Will Artificial Intelligence Replace Human Writing?

ChatGPT’s most advanced version, GPT-4, can generate documents up to 25,000 words long and comprehend more than 26 languages. As it and other artificial intelligence tools continue to learn and be incorporated into online content, many worry that automated text will soon entirely replace human writing. In fact, a well-known Washington newspaper bragged about using AI to generate more than 850 articles about the 2016 Rio Olympics.

AI-generated content has caused considerable mistrust because of deepfakes, misleading information, and scams. As Liberty Nation reported, the Center for Countering Digital Hate explained that artificial intelligence has a lot of influence and that AI image generators created “election disinformation in 41% of cases, including images that could support false claims about candidates or election fraud.” The Fourth Estate was once a trusted and valuable resource for news, but that public trust has been falling recently. How much worse will it be if outlets start depending on AI to broadcast information?

Keep reading

Ukraine Unveils New AI-Generated Foreign Ministry Spokeswoman

On Wednesday, Ukraine unveiled a new AI-generated spokeswoman who will deliver official statements on behalf of the country’s Ministry of Foreign Affairs.

The new AI-generated spokeswoman is named Victoria Shi, and her likeness is modeled after Ukrainian singer Rosalie Nombre.

Ukraine released a teaser of the new AI spokeswoman on the Ministry of Foreign Affairs’ YouTube page.

In the AI-generated spokeswoman’s first announcement, the digital representative stated, “My name symbolizes our main goal – the victory of Ukraine, and my last name – the artificial intelligence that created me. My work will consist of reporting operational and verified information of the consular department of the MFA of Ukraine to the public.”

Keep reading

Genocide profiteer IBM wins big on EU funding

Israel’s use of artificial intelligence to select targets in Gaza during the current genocide has garnered many headlines.

Few who have paid close attention to how Israel tests new technology on Palestinians can be surprised. Israel had previously signaled that its May 2021 attack on Gaza gave it an opportunity to experiment with AI.

The proper response to those signals would have been to halt any funding of AI research involving Israeli firms and institutions. The European Union has taken the opposite approach.

In September 2023, the EU authorized a project aimed at realizing a future in which collaboration between humans and AI “takes center stage.”

Participants in the project include IBM Israel – a subsidiary of the US-based giant.

IBM has a long and ignoble history of providing technology to abusers of human rights. Among its past clients were the German government during the Nazi era and South Africa’s apartheid regime.

More recently, IBM has been awarded a series of contracts to run technology support centers for the Israeli military. Robotics is a core feature of the latest such center.

It is a near certainty that IBM products can be found in Israel’s toolbox during the current genocide.

No questions about IBM’s ties to the Israeli military seem to have been asked by EU officials before they rubber-stamped the aforementioned project in September.

I have seen a copy of an “ethics check” carried out on the project – named HumAIne – at the EU’s request.

The exercise was one of box-ticking.

It came to the conclusion that HumAIne had an “exclusive focus on civil applications.” The only significant recommendation was that “an independent ethics adviser must be appointed with the relevant accumulated expertise” so that the project could be monitored.

The recommendation did not address IBM’s connections to Israel’s military. It merely referred to “ethical concerns” surrounding the project, particularly “the involvement of humans in the evaluation of AI systems.”

While HumAIne was signed off by the Brussels bureaucracy before the genocidal war on Gaza was declared in October, the EU has okayed a huge number of new research grants to Israel since then.

IBM Israel is among the recipients of those new grants. It is taking part in a project on data-sharing innovations, which the EU authorized in mid-November.

Keep reading

Black athletic director of Baltimore high school arrested for creating AI deepfake of white principal to stage race hoax

A Maryland school athletic director was arrested at BWI-Marshall Airport after it was discovered that he had allegedly spread an AI-generated impersonation of the Pikesville High School principal that framed the principal as racist. Dhazon Darien, 31, faces multiple charges over the deepfake, including stalking, theft, disruption of school operations, and retaliation against a witness.

The police investigation began in January, when a voice recording alleged to be of school principal Eric Eiswert began making the rounds. Eiswert was temporarily removed from his position as head of the school, and after the recording spread on social media, the school was inundated with phone calls, comments, and messages, according to The Baltimore Sun.

Keep reading

Report Sounds Alarm Over Growing Role of Big Tech in US Military-Industrial Complex

The center of the U.S. military-industrial complex has been shifting over the past decade from the Washington, D.C. metropolitan area to Northern California – a shift that is accelerating with the rise of artificial intelligence-based systems, according to a report published Wednesday.

The report – entitled How Big Tech and Silicon Valley Are Transforming the Military-Industrial Complex – was authored by Roberto J. González, a professor of cultural anthropology at San José State University, for the Costs of War Project at Brown University’s Watson Institute for International & Public Affairs.

The new paper comes amid the contentious rise of AI-powered lethal autonomous weapons systems, or killer robots; increasing reliance upon AI on battlefields from Gaza to Ukraine; and growing backlash from tech workers opposed to their companies’ products and services being used to commit or enable war crimes.

“Although much of the Pentagon’s $886 billion budget is spent on conventional weapon systems and goes to well-established defense giants such as Lockheed Martin, RTX, Northrop Grumman, General Dynamics, Boeing, and BAE Systems, a new political economy is emerging, driven by the imperatives of big tech companies, venture capital (VC), and private equity firms,” González wrote.

“As Defense Department officials have sought to adopt AI-enabled systems and secure cloud computing services, they have awarded large multibillion-dollar contracts to Microsoft, Amazon, Google, and Oracle,” he added. “At the same time, the Pentagon has increased funding for smaller defense tech startups seeking to ‘disrupt’ existing markets and ‘move fast and break things.’”

Keep reading

AI Chatbots Refuse To Produce ‘Controversial’ Output – Why That’s A Free Speech Problem

Google recently made headlines globally because its chatbot Gemini generated images of people of color, rather than white people, in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech and serve, respectively, as executive director and research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.
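
The report describes use policies rather than implementations, but the refusal pattern it criticizes can be illustrated with a deliberately crude, hypothetical sketch (the deny-list, function name, and response strings below are all invented for illustration): a blanket keyword gate refuses any prompt that touches a restricted topic, regardless of the prompt’s context or intent.

```python
# Hypothetical deny-list of "controversial" topics a provider might restrict.
DENY_TERMS = {"election fraud", "vaccine", "protest"}

def policy_gate(prompt: str) -> str:
    """Refuse any prompt matching a deny-listed term; otherwise 'generate'.
    A stand-in for a real moderation pipeline, reduced to its bluntest form."""
    lowered = prompt.lower()
    if any(term in lowered for term in DENY_TERMS):
        # Blanket refusal with no weighing of intent -- the kind of
        # over-broad censorship the free speech critique targets.
        return "REFUSED: topic restricted by use policy"
    return "GENERATED: " + prompt
```

Even a legitimate research question (“How do election fraud claims spread?”) would be refused by such a gate, which is why policy analysts distinguish topic-based blocking from harm-based moderation.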

Keep reading

AI Is Now Dogfighting With Fighter Pilots In The Air

Last year, the uniquely modified F-16 test jet known as the X-62A, flying in a fully autonomous mode, took part in a first-of-its-kind dogfight against a crewed F-16, the U.S. military has announced. This breakthrough test flight, during which a pilot was in the X-62A’s cockpit as a failsafe, was the culmination of a series of milestones that led 2023 to be the year that “made machine learning a reality in the air,” according to one official. These developments are a potentially game-changing means to an end that will feed directly into future advanced uncrewed aircraft programs like the U.S. Air Force’s Collaborative Combat Aircraft effort.

Details about the autonomous air-to-air test flight were included in a new video about the Defense Advanced Research Projects Agency’s (DARPA) Air Combat Evolution (ACE) program and its achievements in 2023. The U.S. Air Force, through the Air Force Test Pilot School (USAF TPS) and the Air Force Research Laboratory (AFRL), is a key participant in the ACE effort. A wide array of industry and academic partners are also involved in ACE. This includes Shield AI, which acquired Heron Systems in 2021. Heron developed the artificial intelligence (AI) ‘pilot’ that won DARPA’s AlphaDogfight Trials the preceding year, which were conducted in an entirely digital environment, and subsequently fed directly into ACE.

“2023 was the year ACE made machine learning a reality in the air,” Air Force Lt. Col. Ryan Hefron, the ACE program manager, says in the newly released video, seen in full below.

Keep reading

Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military

Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI quietly ended its prohibition against military work.

The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)

The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.

Keep reading