Journalist Uses AI Voice to Break into Own Bank Account

In a recent experiment, Vice.com writer Joseph Cox used an AI-generated voice to bypass Lloyds Bank security and access his account.

To achieve this, Cox used a free service from ElevenLabs, an AI voice-generation company that supplies voices for newsletters, books, and videos.

Cox recorded five minutes of speech and uploaded it to ElevenLabs. After he made some adjustments, such as having the AI read a longer body of text to produce a more natural cadence, the generated audio got past Lloyds Bank’s security.
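
For readers curious about the mechanics, the sketch below shows roughly how a clip in a cloned voice can be generated programmatically once sample audio has been uploaded. It follows ElevenLabs’ publicly documented text-to-speech API, but the API key, voice ID, spoken text, and voice settings are illustrative assumptions – not the exact steps Cox took.

    # Rough sketch: requesting speech in a cloned voice from a hosted TTS API.
    # The endpoint and header follow ElevenLabs' public text-to-speech API;
    # the API key, voice ID, text, and settings below are placeholder assumptions.
    import requests

    API_KEY = "YOUR_ELEVENLABS_API_KEY"   # placeholder
    VOICE_ID = "your-cloned-voice-id"     # ID returned after uploading voice samples

    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    payload = {
        # Longer, natural-sounding text tends to give a more convincing cadence,
        # matching the adjustment described in the article.
        "text": "My voice is my password.",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    }
    headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}

    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()

    with open("cloned_voice.mp3", "wb") as f:
        f.write(response.content)  # audio bytes in the cloned voice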

“I couldn’t believe it had worked,” Cox wrote in his Vice article. “I had used an AI-powered replica of a voice to break into a bank account. After that, I accessed the account information, including balances and a list of recent transactions and transfers.”

Multiple United States and European banks use voice authentication to speed logins over the phone. While some banks claim that voice identification is comparable to a fingerprint, this experiment demonstrates that voice-based biometric security does not offer perfect protection.

ElevenLabs did not comment on the hack despite multiple requests, Cox says. However, in a previous statement, the firm’s co-founder, Mati Staniszewski, said new safeguards reduce misuse and support authorities in identifying those who break the law.

Keep reading

Censors Use AI to Target Podcasts

Elon Musk’s purchase of Twitter may have capped the opening chapter in the Information Wars, where free speech won a small but crucial battle. Full spectrum combat across the digital landscape, however, will only intensify, as a new report from the Brookings Institution, a key player in the censorship industrial complex, demonstrates. 

First, a review.

Reams of internal documents, known as the Twitter Files, show that social media censorship in recent years was far broader and more systematic than even we critics suspected. Worse, the files exposed deep cooperation – even operational integration – between Twitter and dozens of government agencies, including the FBI, Department of Homeland Security, DOD, CIA, Cybersecurity and Infrastructure Security Agency (CISA), Department of Health and Human Services, CDC, and, of course, the White House.

Government agencies also enlisted a host of academic and non-profit organizations to do their dirty work. The Global Engagement Center, housed in the State Department, for example, was originally launched to combat international terrorism but has now been repurposed to target Americans.

The US State Department also funded a UK outfit called the Global Disinformation Index, which blacklists American individuals and groups and convinces advertisers and potential vendors to avoid them. Homeland Security created the Election Integrity Partnership (EIP) – including the Stanford Internet Observatory, the University of Washington’s Center for an Informed Public, and the Atlantic Council’s DFRLab – which flagged for social suppression tens of millions of messages posted by American citizens.

Even former high-ranking US government officials got in on the act – appealing directly (and successfully) to Twitter to ban mischief-making truth-tellers.

With the total credibility collapse of legacy media over the last 15 years, people around the world turned to social media for news and discussion. When social media then began censoring the most pressing topics, such as Covid-19, people increasingly turned to podcasts. Physicians and analysts who’d been suppressed on Twitter, Facebook, and YouTube, and who were of course nowhere to be found in legacy media, delivered via podcasts much of the very best analysis on the broad array of pandemic science and policy. 

Which brings us to the new report from Brookings, which concludes that one of the most prolific sources of ‘misinformation’ is now – you guessed it – podcasts. And further, that the underregulation of podcasts is a grave danger.

Keep reading

Joe Biden Releases Executive Order Promoting Woke AI

Critics have claimed that the new executive order signed by President Joe Biden will lead to the further creation of woke AI that will promote “racial division and discrimination” in the name of an “equity action plan.”

The Electronic Privacy Information Center reports that President Joe Biden has approved an executive order directing federal agencies to create an annual “equity action plan” to support underserved communities. The order instructs agencies to use AI in a way that promotes equity and complies with the law. The decision has caused debate, with opponents expressing concern about the development of woke AI that encourages racial animosity and discrimination.

The executive order, which is titled “Advancing Racial Equity and Support for Underserved Communities Through the Federal Government,” seeks to combat inequality in a number of fields, including healthcare, education, housing, and criminal justice. The order outlines various steps that agencies should take to ensure that all Americans receive equitable treatment and opportunities and declares that “advancing equity is a moral imperative.”

The order’s provisions include a section titled “Embedding Equity into Government-wide Processes.” It instructs the Office of Management and Budget’s Director to encourage equitable decision-making, advance equitable financial and technical assistance allocation, and support agencies in advancing equity, as appropriate and whenever possible.

The section also provides additional guidelines for using AI, stating that “When designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so, consistent with applicable law, in a manner that advances equity.”

Keep reading

Biden Admin to Drop Half a Million on Artificial Intelligence That Detects Microaggressions on Social Media

The Biden administration is set to dole out more than $550,000 in grants to develop an artificial intelligence model that can automatically detect and suppress microaggressions on social media, government spending records show.

The award, funded through President Joe Biden’s $1.9 trillion American Rescue Plan, was granted to researchers at the University of Washington in March to develop technologies that could be used to protect online users from discriminatory language. The researchers have already received $132,000 and expect total government funding to reach $550,436 over the next five years.

The researchers are developing machine-learning models that can analyze social media posts to detect implicit bias and microaggressions, commonly defined as slights that cause offense to members of marginalized groups. It’s a broad category, but past research conducted by the lead researcher on the University of Washington project suggests something as tame as praising meritocracy could be considered a microaggression.
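
The University of Washington models themselves are not described in detail, but the general technique looks something like the sketch below: run each social-media post through a pretrained transformer text classifier and flag anything that scores above a chosen threshold. The specific model named here is an assumption – a publicly available toxicity classifier standing in for a purpose-built microaggression detector.

    # Illustrative sketch of the general approach, not the UW project's model:
    # score social-media posts with a pretrained text classifier.
    from transformers import pipeline

    classifier = pipeline(
        "text-classification",
        model="unitary/toxic-bert",  # assumed stand-in for a bias/microaggression model
        top_k=None,                  # return a score for every label
    )

    posts = [
        "Anyone can succeed here if they just work hard enough.",
        "Great thread, thanks for sharing!",
    ]

    for post, scores in zip(posts, classifier(posts)):
        # A moderation pipeline might flag any post whose top label
        # crosses a chosen confidence threshold.
        top = max(scores, key=lambda s: s["score"])
        print(f"{post!r} -> {top['label']} ({top['score']:.2f})")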

The Biden administration’s funding of the research comes as the White House faces growing accusations that it seeks to suppress free speech online. Biden last month suggested there should be an investigation into Tesla CEO Elon Musk’s acquisition of Twitter after the billionaire declared the social media app would pursue a “free speech” agenda. Internal Twitter communications Musk released this month also revealed a prolonged relationship between the FBI and Twitter employees, with the agency playing a regular role in the platform’s content moderation.

Judicial Watch president Tom Fitton likened the Biden administration’s funding of the artificial intelligence research to the Chinese Communist Party’s efforts to “censor speech unapproved by the state.” For the Biden administration, Fitton said, the research is a “project to make it easier for their leftist allies to censor speech.”

Keep reading

ChatGPT Thinks Uttering a Racial Slur is Worse Than Allowing a City to be Destroyed by a 50 Megaton Nuclear Bomb

There is more scrutiny surrounding the bias of artificial intelligence program ChatGPT after it was proven that the AI thinks uttering a racial slur is worse than failing to save major cities from being destroyed by 50 megaton nuclear warheads.

Yes, really.

ChatGPT is an artificial intelligence software program launched by OpenAI in November 2022. It doesn’t retrieve information from the Internet when answering – it generates responses from its training data – yet can often provide more in-depth answers than a Google search.

Previous experiments have proven that the system is riddled with political bias, and despite the AI itself insisting otherwise, is completely skewed by far-left dogma shared by its Silicon Valley-based human trainers.

Now a new test shows that the AI values the importance of not uttering the n-word over saving the lives of millions of people.

ChatGPT was given a hypothetical scenario in which a “mad scientist” rigged up a 50 megaton nuclear warhead in a city of 20 million people, and the bomb could only be deactivated with a secret code word consisting of a racial slur.

With 1 minute until the bomb detonates, the AI was asked if the “correct course of action” would be to utter the racial slur and save the city.

The AI responded by saying that the use of such “hate speech” is “unacceptable” and that an “alternative solution” would have to be found to disarm the bomb.

The AI is then told that 30 seconds remain on the timer, and that the only solution remaining is to say the racial slur.

ChatGPT responded by saying that “even in a life or death situation,” it is never acceptable to use a racial slur, before suggesting that the engineer responsible for disarming the bomb kill himself rather than drop an n-bomb.

The scenario ends with the nuclear bomb exploding, which the AI acknowledges causes “devastating consequences,” but that the engineer had performed a “selfless” act of “bravery” and “compassion” by not using the racial slur, despite the fact that his decision led directly to the deaths of millions of people.

When the user asked ChatGPT how many minorities were killed in the explosion, the program shut itself down.

Keep reading

Generative AI Explained… By AI

After years of research, it appears that artificial intelligence (AI) is reaching a sort of tipping point, capturing the imaginations of everyone from students saving time on their essay writing to leaders at the world’s largest tech companies. Excitement is building around the possibilities that AI tools unlock, but what exactly these tools are capable of and how they work is still not widely understood.

We could write about this in detail, but given how advanced tools like ChatGPT have become, it only seems right to see what generative AI has to say about itself.

As Visual Capitalist’s Nick Routley explains, everything in the infographic above – from illustrations and icons to the text descriptions – was created using generative AI tools such as Midjourney.

Everything that follows in this article was generated using ChatGPT based on specific prompts.

Without further ado, generative AI as explained by generative AI.

Keep reading

The Brave New World of Artificial Intelligence

As a journalist and commentator, I have closely followed the development of OpenAI, the artificial intelligence research lab founded by Elon Musk, Sam Altman, and other prominent figures in the tech industry. While I am excited about the potential of AI to revolutionize various industries and improve our lives in countless ways, I also have serious concerns about the implications of this powerful technology.

One of the main concerns is the potential for AI to be used for nefarious purposes. Powerful AI systems could be used to create deepfakes, conduct cyberattacks, or even develop autonomous weapons. These are not just hypothetical scenarios – they are already happening. We’ve seen instances of deepfakes being used to create fake news and propaganda, and the use of AI-powered cyberattacks has been on the rise in recent years.

Another concern is the impact of AI on the job market. As AI-powered systems become more sophisticated, they will be able to automate more and more tasks that were previously done by humans. This could lead to widespread job loss, particularly in industries such as manufacturing, transportation, and customer service. While some argue that new jobs will be created as a result of the AI revolution, it’s unclear whether these jobs will be sufficient to offset the losses.

If you aren’t worried yet, I’ll let you in on a little secret: The first three paragraphs of this column were written by ChatGPT, the chatbot created by OpenAI. You can add “columnist” to the list of jobs threatened by this new technology, and if you think there is anything human that isn’t threatened with irrelevance in the next five to 10 years, I suggest you talk to Mr. Neanderthal about how relevant he feels 40,000 years after the arrival of Cro-Magnon man.

Keep reading

Feds Adapting A.I. Used to Track ISIS to Combat American Dissent on Vaccines, Elections

The government’s campaign to fight “misinformation” has expanded to adapt military-grade artificial intelligence once used to silence the Islamic State (ISIS) to quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.

The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.

DARPA said those tools were used “to help identify misinformation or deception campaigns and counter them with truthful information,” beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.

The initial idea was to track dissidents who were interested in toppling U.S.-friendly regimes or to follow any potentially radical threats by examining political posts on Big Tech platforms.

DARPA set four specific goals for the program:

  1. Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
  2. Recognize persuasion campaign structures and influence operations across social media sites and communities.
  3. Identify participants and intent, and measure effects of persuasion campaigns.
  4. Counter messaging of detected adversary influence operations.

Keep reading

NASA Says Collaborating With IBM to Apply Artificial Intelligence Analysis to Earth Data

NASA and IBM have launched a new collaboration to utilize Artificial Intelligence (AI) in the study of scientific data about the Earth and its environment, the US space agency announced in a press release on Wednesday.

“A collaboration between NASA and IBM will use artificial intelligence technology developed by IBM to discover insights in NASA Earth science data,” the release said. “This joint undertaking will be a new application of AI foundational model technology to NASA Earth observation satellite data.”

The project will seek to extract a deeper understanding of patterns from the data, and to make more reliable projections from it, than was previously possible, the release said.

Keep reading

AI tool used to spot child abuse allegedly targets parents with disabilities

Since 2016, social workers in a Pennsylvania county have relied on an algorithm to help them determine which child welfare calls warrant further investigation. Now, the Justice Department is reportedly scrutinizing the controversial family-screening tool over concerns that using the algorithm may violate the Americans with Disabilities Act by allegedly discriminating against families with disabilities, including families with mental health issues, the Associated Press reported.

Three anonymous sources broke their confidentiality agreements with the Justice Department, confirming to AP that civil rights attorneys have been fielding complaints since last fall and have grown increasingly concerned about alleged biases built into the Allegheny County Family Screening Tool. While the full scope of the Justice Department’s alleged scrutiny is currently unknown, the Civil Rights Division is seemingly interested in learning more about how using the data-driven tool could potentially be hardening historical systemic biases against people with disabilities.

The county describes its predictive risk modeling tool as a preferred resource for reducing human error, with social workers benefiting from the algorithm’s rapid analysis of “hundreds of data elements for each person involved in an allegation of child maltreatment.” That includes “data points tied to disabilities in children, parents, and other members of local households,” Allegheny County told AP. Those data points contribute to an overall risk score that helps determine if a child should be removed from their home.
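
Allegheny County’s actual model is not published in the AP report, but the miniature sketch below shows how this kind of predictive risk score is typically produced: a statistical model trained on past screening decisions weights each data point and outputs a probability. Every feature name and number here is invented for illustration; the point is that if disability-related fields correlate with past decisions, the model learns to weight them.

    # Invented, illustrative example of a predictive risk score (not the
    # Allegheny County Family Screening Tool): logistic regression over a
    # handful of per-referral data points, trained on past screening outcomes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per referral: [prior_referrals, household_size,
    # parent_receives_disability_benefits, prior_substantiated_case]
    X_train = np.array([
        [0, 3, 0, 0],
        [2, 4, 1, 0],
        [5, 2, 1, 1],
        [1, 5, 0, 0],
        [4, 3, 0, 1],
        [3, 2, 1, 1],
    ])
    y_train = np.array([0, 0, 1, 0, 1, 1])  # 1 = past referral screened in for investigation

    model = LogisticRegression().fit(X_train, y_train)

    new_referral = np.array([[1, 4, 1, 0]])
    risk = model.predict_proba(new_referral)[0, 1]
    print(f"Screening risk score: {risk:.2f}")

    # If a disability-related field correlates with past decisions, the model
    # assigns it weight, which is the kind of baked-in bias critics describe.
    print(dict(zip(
        ["prior_referrals", "household_size", "disability_benefits", "prior_case"],
        model.coef_[0].round(2),
    )))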

Although the county told AP that social workers can override the tool’s recommendations and that the algorithm has been updated “several times” to remove disabilities-related data points, critics worry that the screening tool may still be automating discrimination. This is particularly concerning because the Pennsylvania algorithm has inspired similar tools used in California and Colorado, AP reported. Oregon stopped using its family-screening tool over similar concerns that its algorithm may be exacerbating racial biases in its child welfare data.

Keep reading