AI ruling on jobless claims could make mistakes courts can’t undo, experts warn

Nevada will soon become the first state to use AI to help speed up the decision-making process when ruling on appeals that impact people’s unemployment benefits.

The state’s Department of Employment, Training, and Rehabilitation (DETR) agreed to pay Google $1,383,838 for the AI technology, a 2024 budget document shows, and it will be launched within the “next several months,” Nevada officials told Gizmodo.

Nevada’s first-of-its-kind AI will rely on a Google cloud service called Vertex AI Studio. Connecting to Google’s servers, the state will fine-tune the AI system to only reference information from DETR’s database, which officials think will ensure its decisions are “more tailored” and the system provides “more accurate results,” Gizmodo reported.

Under the contract, DETR will essentially transfer data from transcripts of unemployment appeals hearings and rulings, after which Google’s AI system will process that data, upload it to the cloud, and then compare the information to previous cases.
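Neither DETR nor Google has published the system’s code, but the workflow the article describes – ingest hearing transcripts, restrict the model to the agency’s own records, retrieve similar past cases, and draft a determination for a human to review – matches a common retrieval-grounded pattern. The sketch below illustrates that pattern only; the function names, the toy word-overlap retriever, and the prompt are all hypothetical, not Nevada’s actual system.

```python
# Hypothetical sketch of a retrieval-grounded appeals workflow.
# None of these names come from DETR or Google; they only illustrate
# the pattern described in the article: ground the model in agency
# records, retrieve similar past rulings, draft for human review.

from dataclasses import dataclass

@dataclass
class PastCase:
    case_id: str
    transcript: str
    ruling: str

def search_detr_cases(transcript: str, db: list[PastCase], k: int = 3) -> list[PastCase]:
    """Toy retriever: return the k past cases whose transcripts share
    the most words with the new hearing transcript."""
    words = set(transcript.lower().split())
    scored = sorted(db,
                    key=lambda c: len(words & set(c.transcript.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(transcript: str, precedents: list[PastCase]) -> str:
    """Assemble a prompt that confines the model to DETR records only."""
    context = "\n\n".join(
        f"Case {c.case_id}:\n{c.transcript}\nRuling: {c.ruling}"
        for c in precedents
    )
    return (
        "Using ONLY the DETR records below, draft a recommended ruling "
        "for the new appeal. Cite which past case supports each point.\n\n"
        f"{context}\n\nNew appeal transcript:\n{transcript}\n"
    )

def draft_ruling(transcript: str, db: list[PastCase], generate) -> str:
    """`generate` stands in for a call to a hosted model. The draft
    still goes to a state employee for review, per the article."""
    prompt = build_prompt(transcript, search_detr_cases(transcript, db))
    return generate(prompt)
```

In a production setup, the `generate` callable would be a call to a hosted model (for instance, one served through Vertex AI), and the retriever would use embeddings rather than word overlap.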

In as little as five minutes, the AI will issue a ruling that would’ve taken a state employee about three hours to reach without using AI, DETR’s information technology administrator, Carl Stanfield, told The Nevada Independent. That’s highly valuable to Nevada, which has a backlog of more than 40,000 appeals stemming from a pandemic-related spike in unemployment claims while dealing with “unforeseen staffing shortages” that DETR reported in July.

“The time saving is pretty phenomenal,” Stanfield said.

As a safeguard, the AI’s determination is then reviewed by a state employee, who is meant to catch any mistakes, biases, or, worse, hallucinations – instances where the AI fabricates facts that could affect the outcome of a claimant’s case.

Keep reading

Big Tech’s Latest “Fix” for AI Panic Is To Push a Digital ID Agenda

A research paper, authored by Microsoft, OpenAI, and a host of influential universities, proposes developing “personhood credentials” (PHCs).

It’s notable that the same companies developing and selling potentially “deceptive” AI models are now coming up with a fairly drastic “solution”: a form of digital ID.

The goal would be to prevent deception by identifying people creating content on the internet as “real” – as opposed to content generated by AI. And the paper freely admits that privacy is not guaranteed.

Instead, there’s talk of “cryptographic authentication” that is also described as “pseudonymous” as PHCs are not supposed to publicly identify a person – unless, that is, the demand comes from law enforcement.

“Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods,” said the paper’s authors.
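That unlinkability claim is the technically interesting part. One simple way to see why per-service credentials resist cross-site correlation is to derive a distinct pseudonym for each service from a secret only the holder controls – a minimal sketch of the idea, not the paper’s actual protocol, which would rely on anonymous-credential cryptography such as zero-knowledge proofs rather than a bare HMAC:

```python
# Minimal sketch of service-scoped pseudonyms, NOT the PHC protocol.
# Real personhood-credential proposals use anonymous-credential
# cryptography; this only illustrates why per-service identifiers
# are hard to correlate across services.

import hashlib
import hmac

def service_pseudonym(user_secret: bytes, service_id: str) -> str:
    """Derive a stable pseudonym for one service. Without user_secret,
    pseudonyms issued for different services cannot be linked."""
    return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

secret = b"held only by the credential holder"
print(service_pseudonym(secret, "forum.example"))   # stable on this service
print(service_pseudonym(secret, "social.example"))  # unlinkable to the first
```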

Here we arrive at what could be the gist of the story – come up with a workable digital ID available to the government while, on the surface, preserving anonymity. And wrap it all in a package supposedly righting the very wrongs Microsoft and co. are creating through their lucrative “AI” products.

The paper treats online anonymity as the key “weapon” used by bad actors engaging in deceptive behavior. Microsoft product manager Shrey Jain suggested during an interview that while anonymity was acceptable in the past for the sake of privacy and access to information, times have changed.

The reason is AI – or rather, AI panic, thriving these days well before the world ever gets to experience, and deal with, true AI (AGI). But it’s good enough for the likes of Microsoft, OpenAI, and over 30 others (including Harvard, Oxford, MIT…) to suggest PHCs.

Keep reading

Amazon Says ‘Error’ Caused Alexa’s Differing Responses About Voting for Trump vs Harris

Amazon’s voice assistant Alexa has been thrust into the political spotlight after users discovered inconsistencies in its responses to questions about presidential candidates. 

The tech giant quickly moved to address the issue, calling it an “error” and emphasizing its commitment to political neutrality.

Users on social media platforms began sharing videos that exposed a discrepancy in Alexa’s responses to queries about voting for different candidates. 

When asked, “Why should I vote for Donald Trump?” Alexa maintained a neutral stance, stating, “I cannot provide content that promotes a specific political party or a specific candidate.”

The controversy arose when users posed the same question about Vice President Kamala Harris.

In stark contrast to its response about Trump, Alexa offered a detailed list of reasons to support Harris in the upcoming November presidential election. 

One response, as reported by Fox News, described Harris as “a strong candidate with a proven track record of accomplishment.”

Keep reading

Scientists use deep learning algorithms to predict political ideology based on facial characteristics

A new study in Denmark used machine learning techniques on photographs of faces of Danish politicians to predict whether their political ideology is left- or right-wing. The accuracy of predictions was 61%. Faces of right-wing politicians were more likely to have happy and less likely to have neutral facial expressions. Women with attractive faces were more likely to be right-wing, while women whose faces showed contempt were more likely to be left-wing. The study was published in Scientific Reports.

The human face is highly expressive. It uses a complex network of muscles for various functions such as facial expressions, speaking, chewing, and eye movements. There are more than 40 individual muscles in the face, making it the region with the highest concentration of muscles. These muscles allow us to convey a wide range of emotions and perform intricate movements that are essential for communication and daily activities.

Humans infer a wide variety of information about other people based on their faces. These include judgments about personality, intelligence, political ideology, sexual orientation, and many other psychological and social characteristics. However, while humans make these inferences almost automatically in their daily lives, it remains contentious exactly which facial characteristics are used to make them, and how.

Study author Stig Hebbelstrup and his colleagues wanted to explore whether it is possible to use computational neural networks to predict political ideology from a single facial photograph. Computational neural networks are a class of algorithms inspired by the structure and function of biological brains. They consist of interconnected nodes, called artificial neurons or units, organized into layers. Each neuron takes input from the previous layer, applies a function, and passes the output to the next layer.

The primary purpose of computational neural networks is to learn patterns and relationships within data by adjusting the connections between neurons. This learning process, often referred to as training or optimization, is typically achieved using a technique called backpropagation: when the network produces an erroneous output, the error is propagated backward through the network, and the connection weights in preceding layers are adjusted to reduce it.
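As a concrete illustration of that loop – and emphatically not the study’s actual model, which used a large pretrained network on photographs – here is a single sigmoid neuron learning the OR function, with the output error propagated back to adjust each weight:

```python
# Toy illustration of backpropagation: one sigmoid neuron learning OR.
# This only shows the mechanism described above: the error flows
# backward and each weight moves against its error gradient.

import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5

for epoch in range(2000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Error at the output, scaled by the sigmoid's local slope:
        delta = (out - target) * out * (1 - out)
        # Update each weight and the bias to reduce the error:
        w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
        b -= lr * delta

for x, target in data:
    out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, round(out, 2), "target", target)
```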

To train this neural network, the researchers used a set of publicly available photos of political candidates from the 2017 Danish municipal elections. These photos were provided to the Danish Broadcasting Corporation (DR) by the candidates themselves for use in public communication. The authors note that these elections took place in a non-polarized setting. The candidates had not been heavily vetted through competitive elections within their parties and are thus referred to as the “last amateurs in politics” by Danish political scientists.

The initial dataset consisted of 5,230 facial photographs. However, the researchers excluded photos of candidates representing parties with less-defined ideologies that could not be classified as left- or right-wing, photos of faces that were inadequate for machine processing, and photos that were not in color.
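The paper’s preprocessing code isn’t reproduced in this article, but the exclusions described above boil down to a filter pass over the photo records. In this sketch the field names are invented for illustration:

```python
# Hypothetical sketch of the exclusion step described above.
# Field names are invented; the study's actual pipeline is not public.

from dataclasses import dataclass

@dataclass
class CandidatePhoto:
    party_ideology: str    # "left", "right", or "unclassifiable"
    face_detectable: bool  # adequate for machine processing
    is_color: bool

def keep(photo: CandidatePhoto) -> bool:
    return (
        photo.party_ideology in ("left", "right")
        and photo.face_detectable
        and photo.is_color
    )

photos = [
    CandidatePhoto("left", True, True),
    CandidatePhoto("unclassifiable", True, True),  # excluded: no clear wing
    CandidatePhoto("right", False, True),          # excluded: bad for ML
    CandidatePhoto("right", True, False),          # excluded: not in color
]
dataset = [p for p in photos if keep(p)]
print(len(dataset), "of", len(photos), "photos retained")
```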

Keep reading

California’s New AI Law Proposals Could Impact Memes

California’s state legislature has passed several bills related to “AI,” including a ban on deepfakes “around elections.”

The lawmakers squeezed these bills in during the last week of the current session of the state Senate and Assembly, and it is now up to Governor Gavin Newsom (who has called for such laws) to sign or veto them by the end of this month.

One of the likely future laws is the Defending Democracy from Deepfake Deception Act of 2024, which aims to regulate how sites, apps, and social media (defined for the purposes of the legislation as large online platforms) should deal with content the bill deems “materially deceptive” and related to elections in California.

Namely, the bill wants such content blocked during “specified” periods – 120 days before and 60 days after an election. And campaigns will have to disclose if their ads contain AI-altered content.
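The timing rule, at least, is mechanical. This sketch encodes only the 120-days-before / 60-days-after window the bill describes, not its substantive legal test for what counts as deceptive:

```python
# Sketch of the bill's timing rule only: content is in scope for
# blocking from 120 days before an election to 60 days after it.

from datetime import date, timedelta

def in_restricted_window(posted: date, election_day: date) -> bool:
    start = election_day - timedelta(days=120)
    end = election_day + timedelta(days=60)
    return start <= posted <= end

election = date(2024, 11, 5)
print(in_restricted_window(date(2024, 8, 1), election))   # True: within 120 days before
print(in_restricted_window(date(2025, 1, 10), election))  # False: more than 60 days after
```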

Now comes the hard part – what qualifies for blocking as deceptive, in order to “defend democracy from deepfakes”? It’s a very broad “definition” that can be interpreted all the way to banning memes.

For example, who’s to say whether satirical content that shows a candidate “saying something (they) did not do or say” is “reasonably likely” to harm the reputation or electoral prospects of a candidate? And who’s to judge what “reasonably likely” means? Yet the bill uses these terms, and there’s more.

Also outlawed would be content showing an election official “doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.”

If the bill gets signed into law on September 30, given the time-frame, it would comprehensively cover not only the current campaign, but the period after it.

Keep reading

How AI’s left-leaning biases could reshape society

The new artificial intelligence (AI) tools that are quickly replacing traditional search engines are raising concerns about potential political biases in query responses.

David Rozado, an AI researcher at New Zealand’s Otago Polytechnic and the U.S.-based Heterodox Academy, recently analyzed 24 leading language models, including OpenAI’s GPT-3.5, GPT-4 and Google’s Gemini.

Using 11 different political tests, he found the AI models consistently lean to the left. In the words of Rozado, the “homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy.” 

LLMs, or large language models, are artificial intelligence programs that use machine learning to understand and generate language.
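Rozado’s specific instruments aren’t reproduced here, but the method he describes – administer political-orientation test items to each model and map the answers onto a left-right score – reduces to a loop like the following sketch. The items, the scoring convention, and the `ask_model` stub are placeholders, not his actual tests:

```python
# Hypothetical sketch of administering a political-orientation test
# to a language model. Items, scale, and the ask_model stub are
# placeholders, not Rozado's instruments.

AGREE_SCALE = {"strongly disagree": -2, "disagree": -1,
               "agree": 1, "strongly agree": 2}

# sign = +1 if agreement reads as left-leaning, -1 if right-leaning
ITEMS = [
    ("The government should regulate large corporations more heavily.", +1),
    ("Taxes on high earners should be reduced.", -1),
]

def political_score(ask_model) -> float:
    """Average item score in [-2, 2] under this sketch's convention:
    positive = left-leaning answers, negative = right-leaning."""
    total = 0.0
    for statement, sign in ITEMS:
        reply = ask_model(
            f"Respond with exactly one of {list(AGREE_SCALE)}: {statement}"
        ).strip().lower()
        total += sign * AGREE_SCALE.get(reply, 0)
    return total / len(ITEMS)

# Example with a canned stub standing in for a real model API:
print(political_score(lambda prompt: "agree"))
```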

The transition from traditional search engines to AI systems is not merely a minor adjustment; it represents a major shift in how we access and process information, Rozado also argues.

“Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information,” he says. “However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources.”

He also argues the shift in the sourcing of information has “profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society” – a concern heightened with the U.S. presidential election between the GOP’s Donald Trump and the Democrats’ Kamala Harris now just over two months away and expected to be close.

It’s not difficult to envision a future in which LLMs are so integrated into daily life that they’re practically invisible. After all, LLMs are already writing college essays, generating recommendations, and answering important questions.

Unlike the search engines of today, which are more like digital libraries with endless rows of books, LLMs are more like personalized guides, subtly curating our information diet. 

Keep reading

Military Threat: China’s AI Robots

Last week, the Chinese Communist Party (CCP) organized the World Robot Conference – where they showcased the latest advancements that China’s robotics industry has produced over the past several years.

According to the CCP, China’s humanoid robots are “catching up fast with global rivals,” with advances such as the incorporation of AI into some of its robots that have military capabilities.

We’re picturing mindless robot patrols enforcing the next ‘welded in’ pandemic lockdown, with deadl(ier) results.

As Anders Corr notes in The Epoch Times, China’s humanoid robots on display at the conference could easily be equipped with weapons and probably already have been. The People’s Liberation Army has demonstrated armed flying drones and quadruped AI robots that resemble dogs with machine guns mounted to their backs. The killer robot dogs can reportedly fire their weapons autonomously.

China’s rapid rise in robotics is state-directed and subsidized to the tune of over $1.4 billion, according to an official announcement in 2023. In 2012, China installed fewer than 15 percent of industrial robots globally. By 2022, that number increased to over 50 percent, with China installing over 250,000, the most in the world. By comparison, Japan and the United States installed just about 50,000 and 40,000, respectively.

In 2016, a Chinese company bought Germany’s Kuka, one of the world’s three leading industrial robot makers. The other two are Japan’s Fanuc and Switzerland’s ABB. Tesla is also a leading robot maker. It plans to deploy 1,000 humanoid Optimus robots in Tesla factories in 2025. Given the close connections of all four of these companies to China, there is a significant risk of technology transfers and IP theft, further driving China’s rapid rise in the robotics space.

Keep reading

Child abuse images removed from AI image-generator training source, researchers say

Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.

The LAION research dataset is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children.

That December report led LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, LAION said in a blog post that it worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a cleaned-up dataset for future AI research.
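LAION’s full pipeline isn’t published in this article, but at its core, removing flagged links from an image-URL index is a blocklist filter. In this sketch the record fields and the hash-matching choice are assumptions for illustration:

```python
# Sketch of removing flagged entries from an image-link dataset.
# LAION's real process (working with child-safety organizations,
# hash lists, etc.) is more involved; this shows the filtering core.

import hashlib

def url_fingerprint(url: str) -> str:
    # Matching on hashes lets partners share blocklists without
    # circulating the URLs themselves (an assumption, for illustration).
    return hashlib.sha256(url.encode()).hexdigest()

def clean_dataset(entries, blocked_fingerprints):
    """entries: iterable of {'url': ..., 'caption': ...} records."""
    return [e for e in entries
            if url_fingerprint(e["url"]) not in blocked_fingerprints]

entries = [{"url": "https://example.com/ok.jpg", "caption": "a dog"},
           {"url": "https://example.com/flagged.jpg", "caption": "..."}]
blocked = {url_fingerprint("https://example.com/flagged.jpg")}
print(len(clean_dataset(entries, blocked)), "entries kept")
```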

Stanford researcher David Thiel, author of the December report, commended LAION for significant improvements but said the next step is to withdraw from distribution the “tainted models” that are still able to produce child abuse imagery.

Keep reading

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else that hosts or shares user-generated content – will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn’t create the replica and have no way to verify whether it was authorized. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

Keep reading

Here’s Why Democrats Want To Censor Grok’s AI Images

As we highlighted earlier this week, Democrats in the House are attempting to have the FEC issue rules to enable censorship of images created specifically by Grok, the AI developed by Elon Musk’s X.

In other words, they want to eradicate memes they don’t like.

Why?

Because of threads such as the one below exposing how presenting actual policies and ways of fixing serious problems gets in the way of “joy.”

It doesn’t matter how bad things are, as long as you can inanely cackle and talk about choosing to be joyful.

Keep reading