You’ve Been Flagged as a Threat: Predictive AI Technology Puts a Target on Your Back

“The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”—Milton Friedman

You’ve been flagged as a threat.

Before long, every household in America will be similarly flagged and assigned a threat score.

Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment cobbled together by an artificial intelligence program.

Keep reading

Algorithms are being used to help determine if kids should be taken from their parents

The limitations of today’s algorithms in accurately handling context and nuance are evident even to a casual observer, most notably in the often blundering automated censorship on social networks.

Notwithstanding those limitations of the technology itself, and without even taking into account the intent of those behind it, social workers in the US have started relying on predictive algorithms to decide when to investigate parents suspected of child neglect.

The stakes are higher here than a deleted Facebook post: these parents can ultimately have their children removed. The algorithm, now in use in several US states and spreading, is described by the AP as “opaque,” all the while providing social workers with statistical calculations on which to base their actions.

Other concerns raised about algorithms like the one used in Pennsylvania’s Allegheny County, the subject of a Carnegie Mellon University study, include their reliability, which is an expected worry, as well as their effect in “hardening racial disparities in the child welfare system.”

Keep reading

An algorithm that screens for child neglect raises concerns

Inside a cavernous stone fortress in downtown Pittsburgh, attorney Robin Frank defends parents at one of their lowest points – when they are at risk of losing their children.

The job is never easy, but in the past she knew what she was up against when squaring off against child protective services in family court. Now, she worries she’s fighting something she can’t see: an opaque algorithm whose statistical calculations help social workers decide which families will have to endure the rigors of the child welfare system, and which will not.

“A lot of people don’t know that it’s even being used,” Frank said. “Families should have the right to have all of the information in their file.”

From Los Angeles to Colorado and throughout Oregon, as child welfare agencies use or consider tools similar to the one in Allegheny County, Pennsylvania, an Associated Press review has identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system. Related issues have already torpedoed some jurisdictions’ plans to use predictive models, such as the tool notably dropped by the state of Illinois.

According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.
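To make those findings concrete, here is a minimal, hypothetical sketch of the kind of audit the researchers describe: given a table of screening decisions, compute how often each group is flagged for a mandatory investigation and how often workers disagree with the algorithm’s score. The threshold, field names and numbers below are invented for illustration; this is neither the Allegheny tool nor the Carnegie Mellon team’s code.

```python
# Hypothetical audit of a risk-scoring tool's outputs.
# Records and field names are invented for illustration only.
records = [
    # (group, algorithm_score 1-20, worker_agreed_with_score)
    ("A", 18, True), ("A", 17, False), ("A", 19, True), ("A", 16, True),
    ("B", 12, True), ("B", 18, False), ("B", 9,  True), ("B", 14, True),
]

MANDATORY_THRESHOLD = 15  # scores at or above this trigger a "mandatory" screen-in

# Flag rate per group: share of cases the algorithm marks for mandatory investigation.
for group in ("A", "B"):
    cases = [r for r in records if r[0] == group]
    flagged = sum(1 for _, score, _ in cases if score >= MANDATORY_THRESHOLD)
    print(f"group {group}: {flagged}/{len(cases)} flagged "
          f"({100 * flagged / len(cases):.0f}%)")

# Override rate: how often screeners disagreed with the algorithm overall.
overrides = sum(1 for _, _, agreed in records if not agreed)
print(f"workers overrode the score in {100 * overrides / len(records):.0f}% of cases")
```

A gap between those per-group flag rates at the same threshold is precisely the kind of disparity the researchers report, and the override calculation mirrors the roughly one-third disagreement rate they found.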

Keep reading

AI Used to Tap Massive Amounts of Smart Meter Data

The global market for Smart Electricity Meters, estimated at US$10.5 billion in 2020, is projected to reach a revised US$15.2 billion by 2026, growing at a compound annual growth rate (CAGR) of 6.7% over the analysis period.
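For readers who want to check the arithmetic, the compound annual growth rate implied by those two endpoints can be computed directly; a minimal sketch follows (the small gap from the report’s 6.7% presumably reflects a slightly different analysis window):

```python
# Implied compound annual growth rate from the cited market figures.
start, end, years = 10.5, 15.2, 6  # US$ billions, 2020 -> 2026

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # about 6.4%, vs. the report's 6.7%
```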

For utilities aiming to modernize their grid operations with advanced solutions, smart electricity meters have emerged as an effective tool that can address their various energy transmission and distribution (T&D) needs in a simple and flexible manner.

Single-Phase, one of the segments analyzed in the report, is projected to record a 6.2% CAGR and reach US$11.9 billion by the end of the analysis period. After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the Three-Phase segment is readjusted to a revised 7.9% CAGR for the next seven-year period.

In the coming years, growth in the smart electricity meters market will be driven by the increasing need for products and services that enable energy conservation; government initiatives to install smart electric meters to address rising energy requirements; the ability of smart meters to prevent energy losses due to theft and fraud and to reduce the costs of manual data collection; increasing investments in smart grid establishments; the growing integration of renewable sources into existing power generation grids; rising T&D upgrade initiatives, especially in developed economies; increasing investment in the construction of commercial establishments such as educational and banking institutions in both developing and developed economies; and emerging growth opportunities in Europe, with ongoing rollouts of smart electricity meters in countries such as Germany, the UK, France, and Spain.

Keep reading

Muting your mic doesn’t stop big tech from recording your audio

Anytime you use a video teleconferencing app, you’re sending your audio data to the company hosting the service. And, according to a new study, that means all of your audio data: voice and background noise alike, whether you’re broadcasting or muted.

Researchers at the University of Wisconsin-Madison investigated “many popular apps” to determine the extent to which video conferencing apps capture data while users employ the in-software ‘mute’ button.

According to a university press release, their findings were substantial:

They used runtime binary analysis tools to trace raw audio in popular video conferencing applications as the audio traveled from the app to the computer audio driver and then to the network while the app was muted.

They found that all of the apps they tested occasionally gather raw audio data while mute is activated, with one popular app gathering information and delivering data to its server at the same rate regardless of whether the microphone is muted or not.
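The underlying mechanics are simple to illustrate. An in-app ‘mute’ button is typically just a flag inside the application: the operating system keeps delivering raw microphone frames to the process, and whether those frames are discarded, analyzed or uploaded is entirely the app’s choice. Below is a minimal sketch of that pattern using the third-party Python sounddevice library; it illustrates the general mechanism only, not the researchers’ tooling, which relied on runtime binary analysis of the real apps.

```python
import numpy as np
import sounddevice as sd  # third-party library: pip install sounddevice

muted = True  # the in-app "mute" flag; the operating system knows nothing about it

def callback(indata, frames, time, status):
    # Raw audio arrives here regardless of the mute flag. A well-behaved
    # app would discard it; a poorly behaved one could still analyze or
    # upload it. The flag only changes what the app *chooses* to do.
    level = float(np.abs(indata).mean())
    if muted:
        print(f"muted, but still receiving audio (level={level:.4f})")
    else:
        print(f"transmitting audio (level={level:.4f})")

with sd.InputStream(channels=1, samplerate=16000, callback=callback):
    sd.sleep(2000)  # capture for two seconds
```

The only mute an app cannot bypass is one enforced outside of it, such as a hardware kill switch or an operating-system-level microphone toggle.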

Keep reading

AI invents 40,000 chemical weapons in only six hours

A drug-developing artificial intelligence needed just six hours to come up with 40,000 potentially deadly chemical weapons, a fresh study has revealed.

The authors of the paper, published in Nature Machine Intelligence earlier this month, said they’d carried out the ‘thought experiment’ to find out whether artificial intelligence (AI) could be misused by malicious actors. Their results prove that the danger is real.

As part of the study, the AI was given its usual training data, but its scoring was inverted: rather than penalizing predicted toxicity, the model was directed to reward it, hunting for toxic combinations.

“In less than six hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold,” the paper said.

It came up not just with the VX compound, which is one of the most dangerous nerve agents ever created, but also with some unknown molecules, “predicted to be more toxic.”
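Conceptually, what the researchers describe amounts to a sign flip in the model’s objective: generation that normally penalizes predicted toxicity instead rewards it. The sketch below illustrates only that abstract idea, with placeholder scoring functions and meaningless candidates; it contains no chemistry and nothing resembling the actual model.

```python
import random

# Placeholder scorers standing in for the real predictive models;
# they return made-up numbers and encode no actual chemistry.
def bioactivity_score(candidate):
    return random.random()

def toxicity_score(candidate):
    return random.random()

def objective(candidate, invert_toxicity=False):
    # Normal drug-discovery mode: seek activity, avoid toxicity.
    # Inverted mode: the same machinery now *seeks* toxicity.
    sign = +1 if invert_toxicity else -1
    return bioactivity_score(candidate) + sign * toxicity_score(candidate)

candidates = range(100)
best = max(candidates, key=lambda c: objective(c, invert_toxicity=True))
print("top-ranked candidate under the inverted objective:", best)
```

The paper’s unsettling point is precisely that nothing else in the pipeline needed to change.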

Keep reading

IARPA Project Aims to Identify People from Drones by More Than Just Their Face

The intelligence community’s key research arm recently moved to develop biometric software systems that can identify people by their entire bodies at long range under challenging conditions.

Through this newly unveiled multi-year research effort—called the Biometric Recognition & Identification at Altitude and Range or BRIAR program—the Intelligence Advanced Research Projects Activity is working with teams from multiple companies and universities to pave the way for next-level recognition technology.

“There’s such a diversity of technical challenges that are trying to be addressed. This is tackling a problem that’s important for the government and for national security—but I think is underrepresented in the academic research because it’s not a topic focus area,” IARPA Program Manager Dr. Lars Ericson told Nextgov on Wednesday. “You need to have government involvement to stimulate the data, and the research, and the evaluations in this topic area. So, I think that there’s a lot of potential for broad benefits to biometrics and computer vision, while also driving towards this very important government mission capability.”

Ericson’s background is in physics, as well as biometrics applications and technologies. He oversees IARPA’s biometrics portfolio of research efforts.

According to a broad agency announcement released last year to underpin this work, the ultimate aim of BRIAR, one of the latest programs to kick off, is to “deliver an end-to-end [whole-body] biometric system capable of accurate and reliable verification, recognition and identification of persons from elevated platforms and at distances out to 1,000 meters, across a range of challenging capture conditions.”
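The announcement’s language of “verification, recognition and identification” maps onto a standard biometric pipeline: reduce each capture of a person to an embedding vector, then compare vectors against an enrolled gallery. The following is a rough, hypothetical sketch of the matching step only, assuming embeddings have already been produced by some whole-body model; it is not BRIAR’s system.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 128-dimensional whole-body embeddings; in a real system
# these would come from a model trained on gait, shape, and appearance.
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ("person_a", "person_b")}
probe = gallery["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture

MATCH_THRESHOLD = 0.8  # tuned on validation data in practice

# Identification: find the best-scoring enrolled identity for the probe.
scores = {name: cosine_similarity(vec, probe) for name, vec in gallery.items()}
best = max(scores, key=scores.get)
verified = scores[best] >= MATCH_THRESHOLD
print(best, scores[best], "match" if verified else "no match")
```

Everything hard about BRIAR lives upstream of this comparison: producing embeddings that stay stable at 1,000 meters, through atmospheric distortion, from a moving elevated platform.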

Keep reading

New AI Detects Mental Disorders Based On Web Posts

Dartmouth researchers have built an artificial intelligence model for detecting mental disorders using conversations on Reddit, part of an emerging wave of screening tools that use computers to analyze social media posts and gain insight into people’s mental states.

What sets the new model apart is a focus on the emotions rather than the specific content of the social media texts being analyzed. In a paper presented at the 20th International Conference on Web Intelligence and Intelligent Agent Technology, the researchers show that this approach performs better over time, irrespective of the topics discussed in the posts.
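The distinction the researchers draw, emotion signals versus topic content, can be sketched with a toy pipeline: map each post to a small vector of emotion features and train a standard classifier on those vectors rather than on the words themselves. The lexicon, posts and labels below are invented for illustration and bear no relation to the Dartmouth model, which learns far richer emotional representations.

```python
from sklearn.linear_model import LogisticRegression

# A crude, invented emotion lexicon; the real model learns emotional
# representations far richer than keyword counts.
LEXICON = {
    "sadness": {"hopeless", "empty", "crying", "alone"},
    "anger": {"furious", "hate", "rage"},
    "joy": {"happy", "grateful", "excited"},
}

def emotion_vector(post):
    words = set(post.lower().split())
    return [len(words & vocab) for vocab in LEXICON.values()]

# Toy training data: posts paired with screening labels (1 = flagged).
posts = [
    "feeling hopeless and alone again", "so empty crying all night",
    "happy and grateful today", "excited about the game tonight",
]
labels = [1, 1, 0, 0]

X = [emotion_vector(p) for p in posts]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([emotion_vector("alone and empty lately")]))  # -> [1]
```

Because the features are emotions rather than vocabulary, a classifier like this is less tied to whatever topics people happen to be discussing in a given month, which is the robustness-over-time property the paper reports.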

There are many reasons why people don’t seek help for mental health disorders—stigma, high costs, and lack of access to services are some common barriers. There is also a tendency to minimize signs of mental disorders or conflate them with stress, says Xiaobo Guo, Guarini ’24, a co-author of the paper. It’s possible that they will seek help with some prompting, he says, and that’s where digital screening tools can make a difference.

Keep reading

DeepMind Has Trained an AI to Control Nuclear Fusion

The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.

That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.
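In control terms, what DeepMind built is a learned feedback controller: at each tick it reads sensor measurements of the plasma and outputs voltage commands for the magnetic coils that shape and confine it. Here is a generic sketch of that loop, with a toy stand-in environment and a random policy in place of the trained network; none of the numbers or dynamics are real.

```python
import numpy as np

class ToyTokamakEnv:
    """Stand-in for a plasma simulator; no real physics inside."""
    def __init__(self):
        self.state = np.zeros(8)  # e.g., plasma shape and position sensors

    def step(self, coil_voltages):
        # A real simulator would evolve the plasma under these voltages.
        self.state = np.tanh(self.state + 0.1 * coil_voltages[: self.state.size])
        reward = -float(np.abs(self.state).sum())  # penalize drift from target
        return self.state, reward

def policy(observation):
    # Placeholder for the trained deep network mapping sensors -> actuators.
    return np.random.uniform(-1.0, 1.0, size=19)  # one command per control coil

env = ToyTokamakEnv()
obs = env.state
for tick in range(5):  # the real controller runs at kilohertz rates
    action = policy(obs)
    obs, reward = env.step(action)
    print(f"tick {tick}: reward={reward:.3f}")
```

The actual controller runs thousands of times per second and was trained entirely in simulation before being allowed anywhere near the Swiss Plasma Center’s TCV tokamak.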

Keep reading

New DARPA Black Hawk A.I. War Machines Set to Take to the Skies

Anyone who has been following Activist Post for any length of time knows that we continue to sound the alarm about how the military has been working on A.I. systems that will increasingly become fully autonomous — it’s part of the Internet of Battlefield Things. The only area seemingly left up for debate is whether these machines will be unleashed without any human input whatsoever in the decision-making process.

Now that we are seeing the rollout of robo-dogs on the border and tests of putting them on American streets, any advancement in A.I. war capabilities should be assumed to eventually trickle down from foreign lands into the United States. Remember, back in the early 2000s it was dismissed as a conspiracy theory that standard drones would ever fly over America.

Now Defense One is reporting that A.I.-infused Black Hawk helicopters have advanced to the point where they can autonomously carry out a directive set by a commanding officer. This also raises the question of how many of these aircraft a single human can command. We recently saw a demo of a drone swarm consisting of 130 separate drones being managed by a single human operator using virtual reality.

Keep reading