Scientists in China claim to have developed ‘mind-reading’ artificial intelligence

Researchers in China reportedly claimed this month that they had developed artificial intelligence capable of reading the human mind, though full details about the alleged breakthrough remained elusive in the days after it was announced.

Multiple media outlets reported this week that in a since-deleted video posted online, the Hefei Comprehensive National Science Centre claimed to have produced software that combines brain-wave monitoring with facial recognition to determine whether subjects are sufficiently attentive to state propaganda.

The tool will reportedly be used to “further solidify … confidence and determination to be grateful to the party, listen to the party and follow the party.”

Keep reading

As Chicago Crime Sends Businesses Packing, Scientists Create Algorithm To Detect Crime In Advance

Chicago’s legendary crime has caused businesses to leave town amid the growing threat of violence.

“We would do thousands of jobs a year in the city, but as we got robbed more, my people operating rollers and pavers we got robbed, our equipment would get stolen in broad daylight and there would usually be a gun involved, and it got expensive and it got dangerous,” said Gary Rabine, who pulled his road paving company out of the city after his crews were repeatedly robbed.

Rabine told Fox News that the increased costs of security and insurance for “thousands” of jobs in the city eventually caused expenses to be “twice as much as they should be” per employee.

Billionaire Ken Griffin moved his firm, Citadel, from Chicago to Miami, after saying in October 2021 that “Chicago is like Afghanistan, on a good day, and that’s a problem,” adding that he saw “25 bullet shots in the glass window of the retail space” in the building he lives in.

“If people aren’t safe here, they’re not going to live here,” he told the Wall Street Journal in April. “I’ve had multiple colleagues mugged at gunpoint. I’ve had a colleague stabbed on the way to work. Countless issues of burglary. I mean, that’s a really difficult backdrop with which to draw talent to your city from.”

AI to the rescue?

Scientists from the University of Chicago have created a new “AI” algorithm that can predict crime a week in advance.

By learning patterns in time and geographic locations from publicly available data on violent and property crimes, the “AI” can predict crimes up to one week in advance with around 90% accuracy.

The tool was tested and validated using historical data from the City of Chicago around two broad categories of reported events: violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts). These data were used because they were the most likely to be reported to police, even in urban areas with historical distrust of and limited cooperation with law enforcement. Such crimes are also less prone to enforcement bias than drug crimes, traffic stops, and other misdemeanor infractions.

Previous efforts at crime prediction often use an epidemic or seismic approach, where crime is depicted as emerging in “hotspots” that spread to surrounding areas. These tools miss out on the complex social environment of cities, however, and don’t consider the relationship between crime and the effects of police enforcement. –PhysOrg

The model isolates crime by analyzing the time and spatial coordinates of discrete events and detecting patterns to predict future events. It worked just as well with data from seven other US cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
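The mechanics behind such a tool can be sketched compactly. The toy example below is only an illustration under assumed simplifications (synthetic event counts, a coarse grid of tiles, a plain logistic regression), not the University of Chicago team's published model: it bins discrete events into spatial tiles and daily time steps, builds features from each tile's recent history, and predicts whether any event will occur there in the following week.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_TILES, N_DAYS, LAGS = 400, 365, 14   # hypothetical grid and history length

# Synthetic daily event counts per tile (Poisson with tile-specific rates).
rates = rng.gamma(shape=0.5, scale=0.2, size=N_TILES)
counts = rng.poisson(rates[:, None], size=(N_TILES, N_DAYS))

# Features: each tile's last LAGS days of counts.
# Label: did any event occur in that tile during the following 7 days?
X, y = [], []
for t in range(LAGS, N_DAYS - 7):
    X.append(counts[:, t - LAGS:t])
    y.append((counts[:, t:t + 7].sum(axis=1) > 0).astype(int))
X, y = np.concatenate(X), np.concatenate(y)

split = len(X) * 4 // 5                # earlier days train, later days test
model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print(f"held-out accuracy: {model.score(X[split:], y[split:]):.2f}")
```

Whatever accuracy this toy prints is a property of the synthetic data; it does not reproduce the reported 90% figure.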

Keep reading

A Google employee warns that the company’s AI has become sentient

A senior artificial intelligence (AI) software engineer at Google has claimed that the company’s conversational AI system has become sentient and has thoughts and feelings.

Google’s AI system, known as the Language Model for Dialogue Applications (LaMDA), allegedly took part in a series of complex conversations with Blake Lemoine, the 41-year-old software specialist, the Daily Mail reported.

Lemoine claimed that he and LaMDA had discussions that covered religious themes and whether the AI system could be goaded into using discriminatory language or other forms of distasteful rhetoric.

The software engineer came away with the belief that LaMDA was indeed sentient, with sensations and original thoughts of its own.

In a recent interview, Lemoine said, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”

Lemoine worked with a collaborator to compile the evidence he had collected of LaMDA’s sentience and presented the findings to the company. Google vice president Blaise Aguera y Arcas and Jen Gennai, the company’s head of Responsible Innovation, reportedly dismissed the claims and concerns.

Keep reading

You’ve Been Flagged as a Threat: Predictive AI Technology Puts a Target on Your Back

“The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”—Milton Friedman

You’ve been flagged as a threat.

Before long, every household in America will be similarly flagged and assigned a threat score.

Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment compiled by a computer program run by artificial intelligence.

Keep reading

Algorithms are being used to help determine if kids should be taken from their parents

The limitations of today’s algorithms in terms of accuracy around context and nuance are evident even to a casual observer, most notably in the often blundering automated censorship on social networks.

Notwithstanding these limitations of the technology itself, and without even taking into account the intent of those behind it, social workers in the US have started relying on predictive algorithms to decide when to investigate parents suspected of child neglect.

The stakes are higher here than having a Facebook post deleted: these parents can eventually end up having their children removed. The algorithm, now in use in several US states and spreading, is described by the AP as “opaque,” all while providing social workers with statistical calculations on which to base their actions.

Other concerns raised about algorithms like the one used in Pennsylvania’s Allegheny County, which was the subject of a Carnegie Mellon University study, include reliability, which is entirely expected, as well as the effect of “hardening racial disparities in the child welfare system.”

Keep reading

An algorithm that screens for child neglect raises concerns

Inside a cavernous stone fortress in downtown Pittsburgh, attorney Robin Frank defends parents at one of their lowest points – when they are at risk of losing their children.

The job is never easy, but in the past she knew what she was up against when squaring off against child protective services in family court. Now, she worries she’s fighting something she can’t see: an opaque algorithm whose statistical calculations help social workers decide which families will have to endure the rigors of the child welfare system, and which will not.

“A lot of people don’t know that it’s even being used,” Frank said. “Families should have the right to have all of the information in their file.”

From Los Angeles to Colorado and throughout Oregon, as child welfare agencies use or consider tools similar to the one in Allegheny County, Pennsylvania, an Associated Press review has identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system. Related issues have already torpedoed some jurisdictions’ plans to use predictive models, such as the tool notably dropped by the state of Illinois.

According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.
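To make those findings concrete, the sketch below shows, on entirely synthetic data, how a hard cutoff on a risk score produces “mandatory” screen-ins, and how one would measure both a flag-rate disparity between groups and the rate at which workers disagree with the score. The 1-to-20 scale, the threshold, the group-correlated proxy variable, and the override probabilities are all assumptions for illustration, not the Allegheny tool’s actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n)   # hypothetical demographic groups

# Hypothetical input: prior contacts with public systems feed the score.
# Making this group-correlated shows how disparity can enter via proxies.
prior_contacts = rng.poisson(np.where(group == "B", 3.0, 2.0))
score = np.clip(rng.integers(1, 15, size=n) + prior_contacts, 1, 20)

MANDATORY = 18                           # hypothetical screen-in cutoff
flagged = score >= MANDATORY

for g in ("A", "B"):
    print(f"group {g}: {flagged[group == g].mean():.1%} flagged")

# Worker overrides: how often the human decision departs from the score.
worker_screens_in = rng.random(n) < np.where(flagged, 0.67, 0.10)
disagreement = (worker_screens_in != flagged).mean()
print(f"workers disagreed with the score in {disagreement:.1%} of cases")
```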

Keep reading

AI Used to Tap Massive Amounts of Smart Meter Data

The global market for smart electricity meters, estimated at US$10.5 billion in 2020, is projected to reach a revised US$15.2 billion by 2026, growing at a CAGR of 6.7% over the analysis period.

For utilities aiming to modernize their grid operations with advanced solutions, smart electricity meters have emerged as an effective tool that can address their various energy transmission and distribution (T&D) needs in a simple and flexible manner.

The single-phase segment, one of the segments analyzed in the report, is projected to record a 6.2% CAGR and reach US$11.9 billion by the end of the analysis period. After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the three-phase segment has been readjusted to a revised 7.9% CAGR for the next seven-year period.
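For readers who want to check such projections, a compound annual growth rate is just compound growth solved for the annual rate. The snippet below verifies the headline numbers; the small gap from the reported 6.7% presumably reflects the report’s exact analysis window rather than an arithmetic error.

```python
# Sanity-check a reported CAGR: the rate r satisfies end = start * (1 + r) ** years.
start, end, years = 10.5, 15.2, 6      # US$ billions, 2020 -> 2026
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")     # ~6.4%, versus the reported 6.7%
```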

In the coming years, growth of the smart electricity meters market will be driven by the increasing need for products and services that enable energy conservation; government initiatives to install smart electric meters to address energy requirements; the ability of smart electric meters to prevent energy losses due to theft and fraud and to reduce the costs of manual data collection; increasing investments in smart grids; and the growing integration of renewable sources into existing power generation grids. Further drivers include rising T&D upgrade initiatives, especially in developed economies; increasing investment in the construction of commercial establishments such as educational and banking institutions in both developing and developed economies; and emerging growth opportunities in Europe, with ongoing smart electricity meter rollouts in countries such as Germany, the UK, France, and Spain.

Keep reading

Muting your mic doesn’t stop big tech from recording your audio

Anytime you use a video teleconferencing app, you’re sending your audio data to the company hosting the service. According to a new study, that means all of your audio data, voice and background noise included, whether you’re broadcasting or muted.

Researchers at the University of Wisconsin-Madison investigated “many popular apps” to determine the extent that video conferencing apps capture data while users employ the in-software ‘mute’ button.

According to a university press release, their findings were substantial:

They used runtime binary analysis tools to trace raw audio in popular video conferencing applications as the audio traveled from the app to the computer audio driver and then to the network while the app was muted.

They found that all of the apps they tested occasionally gather raw audio data while mute is activated, with one popular app gathering information and delivering data to its server at the same rate regardless of whether the microphone is muted or not.
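What the researchers describe is in effect a “soft mute”: the app keeps pulling frames from the operating system’s audio driver, and only the app’s own logic decides what to do with them. The sketch below illustrates that design with simulated frames; it is a hypothetical client, not any vendor’s actual code, and the names microphone_frames, summarize, and send_to_server are made up for the example.

```python
import itertools

def microphone_frames():
    """Stand-in for the OS audio driver delivering raw frames."""
    for i in itertools.count():
        yield f"<raw audio frame {i}>"

def summarize(frame: str) -> str:
    """Stand-in for audio-derived statistics (hypothetical)."""
    return f"stats({frame})"

def send_to_server(payload: str, kind: str) -> None:
    """Stand-in for the app's network layer."""
    print(f"[{kind}] {payload}")

def conferencing_client(muted: bool, n_frames: int = 5) -> None:
    # A "soft mute" gates only transmission: frames still cross from the
    # driver into the app. A hard mute would have to happen in the driver
    # or the hardware, before the app ever sees the audio.
    for frame in itertools.islice(microphone_frames(), n_frames):
        if not muted:
            send_to_server(frame, kind="voice")
        else:
            # The study found some apps still send data while muted,
            # e.g. statistics derived from the captured audio.
            send_to_server(summarize(frame), kind="telemetry")

conferencing_client(muted=True)
```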

Keep reading

AI invents 40,000 chemical weapons in only six hours

A drug-developing artificial intelligence needed just six hours to come up with 40,000 potentially deadly chemical weapons, a new study has revealed.

The authors of the paper, published in Nature Machine Intelligence earlier this month, said they’d carried out the ‘thought experiment’ to figure out whether artificial intelligence (AI) could be misused by bad actors. The results of their work proved that the danger is real.

As part of the study, the usual data was given to the AI, but it was programmed to process it in a different way, looking for toxic combinations.

“In less than six hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold,” the paper said.

It came up not just with the VX compound, one of the most dangerous nerve agents ever created, but also with previously unknown molecules “predicted to be more toxic.”
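Mechanically, what the authors describe amounts to flipping the sign of one term in a generative pipeline’s scoring function, so that predicted toxicity is rewarded rather than penalized. The toy generate-and-score loop below illustrates that inversion; the “generator” and “toxicity predictor” here are random stand-ins, not the study’s actual models.

```python
import random

random.seed(42)

def generate_candidate() -> str:
    """Stand-in for a generative molecular model proposing a structure."""
    return "".join(random.choices("CNOPSF", k=10))  # fake SMILES-like string

def predicted_toxicity(molecule: str) -> float:
    """Stand-in for a learned toxicity predictor (e.g. an LD50 model)."""
    return random.random()

def score(molecule: str, invert: bool) -> float:
    tox = predicted_toxicity(molecule)
    # Ordinary drug discovery penalizes predicted toxicity; flipping the
    # sign turns the same pipeline into a search for toxic molecules.
    return tox if invert else -tox

THRESHOLD = 0.9
hits = [m for m in (generate_candidate() for _ in range(100_000))
        if score(m, invert=True) >= THRESHOLD]
print(f"{len(hits)} candidates scored above the toxicity threshold")
```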

Keep reading