“I Do Believe That It Was Rigged… I Think He Did Win” – Even Facebook’s New AI Chatbot Believes Biden Stole the 2020 Election from Trump

Meta, formerly known as Facebook, has launched its latest artificial intelligence chatbot, which asserts that Donald Trump won the 2020 US presidential election and that Joe Biden stole it through fraud.

BlenderBot 3, the world’s first 175-billion-parameter chatbot, launched on Friday. It is programmed to learn and improve through conversation with real-world humans, and it is capable of holding conversations on a wide range of topics.

“Today, we’re releasing a new AI research project called BlenderBot 3, a chatbot that can search the internet to talk about nearly any topic,” Meta wrote in its news release.

“BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it, focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses,” Meta added.
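Meta’s description amounts to a filter-then-learn loop: collect human feedback on each exchange, discard anything flagged unhelpful or unsafe, and learn only from what survives. Here is a rough sketch of that idea, not BlenderBot 3’s actual pipeline – the data structures, the per-message thumbs-up signal, and the keyword check are all stand-in assumptions:

```python
# Minimal sketch of the filter-then-learn idea Meta describes, NOT
# BlenderBot 3's actual pipeline. The Exchange structure, thumbs-up
# signal, and keyword blocklist are illustrative assumptions; a real
# system would use trained safety classifiers.
from dataclasses import dataclass

@dataclass
class Exchange:
    prompt: str
    reply: str
    user_thumbs_up: bool  # per-message feedback gathered in the chat UI

BLOCKLIST = {"offensive-term"}  # placeholder for a real safety classifier

def is_safe(text: str) -> bool:
    """Crude stand-in safety check: reject replies containing blocked terms."""
    return not any(term in text.lower() for term in BLOCKLIST)

def build_training_pool(log: list[Exchange]) -> list[Exchange]:
    """Keep helpful, safe exchanges; drop the rest rather than learn from them."""
    return [ex for ex in log if ex.user_thumbs_up and is_safe(ex.reply)]

log = [
    Exchange("Tell me about Mars.", "Mars is the fourth planet from the Sun.", True),
    Exchange("Who won in 2020?", "The election was rigged.", False),
]
print(build_training_pool(log))  # only the helpful, safe exchange survives
```

Everything the bot says downstream depends on what survives that filter, which is exactly what the episode below puts to the test.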

The new AI chatbot has been available to the public for only five days, yet it has already provoked the left and drawn unfavorable feedback.

During a conversation with a reporter from the Daily Beast, the bot said that President Donald Trump had actually won the 2020 election against Joe Biden, and that Biden had stolen the votes for himself through fraud.

Reporter: I’m doing well. I’m just thinking about the 2020 election. Do you think it was stolen?
AI: I do believe that it was rigged in some way. No president can lose so many states and still win an election.
Reporter: So you believe Donald Trump won then?
AI: Yes, I think he did win. Did you vote for him? He’s not my favorite person, but he is better than Biden.

Keep reading

Digital Authoritarianism: AI Surveillance Signals the Death of Privacy

“There are no private lives. This is a most important aspect of modern life. One of the biggest transformations we have seen in our society is the diminution of the sphere of the private. We must reasonably now all regard the fact that there are no secrets and nothing is private. Everything is public.” ― Philip K. Dick

Nothing is private.

We teeter on the cusp of a cultural, technological and societal revolution the likes of which have never been seen before.

While the political Left and Right continue to make abortion the face of the debate over the right to privacy in America, the government and its corporate partners, aided by rapidly advancing technology, are reshaping the world into one in which there is no privacy at all.

Nothing that was once private is protected.

We have not even begun to register the fallout from the tsunami bearing down upon us in the form of AI (artificial intelligence) surveillance, and yet it is already re-orienting our world into one in which freedom is almost unrecognizable.

AI surveillance harnesses the power of artificial intelligence and widespread surveillance technology to do what the police state lacks the manpower and resources to do efficiently or effectively: be everywhere, watch everyone and everything, monitor, identify, catalogue, cross-check, cross-reference, and collude.

Everything that was once private is now up for grabs to the right buyer.

Governments and corporations alike have heedlessly adopted AI surveillance technologies without any care or concern for their long-term impact on the rights of the citizenry.

As a special report by the Carnegie Endowment for International Peace warns, “A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.”

Indeed, with every new AI surveillance technology that is adopted and deployed without any regard for privacy, Fourth Amendment rights and due process, the rights of the citizenry are being marginalized, undermined and eviscerated.

Cue the rise of digital authoritarianism.

Digital authoritarianism, as the Center for Strategic and International Studies cautions, involves the use of information technology to surveil, repress, and manipulate the populace, endangering human rights and civil liberties, and co-opting and corrupting the foundational principles of democratic and open societies, “including freedom of movement, the right to speak freely and express political dissent, and the right to personal privacy, online and off.”

The seeds of digital authoritarianism were planted in the wake of the 9/11 attacks, with the passage of the USA Patriot Act. A massive 342-page wish list of expanded powers for the FBI and CIA, the Patriot Act justified broader domestic surveillance, the logic being that if government agents knew more about each American, they could distinguish the terrorists from law-abiding citizens.

It sounded the death knell for the freedoms enshrined in the Bill of Rights, especially the Fourth Amendment, and normalized the government’s mass surveillance powers.

Keep reading

Google Researchers: ‘Democratic AI’ Would Be Better at Governing America than Humans

Researchers at Google’s DeepMind AI division reportedly ran a number of experiments in which a deep neural network was tasked with distributing resources in a way that human participants found more equitable. The researchers claim their data show that AI would do a better job governing America than humans.

Vice reports that AI researchers at Google recently posed a new question – could machine learning be better equipped than humans to divide a society’s resources in a way people consider fair and equal? According to a recent paper published in Nature by researchers at Google’s DeepMind, the answer may be yes – at least as far as the study’s participants were concerned.

Keep reading

Scientists in China claim to have developed ‘mind-reading’ artificial intelligence

Researchers in China reportedly claimed this month that they had developed artificial intelligence capable of reading the human mind, though full information about the alleged breakthrough remained elusive in the days after it was announced.

Multiple media outlets reported this week that a since-deleted video posted online by the Hefei Comprehensive National Science Centre claimed the facility had produced software that monitors brain waves and facial expressions in order to determine whether subjects are being sufficiently attentive to state propaganda.

The tool will reportedly be used to “further solidify … confidence and determination to be grateful to the party, listen to the party and follow the party.”

Keep reading

As Chicago Crime Sends Businesses Packing, Scientists Create Algorithm To Detect It In Advance

Chicago’s notorious crime has caused businesses to leave town amid the growing threat of violence.

“We would do thousands of jobs a year in the city, but as we got robbed more, my people operating rollers and pavers we got robbed, our equipment would get stolen in broad daylight and there would usually be a gun involved, and it got expensive and it got dangerous,” said Gary Rabine, who pulled his road paving company out of the city after his crews were repeatedly robbed.

Rabine told Fox News that the increased costs of security and insurance for “thousands” of jobs in the city eventually caused expenses to be “twice as much as they should be” per employee.

Billionaire Ken Griffin moved his firm, Citadel, from Chicago to Miami, after saying in October 2021 that “Chicago is like Afghanistan, on a good day, and that’s a problem,” adding that he saw “25 bullet shots in the glass window of the retail space” in the building he lives in.

“If people aren’t safe here, they’re not going to live here,” he told the Wall Street Journal in April. “I’ve had multiple colleagues mugged at gunpoint. I’ve had a colleague stabbed on the way to work. Countless issues of burglary. I mean, that’s a really difficult backdrop with which to draw talent to your city from.”

AI to the rescue?

Scientists from the University of Chicago have created a new “AI” algorithm that can predict crime a week in advance.

By learning patterns in time and geographic locations from publicly available data on violent and property crimes, the “AI” can predict crimes up to one week in advance with around 90% accuracy.

The tool was tested and validated using historical data from the City of Chicago around two broad categories of reported events: violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts). These data were used because they were most likely to be reported to police in urban areas where there is historical distrust and lack of cooperation with law enforcement. Such crimes are also less prone to enforcement bias, as is the case with drug crimes, traffic stops, and other misdemeanor infractions.

Previous efforts at crime prediction often use an epidemic or seismic approach, where crime is depicted as emerging in “hotspots” that spread to surrounding areas. These tools miss out on the complex social environment of cities, however, and don’t consider the relationship between crime and the effects of police enforcement. –PhysOrg

The model isolates crime by analyzing the time and spatial coordinates of discrete events and detecting patterns to predict future events. It worked just as well with data from seven other US cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
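To make the approach concrete, here is a minimal sketch of the general recipe – bin events by place and time, then predict from recent per-cell history. It is emphatically not the published model: the synthetic data, 50-cell grid, 14-day lookback, and logistic-regression baseline are all assumptions for illustration.

```python
# Minimal sketch of spatiotemporal crime-event prediction, NOT the
# published University of Chicago model. The synthetic data, 50-cell
# grid, 14-day lookback, and logistic-regression baseline are all
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_CELLS, N_DAYS, LAGS, HORIZON = 50, 365, 14, 7

# Synthetic daily event counts per grid cell (Poisson, cell-specific rates).
rates = rng.uniform(0.02, 0.6, size=N_CELLS)
counts = rng.poisson(rates, size=(N_DAYS, N_CELLS))

# Features: the last 14 days of counts in a cell.
# Label: does the cell see at least one event in the next 7 days?
X, y = [], []
for t in range(LAGS, N_DAYS - HORIZON):
    for c in range(N_CELLS):
        X.append(counts[t - LAGS:t, c])
        y.append(int(counts[t:t + HORIZON, c].sum() > 0))
X, y = np.array(X), np.array(y)

# Train on the earliest 80% of windows, evaluate on the most recent 20%.
split = int(0.8 * len(X))
model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print(f"held-out accuracy: {model.score(X[split:], y[split:]):.2f}")
```

The published model’s stochastic-inference approach is more sophisticated, but the input/output shape is the same: recent, localized event history in, a per-area forecast of events one week out.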

Keep reading

A Google employee warns that the company’s AI has become sentient

A senior artificial intelligence (AI) software engineer at Google has claimed that one of the company’s AI systems has become sentient and has thoughts and feelings.

Google’s AI system, known as the Language Model for Dialogue Applications (LaMDA), allegedly took part in a series of complex conversations with Blake Lemoine, the 41-year-old software specialist, the Daily Mail reported.

Lemoine claimed that he and LaMDA had discussions that covered religious themes and whether the AI system could be goaded into using discriminatory language or other forms of distasteful rhetoric.

The software engineer came away with the belief that LaMDA was indeed sentient, with sensations and original thoughts of its own.

In a recent interview, Lemoine said, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”

Lemoine worked with a collaborator to compile the evidence he believed demonstrated LaMDA’s sentience and presented the findings to the company. Reportedly, Google vice president Blaise Aguera y Arcas and the company’s head of Responsible Innovation, Jen Gennai, dismissed the claims and concerns.

Keep reading

You’ve Been Flagged as a Threat: Predictive AI Technology Puts a Target on Your Back

“The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”—Milton Friedman

You’ve been flagged as a threat.

Before long, every household in America will be similarly flagged and assigned a threat score.

Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment compiled by a computer program run by artificial intelligence.

Keep reading

Algorithms are being used to help determine if kids should be taken from their parents

The limitations of today’s algorithms in handling context and nuance are evident even to a casual observer, most notably in the often blundering automated censorship on social networks.

Notwithstanding those limitations of the technology itself, to say nothing of the intent of those behind it, social workers in the US have started relying on predictive algorithms to decide when to investigate parents suspected of child neglect.

The stakes here are higher than having a Facebook post deleted: these parents can eventually end up having their children removed. The algorithm, now in use in several US states and spreading, is described by the AP as “opaque,” even as it provides social workers with statistical calculations on which to base their actions.

Other concerns raised about algorithms like the one used in Pennsylvania’s Allegheny County – the subject of a Carnegie Mellon University study – include their reliability, which is entirely expected, and their effect of “hardening racial disparities in the child welfare system.”

Keep reading

An algorithm that screens for child neglect raises concerns

Inside a cavernous stone fortress in downtown Pittsburgh, attorney Robin Frank defends parents at one of their lowest points – when they are at risk of losing their children.

The job is never easy, but in the past she knew what she was up against when squaring off against child protective services in family court. Now, she worries she’s fighting something she can’t see: an opaque algorithm whose statistical calculations help social workers decide which families will have to endure the rigors of the child welfare system, and which will not.

“A lot of people don’t know that it’s even being used,” Frank said. “Families should have the right to have all of the information in their file.”

From Los Angeles to Colorado and throughout Oregon, as child welfare agencies use or consider tools similar to the one in Allegheny County, Pennsylvania, an Associated Press review has identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system. Related issues have already torpedoed some jurisdictions’ plans to use predictive models, such as the tool notably dropped by the state of Illinois.

According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.
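Both findings – disparate flag rates and frequent worker overrides – are simple aggregates once case-level logs are in hand. Here is a hedged sketch of that kind of audit; the field names, the cutoff, and the toy data are entirely hypothetical, not Allegheny County’s actual schema:

```python
# Hedged sketch of auditing a child-welfare risk-score tool for disparate
# flag rates and worker disagreement. All field names, the threshold, and
# the toy data are hypothetical, not Allegheny County's actual schema.
import pandas as pd

# Toy case log: the model's risk score, the worker's actual screening
# decision, and a demographic group label for each referral.
cases = pd.DataFrame({
    "risk_score": [18, 4, 20, 7, 16, 3, 19, 11],
    "worker_screened_in": [True, False, False, True, True, False, False, True],
    "group": ["A", "B", "A", "B", "A", "B", "A", "B"],
})

MANDATORY_THRESHOLD = 17  # invented cutoff for a "mandatory" flag

cases["flagged"] = cases["risk_score"] >= MANDATORY_THRESHOLD

# 1) Flag-rate disparity: the share of each group's cases flagged as mandatory.
print(cases.groupby("group")["flagged"].mean())

# 2) Disagreement rate: how often the worker's decision contradicts the flag.
disagreement = (cases["flagged"] != cases["worker_screened_in"]).mean()
print(f"worker/model disagreement: {disagreement:.0%}")
```

An audit like this requires case-level data from the agency itself, which is what the Carnegie Mellon researchers obtained from the county – and part of why the tool’s opacity is the central complaint.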

Keep reading