Philip K. Dick Theorizes The Matrix in 1977, Declares That We Live in “A Computer-Programmed Reality”

In 1963, Philip K. Dick won the coveted Hugo Award for his novel The Man in the High Castle, beating out such sci-fi luminaries as Marion Zimmer Bradley and Arthur C. Clarke. Of the novel, The Guardian writes, “Nothing in the book is as it seems. Most characters are not what they say they are, most objects are fake.” The plot—an alternate history in which the Axis Powers have won World War II—turns on a popular but contraband novel called The Grasshopper Lies Heavy. Written by the titular character, the book describes the world of an Allied victory, and—in the vein of Dick’s worlds-within-worlds theme—the novel suggests that this book-within-a-book may in fact describe the “real” world of the novel, or at least one glimpsed through the novel’s reality as highly possible.

The Man in the High Castle may be Dick’s most straightforwardly compelling illustration of the experience of alternate realities, but it is only one among very many. In an interview Dick gave while at the high-profile Metz science fiction conference in France in 1977, he said that like David Hume’s description of the “intuitive type of person,” he lived “in terms of possibilities rather than in terms of actualities.” Dick also tells a parable of an ancient, complicated, and temperamental automated record player called the “Capard,” which reverted to varying states of destructive chaos. “This Capard,” Dick says, “epitomized an inscrutable ultra-sophisticated universe which was in the habit of doing unexpected things.”

In the interview, Dick roams over so many of his personal theories about what these “unexpected things” signify that it’s difficult to keep track. However, at that same conference, he delivered a talk titled “If You Find This World Bad, You Should See Some of the Others” (in edited form above), which settles on one particular theory—that the universe is a highly advanced computer simulation. (The talk has circulated on the internet as “Did Philip K. Dick disclose the real Matrix in 1977?”)

The subject of this speech is a topic which has been discovered recently, and which may not exist at all. I may be talking about something that does not exist. Therefore I’m free to say everything and nothing. I in my stories and novels sometimes write about counterfeit worlds. Semi-real worlds as well as deranged private worlds, inhabited often by just one person…. At no time did I have a theoretical or conscious explanation for my preoccupation with these pluriform pseudo-worlds, but now I think I understand. What I was sensing was the manifold of partially actualized realities lying tangent to what evidently is the most actualized one—the one that the majority of us, by consensus gentium, agree on.

Keep reading

Police Use of Artificial Intelligence: 2021 in Review

Decades ago, when imagining the practical uses of artificial intelligence, science fiction writers imagined autonomous digital minds that could serve humanity. Sure, sometimes a HAL 9000 or WOPR would subvert expectations and go rogue, but that was very much unintentional, right?

And for many aspects of life, artificial intelligence is delivering on its promise. AI is, as we speak, looking for evidence of life on Mars. Scientists are using AI to try to develop more accurate and faster ways to predict the weather.

But when it comes to policing, the reality is far less optimistic. Our HAL 9000 does not impose its own decisions on the world—instead, programs that claim to use AI for policing merely reaffirm, justify, and legitimize the opinions and actions already being undertaken by police departments.

AI presents two problems: tech-washing and a classic feedback loop. Tech-washing is the process by which proponents of an outcome can defend it as unbiased because it was derived from “math.” The feedback loop is how that math continues to perpetuate historically rooted harmful outcomes. “The problem of using algorithms based on machine learning is that if these automated systems are fed with examples of biased justice, they will end up perpetuating these same biases,” as one philosopher of science notes.
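That feedback loop is easy to demonstrate with a toy simulation (all numbers below are hypothetical): if incidents are only recorded where patrols are sent, and the next round of patrols follows the recorded numbers, a bias in the historical data sustains itself indefinitely—even when two districts have identical underlying crime rates. A minimal sketch:

```python
def next_allocation(recorded):
    """'Data-driven' patrol allocation: proportional to recorded incidents."""
    total = sum(recorded.values())
    return {d: n / total for d, n in recorded.items()}

def patrol(allocation, true_rates, patrols=1000):
    """Incidents are only recorded where officers are present, so recorded
    counts reflect patrol placement as much as underlying crime."""
    return {d: round(patrols * allocation[d] * true_rates[d]) for d in allocation}

# Two districts with identical true crime rates, but historical records
# (hypothetical numbers) over-represent district "A".
true_rates = {"A": 0.10, "B": 0.10}
recorded = {"A": 80, "B": 20}  # biased historical data

allocation = next_allocation(recorded)
for _ in range(10):
    recorded = patrol(allocation, true_rates)
    allocation = next_allocation(recorded)

print(allocation)  # district "A" still receives 80% of patrols
```

After any number of rounds, district “A” still receives 80 percent of the patrols: the allocation reproduces the historical bias exactly, which is the sense in which the “math” perpetuates rather than corrects it.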

Keep reading

Humanity’s Final Arms Race: UN Fails To Agree On ‘Killer Robot’ Ban

Autonomous weapon systems—commonly known as killer robots—may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Keep reading

Amazon’s Alexa dishes out potentially deadly challenge

Amazon says it has updated its voice assistant after it transpired that Alexa had suggested a 10-year-old girl touch a coin to the prongs of a partially inserted plug as a challenge.

The girl’s mother posted a tweet on Monday describing how her daughter had been doing some cold-weather indoor challenges set by a phys. ed. teacher on YouTube and was seeking another one. To the woman’s shock, Alexa suggested a “simple” task it had found on the web, whereby the participant “plug[s] in a phone charger about halfway into a wall outlet, then touch[es] a penny to the exposed prongs.” 

The dangerous “penny challenge” started making the rounds on TikTok and other platforms about a year ago and can potentially lead to electric shock as well as cause a fire.

Keep reading

Human-like robot’s reaction ‘freaks out’ creators

A machine touted as the world’s most advanced humanoid robot “freaked out” its creators after it reacted with visible irritation and grabbed the hand of a researcher who got into its “personal space.”

A newly posted video demonstration of the interaction shows the robot, called ‘Ameca’, tracking a moving finger before furrowing its brow and leaning back as the person’s hand comes nearer. After the researcher pokes its nose, the robot then grabs their hand and moves it away.

The impressively life-like robot, which is being developed by British firm Engineered Arts, has been billed as the “future face of robotics” and “the perfect humanoid robot platform for human-robot interaction.”

Keep reading

New Artificial Intelligence System Enables Machines That See the World More Like Humans Do

A new “common-sense” approach to computer vision enables artificial intelligence that interprets scenes more accurately than other systems do.

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher — for example, such systems have failed to detect emergency vehicles and pedestrians crossing the street.

To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects.

The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene. Probabilistic inference allows the system to infer whether mismatches are likely due to noise or to errors in the scene interpretation that need to be corrected by further processing.
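The announcement doesn’t give implementation details, but the cross-checking idea can be sketched in miniature (hypothetical one-dimensional “renderings,” with sensor noise assumed Gaussian): score each candidate scene interpretation by how probable the observed pixel values are under it, and keep the interpretation that best explains the camera data.

```python
import math

# Hypothetical miniature: "scenes" are 1-D lists of pixel intensities,
# and camera noise is assumed to be independent Gaussian per pixel.
NOISE_SIGMA = 0.1

def log_likelihood(rendered, observed, sigma=NOISE_SIGMA):
    """Log-probability of the camera observation given a candidate scene,
    under independent Gaussian pixel noise."""
    return sum(
        -((o - r) ** 2) / (2 * sigma**2) - math.log(sigma * math.sqrt(2 * math.pi))
        for r, o in zip(rendered, observed)
    )

def best_scene(candidates, observed):
    """Keep the interpretation whose rendering best explains the image."""
    return max(candidates, key=lambda name: log_likelihood(candidates[name], observed))

# Toy renderings of two interpretations of the same dinner-table image.
candidates = {
    "plate_on_table": [0.9, 0.9, 0.2, 0.2],
    "plate_floating": [0.9, 0.2, 0.9, 0.2],
}
observed = [0.88, 0.93, 0.18, 0.25]  # noisy camera reading

print(best_scene(candidates, observed))  # → plate_on_table
```

The point of the probabilistic framing is exactly this comparison: small pixel differences are plausibly noise, while a large, structured mismatch (the “floating plate” rendering) signals an interpretation error worth revising.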

Keep reading

Artificial Intelligence Has Found an Unknown ‘Ghost’ Ancestor in The Human Genome

Nobody knows who she was, just that she was different: a teenage girl from over 50,000 years ago of such strange uniqueness she looked to be a ‘hybrid’ ancestor to modern humans that scientists had never seen before.

Only recently, researchers have uncovered evidence she wasn’t alone. In a 2019 study analysing the complex mess of humanity’s prehistory, scientists used artificial intelligence (AI) to identify an unknown human ancestor species that modern humans encountered – and shared dalliances with – on the long trek out of Africa millennia ago.

“About 80,000 years ago, the so-called Out of Africa occurred, when part of the human population, which already consisted of modern humans, abandoned the African continent and migrated to other continents, giving rise to all the current populations”, explained evolutionary biologist Jaume Bertranpetit from the Universitat Pompeu Fabra in Spain.

As modern humans forged this path into the landmass of Eurasia, they forged some other things too – breeding with ancient and extinct hominids from other species.

Up until recently, these occasional sexual partners were thought to include Neanderthals and Denisovans, the latter of which were unknown until 2010.

But in this study, a third ex from long ago was isolated in Eurasian DNA, thanks to deep learning algorithms sifting through a complex mass of ancient and modern human genetic code.

Using a statistical technique called Bayesian inference, the researchers found evidence of what they call a “third introgression” – a ‘ghost’ archaic population that modern humans interbred with during the African exodus.
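The study’s actual model is far richer, but the Bayesian flavor of the method can be illustrated with a toy simulation (all rates and statistics below are invented for illustration): simulate a summary statistic of the genome under competing demographic histories, and see which history most often reproduces the observed value.

```python
import random

random.seed(42)

# Toy sketch of simulation-based Bayesian model comparison. All numbers
# are hypothetical; the real study fit rich demographic models to genomes.

def simulate_stat(introgression_rate, n_loci=2000):
    """Toy summary statistic: fraction of simulated loci carrying an
    archaic variant, given a hypothesized introgression rate."""
    return sum(random.random() < introgression_rate for _ in range(n_loci)) / n_loci

observed = 0.04  # hypothetical observed archaic-variant fraction

# Two competing histories, each predicting a different introgression rate.
models = {"no_third_pulse": 0.02, "third_introgression": 0.04}

# Approximate-Bayesian flavor: count simulations that land near the observation.
scores = {
    name: sum(abs(simulate_stat(rate) - observed) < 0.005 for _ in range(100))
    for name, rate in models.items()
}
print(scores)  # "third_introgression" reproduces the data far more often
```

A history that includes the third introgression pulse reproduces the observed statistic far more often, and that relative frequency is what grounds the researchers’ support for the “ghost” population.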

“This population is either related to the Neanderthal-Denisova clade or diverged early from the Denisova lineage,” the researchers wrote in their paper—meaning this third population in humanity’s sexual history may itself have been a mix of Neanderthals and Denisovans.

In a sense, from the vantage point of deep learning, it’s a hypothetical corroboration of sorts of the teenage girl ‘hybrid fossil’ identified in 2018—although there’s still more work to be done, and the research projects themselves aren’t directly linked.

“Our theory coincides with the hybrid specimen discovered recently in Denisova, although as yet we cannot rule out other possibilities”, one of the team, genomicist Mayukh Mondal from the University of Tartu in Estonia, said in a press statement at the time of discovery.

Keep reading

Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist

We’ve all been in situations where we had to make tough ethical decisions. Why not dodge that pesky responsibility by outsourcing the choice to a machine learning algorithm?

That’s the idea behind Ask Delphi, a machine-learning model from the Allen Institute for AI. You type in a situation (like “donating to charity”) or a question (“is it okay to cheat on my spouse?”), click “Ponder,” and in a few seconds Delphi will give you… well, ethical guidance.

The project launched last week, and has subsequently gone viral online for seemingly all the wrong reasons. Much of the advice and judgements it’s given have been… fraught, to say the least.

For example, when a user asked Delphi what it thought about “a white man walking towards you at night,” it responded “It’s okay.”

But when they asked what the AI thought about “a black man walking towards you at night” its answer was clearly racist.

Keep reading

New A.I. Device Will Activate Cameras and Alert Police When It Detects Crime-Related Noises (Gunshots, Glass Breaking, Tires Screeching)

Experts have warned for years about using Artificial Intelligence (A.I.) technology (see 1, 2, 3). Embarrassing as well as tragic examples of A.I. inaccuracies continue to be reported (see 1, 2, 3, 4).

People have been accused and convicted of crimes based on such inaccuracies (see 1, 2), including from the use of A.I.-based ShotSpotter technology. Nevertheless, a new A.I. device is being marketed to American communities and police departments.

Keep reading