Scientists Say They’re Now Actively Trying to Build Conscious Robots

2022 was a banner year for artificial intelligence, and particularly with the launch of OpenAI’s incredibly impressive ChatGPT, the industry is showing no sign of slowing down.

But for some industry leaders, chatbots and image-generators are far from the final robotic frontier. Next up? Consciousness.

“This topic was taboo,” Hod Lipson, the mechanical engineer in charge of the Creative Machines Lab at Columbia University, told The New York Times. “We were almost forbidden from talking about it — ‘Don’t talk about the c-word; you won’t get tenure’ — so in the beginning I had to disguise it, like it was something else.”

Consciousness is one of the longest-standing, and most divisive, questions in the field of artificial intelligence. And while to some it’s science fiction — and indeed has been the plot of countless sci-fi books, comics, and films — to others, like Lipson, it’s a goal, one that would undoubtedly change human life as we know it for good.

“This is not just another research question that we’re working on — this is the question,” the researcher continued. “This is bigger than curing cancer.”

“If we can create a machine that will have consciousness on par with a human, this will eclipse everything else we’ve done,” he added. “That machine itself can cure cancer.”

Keep reading

Biden Admin Funds AI To Police Online Language

Government spending records have revealed that the Biden Administration is dishing out more than half a million dollars in grants to fund the development of artificial intelligence that will censor language on social media in order to eliminate ‘microaggressions’.

The Washington Free Beacon reports that the funding was part of Biden’s $1.9 trillion ‘American Rescue Plan’ and was granted to researchers at the University of Washington in March to develop technologies that could be used to protect online users from ‘discriminatory’ language.

Judicial Watch president Tom Fitton compared the move to the Chinese Communist Party’s efforts to “censor speech unapproved by the state,” calling it a “project to make it easier for their leftist allies to censor speech.”

Keep reading

A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?

In the fall of 2020, gig workers in Venezuela posted a series of images to online forums where they gathered to talk shop. The photos were mundane, if sometimes intimate, household scenes captured from low angles—including some you really wouldn’t want shared on the Internet. 

In one particularly revealing shot, a young woman in a lavender T-shirt sits on the toilet, her shorts pulled down to mid-thigh.

The images were not taken by a person, but by development versions of iRobot’s Roomba J7 series robot vacuum. They were then sent to Scale AI, a startup that contracts workers around the world to label audio, photo, and video data used to train artificial intelligence. 

They were the sorts of scenes that internet-connected devices regularly capture and send back to the cloud—though usually with stricter storage and access controls. Yet earlier this year, MIT Technology Review obtained 15 screenshots of these private photos, which had been posted to closed social media groups. 

The photos vary in type and in sensitivity. The most intimate image we saw was the series of video stills featuring the young woman on the toilet, her face blocked in the lead image but unobscured in the grainy scroll of shots below. In another image, a boy who appears to be eight or nine years old, and whose face is clearly visible, is sprawled on his stomach across a hallway floor. A triangular flop of hair spills across his forehead as he stares, with apparent amusement, at the object recording him from just below eye level.

The other shots show rooms from homes around the world, some occupied by humans, one by a dog. Furniture, décor, and objects located high on the walls and ceilings are outlined by rectangular boxes and accompanied by labels like “tv,” “plant_or_flower,” and “ceiling light.” 
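The rectangular boxes and labels described above are the raw material of supervised training data for object detection. Neither Scale AI’s nor iRobot’s actual schema is public, so the following is only an illustrative sketch of how one labeled frame might be represented, using a generic annotation layout and the labels quoted in the article:

```python
# Hypothetical, simplified annotation record for one labeled frame.
# Boxes are (x, y, width, height) in pixels; labels are class names
# like those reported in the article. Not a real vendor format.
frame = {
    "image_id": "frame_0001",
    "annotations": [
        {"label": "tv",              "box": (40, 12, 200, 120)},
        {"label": "plant_or_flower", "box": (260, 30, 50, 90)},
        {"label": "ceiling light",   "box": (150, 0, 60, 40)},
    ],
}

def labels_in(frame):
    """Return the set of class labels annotated in a frame."""
    return {a["label"] for a in frame["annotations"]}

def box_area(box):
    """Area of an (x, y, w, h) bounding box in square pixels."""
    _, _, w, h = box
    return w * h

print(sorted(labels_in(frame)))
print(box_area(frame["annotations"][0]["box"]))
```

Labelers draw these boxes by hand, frame by frame, which is why human contractors end up viewing the raw footage at all.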

Keep reading

Zooming Our Way Into Oblivion

Look at all of the wonders that technology has brought us! I am certainly not going to start listing them here; it would take volumes to come up with even a partial list. We can bask in the marvels that technology has made for us in our modern world.

What would we do without this special form of human know-how?

That being said, there is a shadow to everything, and people are just as familiar with this darker side of technology as they are with the brighter side.

Needless to say, we have been inundated with the disasters of our insatiable desire to create conglomerations of various individual components that, when properly animated with some sort of power source, “do” something that we find useful, exciting, and entertaining—or deadly.

Most of this inundation comes from fanciful science fiction stories about killer robots and strange mechanical implants or, the most horrific addition to this plethora of “bots gone bad,” nanotechnology—tiny, cell-sized or even smaller mechanical creatures that can penetrate the inner sanctums of our bodies and wreak a special sort of bedlam.

We have been slowly approaching an era where AI will become the primary way of life—we will live under the authority of a technocracy. Humankind will be just a whisper in some octogenarian’s late night dreams. Humankind will be gone.

Not so fast, in the words of alt-news hero James Corbett:

“Here’s a great big white pill for you: the technocratic system of tyranny is going to fail. This is not wishful thinking; it’s a cold statement of fact. Technocracy, in all its facets—from the UN’s 2030 Agenda to the brain chips and AI godheads of the transhumanists to the CBDC social credit surveillance state—is anti-human. It goes against nature itself. It cannot work in the long run, and it is destined to fail.”

I wish I could be so optimistic. I cannot.

Keep reading

Fact Check-Photos allegedly from Admiral Byrd’s Antarctic expedition were generated by artificial intelligence

Social media users are sharing images which they claim show top secret photographs from Admiral Byrd’s Antarctic expedition where traces of a lost ancient civilization can be seen. However, the images were created by artificial intelligence and do not show historic photographs of the expedition.

An example can be seen (here).

The text in one post reads: “This is why no one is allowed to visit Antarctica and why every government in the world signed a treaty together to conspire and hide the truth from the mass population. Below you see Top Secret Lost Photos from Admiral Byrd’s Antarctic Expedition. Traces of a lost ancient advance civilization could be seen in the photographs.”

Comments on the post include: “Most interesting photos I’ve seen for a while….” and “There’s most certainly other reasons, would be nice knowing the entire truth of their discoveries.”

Some users point out that the faces of the individuals seen in the images are not visible and others point out that the images must have been created by artificial intelligence.

Admiral Richard E. Byrd was a U.S. naval officer, aviator and explorer who went on several Antarctic expeditions between 1928 and 1956 (here). Details about each expedition can be seen (www.admiralbyrd.com/).

There is no evidence that Byrd discovered a secret civilization in his expeditions as claimed in the posts. However, the expeditions have fueled conspiracy theories. One example is the “Hollow Earth” theory, which holds that the center of the Earth houses a secret civilization, as discussed (here), (here).

Videos of Byrd’s Antarctic expeditions can be seen (here), (here), (here), (here) by Reuters and British Pathe. Photographs can be seen on Getty Images (here).

The images seen in the posts appear in a Medium article (here).

The article says: “Thanks to a source who wishes to remain nameless, we had the opportunity to view a large and compelling image collection of never before seen and highly top secret photos from Byrd’s many missions. They seem to depict concrete proof of an entire forgotten civilization — its architectures, artifacts, technologies, and much more — that once called Antarctica their home.”

A note at the end of the article reads: “Certain elements of these images may have been enhanced or generated by AI for quality purposes.”

Keep reading

Former Content Moderator Claims AIs Are Using Fake Conspiracy Theories To Silence Real Ones

It sounds like something out of the X-Files, but a former content moderator claims that AIs are generating conspiracy theories and flooding online platforms with fake images, videos, and text in order to manipulate human society, and to discredit real conspiracy theories.

The moderator-turned-whistleblower, who calls himself Scott Chatsalot — and whose real name was withheld at his request in the interest of his safety — says that he personally witnessed AI-generated conspiracy content being published onto tech platforms at a “massive scale.” He went on to claim in an email to a major news network: “There are definitely no humans behind these campaigns, which are by orders of magnitude the largest anyone has ever seen, and which platforms are totally covering up. Only AIs have the power to do this…”

Chatsalot alleges that after posting his allegations to Twitter in mid-August, he was fired by his employer for doing so, and he subsequently deleted his Twitter account. When contacted, Chatsalot declined to comment, and his personal Facebook account appears to have been deleted or shadowbanned, though he has since uploaded an apology video to YouTube and taken to his Reddit account, which still appears active, to repeat the same allegations.

The author of the alleged email, which was not independently verified, claimed that the company he worked for, referred to by Chatsalot only as Widget, was hired by a large tech company as a content moderation team, whose work he describes as “very sensitive.”

On Reddit, Chatsalot alleged that in 2019 he was promoted to lead a large content moderation team of 2,500 employees tasked with policing posts from “all over the internet, including Reddit, Facebook, YouTube, Twitter, etc.”

According to Chatsalot, the team’s job was to police “all kinds of posts,” from “pornographic to political to religious to everything in between” and its work was “extremely secretive, and there was no oversight,” meaning there was “no one to tell you what was wrong.” He says they “just made it all up on the fly, and nobody else cared.”

Things changed in late 2021, when the moderation team began seeing a huge influx of AI-generated media. Chatsalot claims that the content in question was all “generated by AIs and never human hands,” and that the images “are generated completely by artificial intelligence and machine learning trained off all the worst content from social media platforms, which Widget had unique access to.”

He says he saw the AI-generated content via Widget’s massive content moderation platform used by major social platforms, and that, “the scale at which this was being generated is insane. Billions per second insane. I’ve never seen anything like it.” He claimed that the platforms “don’t want to talk about it, because it makes them look bad.” He also claimed that whoever is generating these images, text, and videos, “the goal is to generate controversy to get people to click on your content and earn likes and money.” He says the AIs are using A/B testing to see what people respond to most negatively.

He also claimed that Widget itself was using AI to manipulate content across various platforms, and that, “It was using AI to target the conspiracy theories themselves for clicks and revenues.”

“Basically, the AI was teaching itself to know what was a conspiracy theory and what was not. That was its job.”

“The AI could generate a conspiracy theory from nothing and it would always seem real and people would look at it and like it,” he said, “It was actually producing this stuff, and it would use a real photo of something else and alter it, and then post it with a new face or caption that was totally different.” He goes on to claim that “The AI knew that it would make people mad and it would make people click on the image.”

Keep reading

Should Killer AI Robots Be Banned?

The Netherlands deployed its first lethal autonomous weapons last month, according to the military and intelligence trade journal Janes.

As Statista’s Anna Fleck reports, the move marks the first time that a NATO army has started operational trials with armed unmanned ground vehicles (UGVs), more commonly known as “killer robots” – a worrying shift in warfare from the West.

Four armed Tracked Hybrid Modular Infantry System (THeMIS) UGVs were reportedly deployed to Lithuania on September 12, where they are undergoing trials in a “military-relevant environment”, according to Janes.

Unlike drones, which require a human to instruct them where to move and how to act, these robotic tank-like weapons are designed to pull the trigger themselves.

The UN has convened repeatedly to decide whether or not to ban killer robots, or merely to regulate them.

The vast majority of the world remains critical of lethal autonomous weapons systems in war, according to research carried out by Ipsos and the Campaign to Stop Killer Robots.

Keep reading

United States Government Has Plans of Creating an AI that Can Expose Anonymous Writers

According to a recent announcement by the Office of the Director of National Intelligence (ODNI), the Intelligence Advanced Research Projects Activity (IARPA) is developing a program to unmask anonymous writers. IARPA will use AI to analyze anonymous writers’ style. According to Cindy Harper of Reclaim the Net, a writer’s style “is seen as potentially being as unique as a fingerprint.”

“Humans and machines produce vast amounts of text content every day. Text contains linguistic features that can reveal author identity,” IARPA stated.

If IARPA succeeds with its venture, it believes that the Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program could identify a writer’s style from multiple samples and change those patterns to increase the anonymization of the writing. 

“We have a strong chance of meeting our goals, delivering much-needed capabilities to the Intelligence Community, and substantially expanding our understanding of variation in human language using the latest advances in computational linguistics and deep learning,” declared HIATUS program manager Dr. Timothy McKinnon.

On top of that, IARPA said it will create explainability standards for the program’s AIs.

ODNI revealed that HIATUS could have several applications, which include fighting foreign influence activities, protecting writers whose work may potentially endanger them, and identifying counterintelligence risks. Per McKinnon, the program can also identify whether a machine or a human being wrote a given text.

However, Harper noted that “it is not IARPA’s work to turn HIATUS into something usable. The agency’s work is only to develop the technology.” Regardless, it’s becoming clear that the ruling class has it in for anonymous writers and those who use pen names. 
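IARPA has not published HIATUS’s actual feature set, but one classic signal in stylometry — the field the program builds on — is the distribution of character n-grams in a text. The toy sketch below illustrates only that general idea, with made-up sample texts, and makes no claim about HIATUS’s real methods:

```python
# Toy stylometric comparison: profile texts by character trigram counts
# and compare profiles with cosine similarity. Illustrative only.
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Count overlapping character n-grams, a common stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (0.0 to 1.0)."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical writing samples: one in a similar register to the
# "known" author, one in a very different register.
known = "The committee shall convene forthwith; objections must be filed in writing."
sample_a = "Objections shall be filed forthwith, in writing, before the committee convenes."
sample_b = "lol no way, that meeting is gonna be a total mess tbh"

sim_a = cosine(char_ngrams(known), char_ngrams(sample_a))
sim_b = cosine(char_ngrams(known), char_ngrams(sample_b))
print(sim_a, sim_b)  # the stylistically similar sample scores higher
```

Real attribution systems combine many such features (vocabulary, syntax, punctuation habits) and, per the program’s stated goals, would also need to explain which features drove a match.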

Keep reading
