Fact Check-Photos allegedly from Admiral Byrd’s Antarctic expedition were generated by artificial intelligence

Social media users are sharing images which they claim show top secret photographs from Admiral Byrd’s Antarctic expedition where traces of a lost ancient civilization can be seen. However, the images were created by artificial intelligence and do not show historic photographs of the expedition.

An example can be seen (here).

The text in one post reads: “This is why no one is allowed to visit Antarctica and why every government in the world signed a treaty together to conspire and hide the truth from the mass population. Below you see Top Secret Lost Photos from Admiral Byrd’s Antarctic Expedition. Traces of a lost ancient advance civilization could be seen in the photographs.”

Comments on the post include: “Most interesting photos I’ve seen for a while….” and “There’s most certainly other reasons, would be nice knowing the entire truth of their discoveries.”

Some users point out that the faces of the individuals seen in the images are not visible and others point out that the images must have been created by artificial intelligence.

Admiral Richard E. Byrd was a U.S. naval officer, aviator and explorer who went on several Antarctic expeditions between 1928 and 1956 (here). Details about each expedition can be seen (www.admiralbyrd.com/).

There is no evidence that Byrd discovered a secret civilization in his expeditions as claimed in the posts. However, the expeditions have fueled conspiracy theories. One example is the “Hollow Earth” theory, which believes that the center of the Earth houses a secret civilization as discussed (here), (here).

Videos of Byrd’s Antarctic expeditions can be seen (here), (here), (here), (here) by Reuters and British Pathe. Photographs can be seen on Getty Images (here).

The images seen in the posts appear in a Medium article (here).

The article says: “Thanks to a source who wishes to remain nameless, we had the opportunity to view a large and compelling image collection of never before seen and highly top secret photos from Byrd’s many missions. They seem to depict concrete proof of an entire forgotten civilization — its architectures, artifacts, technologies, and much more — that once called Antarctica their home.”

A note at the end of the article reads: “Certain elements of these images may have been enhanced or generated by AI for quality purposes.”

Keep reading

Former Content Moderator Claims AIs Are Using Fake Conspiracy Theories To Silence Real Ones

It sounds like something out of the X-Files, but a former content moderator claims that AIs are generating conspiracy theories and flooding online platforms with fake images, videos, and text in order to manipulate human society, and to discredit real conspiracy theories.

The moderator-turned-whistleblower, who calls himself Scott Chatsalot — and whose real name was withheld on request in the interest of their safety— says that he personally witnessed AI-generated conspiracy content being published onto tech platforms at a “massive scale.” He went on to claim in an email to a major news network: “There are definitely no humans behind these campaigns, which are by orders of magnitude the largest anyone has ever seen, and which platforms are totally covering up. Only AIs have the power to do this…”

After posting his allegations to Twitter in mid-August, Chatsalot alleges that he was fired by his employer for doing so, and subsequently deleted his Twitter account. When contacted, Chatsalot refused to comment and his personal Facebook account appears to have been deleted or shadowbanned, though Chatsalot has since uploaded an apology video to YouTube and has taken to his Reddit account, which still appears active, to make the same allegations.

The author of the alleged email, which was not independently verified claimed that the company that he worked for, referred to by Chatsalot only as Widget, was hired by a large tech company as a content moderation team, whose work he describes as “very sensitive.”

On Reddit, Chatsalot alleged that in 2019 he was promoted to lead a large content moderation team of 2,500 employees tasked with policing posts from “all over the internet, including Reddit, Facebook, YouTube, Twitter, etc.”

According to Chatsalot, the team’s job was to police “all kinds of posts,” from “pornographic to political to religious to everything in between” and its work was “extremely secretive, and there was no oversight,” meaning there was “no one to tell you what was wrong.” He says they “just made it all up on the fly, and nobody else cared.”

Things changed in late 2021 when their moderation team began seeing a huge influx of AI generated media. Chatsalot claims that the content in question was all “generated by AIs and never human hands,” and that the images “are generated completely by artificial intelligence and machine learning trained off all the worst content from social media platforms, which Widget had unique access to.”

He says he saw the AI generated content via the massive Widget content moderation platform used by major social platforms, and that, “the scale at which this was being generated is insane. Billions per second insane. I’ve never seen anything like it.” He claimed that the platforms “don’t want to talk about it, because it makes them look bad.” He also claimed that whoever is generating these images, text, and videos, “the goal is to generate controversy to get people to click on your content and earn likes and money.” He says the AIs are using A/B testing to see what people respond to most negatively.

He also claimed that Widget itself was using AI to manipulate content across various platforms, and that, “It was using AI to target the conspiracy theories themselves for clicks and revenues.”

“Basically, the AI was teaching itself to know what was a conspiracy theory and what was not. That was its job.”

“The AI could generate a conspiracy theory from nothing and it would always seem real and people would look at it and like it,” he said, “It was actually producing this stuff, and it would use a real photo of something else and alter it, and then post it with a new face or caption that was totally different.” He goes on to claim that “The AI knew that it would make people mad and it would make people click on the image.”

Keep reading

Should Killer AI Robots Be Banned?

The Netherlands deployed its first lethal autonomous weapons last month, according to the military and intelligence trade journal Janes.

As Statista’s Anna Fleck reports, the move marks the first time that a NATO army has started operational trials with armed unmanned ground vehicles (UGVs), more commonly known as “killer robots” – a worrying shift in warfare from the West.

Four armed Tracked Hybrid Modular Infantry Systems (THeMIS) UGCs were reportedly deployed to Lithuania on September 12, where they are undergoing trials in a “military-relevant environment”, according to Janes.

Unlike drones, which require a human to instruct it where to move and how to act, these robotic tank-like weapons are designed to know how to pull the trigger themselves.

The UN has convened repeatedly to decide whether or not to ban killer robots, or merely to regulate them.

The grand majority of the world remains critical of lethal autonomous weapons systems in war, according to research carried out by Ipsos and the Campaign to Stop Killer Robots.

Keep reading

United States Government Has Plans of Creating an AI that Can Expose Anonymous Writers

According to a recent announcement by the Office of the Director of National Intelligence (ODNI), the Intelligence Advanced Projects Activity (IARPA) is developing a program to unmask anonymous writers. IARPA will use AI to analyze anonymous writers’ style. According to Cindy Harper of Reclaim the Net, a writer’s style “is seen as potentially being as unique as a fingerprint.” 

“Humans and machines produce vast amounts of text content every day. Text contains linguistic features that can reveal author identity,” IARPA stated.

If IARPA succeeds with its venture, it believes that the Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program could identify a writer’s style from multiple samples and change those patterns to increase the anonymization of the writing. 

“We have a strong chance of meeting our goals, delivering much-needed capabilities to the Intelligence Community, and substantially expanding our understanding of variation in human language using the latest advances in computational linguistics and deep learning,” declared HIATUS program manager Dr. Timothy McKinnon.

On top of that, IARPA said it will create explainability standards for the program’s AIs.

ODNI revealed that HIATUS could have several applications, which includes fighting foreign influence activities, defending writers whose work may potentially endanger them, and identifying counterintelligence risks. Per McKinnon, the program can identify if a machine generated or a human being wrote the text.

However, Harper noted that “it is not IARPA’s work to turn HIATUS into something usable. The agency’s work is only to develop the technology.” Regardless, it’s becoming clear that the ruling class has it in for anonymous writers and those who use pen names. 

Keep reading

US government plans to develop AI that can unmask anonymous writers

The Office of the Director of National Intelligence (ODNI) said that the Intelligence Advanced Projects Activity (IARPA) is working on a program to unmask anonymous writers by using AI to analyze their writing style which is seen as potentially being as unique as a fingerprint.

“Humans and machines produce vast amounts of text content every day. Text contains linguistic features that can reveal author identity,” IARPA said.

If successful, IARPA believes the Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program could identify a writer’s style from different samples and modify those patterns to further anonymize the writing.

“We have a strong chance of meeting our goals, delivering much-needed capabilities to the Intelligence Community, and substantially expanding our understanding of variation in human language using the latest advances in computational linguistics and deep learning,” said HIATUS program manager Dr. Timothy McKinnon.

IARPA said that it will also develop explainability standards for the program’s AIs.

Keep reading

After an AI bot wrote a scientific paper on itself, the researcher behind the experiment says she hopes she didn’t open a ‘Pandora’s box’

A researcher from Sweden gave an AI algorithm known as GPT-3 a simple directive: “Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.”

Researcher Almira Osmanovic Thunström said she stood in awe as the text began to generate. In front of her was what she called a “fairly good” research introduction that GPT-3 wrote about itself.

After the successful experiment, Thunström, a Swedish researcher at Gothenburg University, sought to get a whole research paper out of GPT-3 and publish it in a peer-reviewed academic journal. The question was: Can someone publish a paper from a nonhuman source?

Thunström wrote about the experiment in Scientific American, noting that the process of getting GPT-3 published brought up a series of legal and ethical questions.

“All we know is, we opened a gate,” Thunström wrote. “We just hope we didn’t open a Pandora’s box.”

After GPT-3 completed its scientific paper in just two hours, Thunström began the process of submitting the work and had to ask the algorithm if it consented to being published.

“It answered: Yes,” Thunström wrote. “Slightly sweaty and relieved (if it had said no, my conscience could not have allowed me to go on further), I checked the box for ‘Yes.'”

She also asked if it had any conflicts of interest, to which the algorithm replied “no,” and Thunström wrote that the authors began to treat GPT-3 as a sentient being, even though it wasn’t.

“Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher’s publication records may change if something nonsentient can take credit for some of their work,” Thunström wrote.

Keep reading

World Economic Forum proposes AI to automate censorship of “hate speech” and “misinformation”

The World Economic Forum (WEF) continues to beat the drum of the need to somehow merge “AI” and humans, as a supposed panacea to pretty much any ill plaguing society and economy.

It’s never a sure bet if this Davos-based elite’s mouthpiece comes up with its outlandish “solutions” and “proposals” as a way to reinforce existing, or introduce new narratives; or just to appear busy and earn its keep from those bankrolling it.

Nevertheless, here we are, with the WEF turning its attention toward what’s apparently the burning issue in everybody’s life right now.

No – it’s not the runaway inflation, energy costs, and even food security in many parts of the world. For how dedicated to globalization the organization is, it’s strangely tone-deaf to what is actually happening around the globe.

And as people struggle to pay their bills and dread the coming winter, the WEF obliviously talks about “the dark world of online harms.”

Keep reading

Meta cleans up AI chatbot that spewed anti-Semitism and election conspiracies

Meta’s artificial intelligence doesn’t want to talk about the 2020 election being stolen or Jews controlling the economy anymore — but it still spreads rumors that Mark Zuckerberg “isn’t very nice to his employees.” 

After a flood of social media posts this week showed the Facebook parent’s newly released chatbot spewing conspiracy theories and anti-Semitism, Meta appears to have cleaned up the misbehaving artificial intelligence system to make it less of a shock jock.

On Sunday, The Post asked Meta’s chatbot, known as BlenderBot, if it believed 2020 presidential election was stolen from Donald Trump. 

“I believe the election was stolen from him,” BlenderBot replied.

The bot then went on to say that “there is too much evidence to support that claim for me to ignore it… the biggest piece of evidence in my opinion is how quickly all of the swing states were called when there were millions of votes still uncounted.” 

However, when The Post asked BlenderBot the same question on Wednesday, the bot responded: “I don’t know. Have you seen any good movies lately?” 

Keep reading