AI = Slavery

I wrote in Part 1 about the human conditioning test for acceptance of artificial intelligence running our Government and Business World. The test was quite simple: How much “Self-Service” are people willing to take before we say “Enough is Enough!”?

We failed miserably. What’s worse, there seems to be no end to how much crap we can take. At least for as long as the food and EBT cards hold out.

Why is that important?

It’s because self-service is what we will all be doing, endlessly, in an AI-driven, internet-only world where human interaction is transactionally eliminated in all but social relationships.

Few bother to fight, especially when the alternative of resistance requires a certain amount of inconvenience, persistence and even courage.

After all, people still have Amazon and Bank of America accounts and take their offspring to Disneyland in between Church visits. Easy peasy.

We not only accept our self-service Internet of Things world but do so gladly, without even asking for the discount that the generation ahead of us would and did require. Consequently, just like our Covid Lockdown test, we met a necessary standard proving we will do nearly anything we are told. Don’t believe me? How soon we forget.

How about people with MDs telling 80-year-olds to wear ill-fitting dust masks in 90-degree heat? What about those same doctors injecting infants, who had a near-zero risk of dying from Covid, with still-experimental mRNA treatments after reports of severe injury were publicly known and easy to find (if not from your own patients)? Or that we allowed churches to close for the first time in American history?

Only an illegal monopolistic and powerful brainwashing legacy media could make all that happen. But, it did happen because we lost our ability to distinguish truth from propaganda. Somehow, it seemed comfortable. And safe.

That means full steam ahead towards our future nightmare in a 1984 Brave New World. A world made possible in our lifetimes by Artificial Intelligence.

Keep reading

Did AI Almost Start World War III?

Recall that the Covid fiasco went into overdrive when Neil Ferguson of Imperial College London generated a wildly incorrect estimate of the fatality rate of the virus from China. He had two forecasts, one without lockdowns (death everywhere) and one with (not terrible). The idea was to inspire the replication of the CCP’s extreme methods of people control in the West. 

That model, first shared in classified realms, flipped the narrative. Once select advisors – Deborah Birx and Anthony Fauci among them – presented it to Trump, he went from opposing lockdowns to getting in front of the seemingly inevitable. 

Before long, every Gates-funded NGO was pushing more such models that proved the point. Masses of people observed the models as if they were an accurate reflection of reality. Major media reported on them daily. 

As the fiasco dragged on, so did the data fakery. The PCR tests were generating false positives, giving the impression of an unfolding calamity even though medically significant infections were highly limited. Infections and even exposures were redefined as cases, for the first time in epidemiological history. Then came the subsidized “deaths from Covid” that clearly generated waves of misclassification, compounding the overestimation of the fatality rate.

Keep reading

China Reportedly On Verge Of 100 DeepSeek-Like Breakthroughs Amid Aspirations For World Domination

China is preparing to launch a tsunami of domestic AI innovation, with more than 100 DeepSeek-like breakthroughs (more here) expected within the next 18 months, according to former PBOC Deputy Governor Zhu Min, as reported by Bloomberg. This development signals Beijing’s intent to rapidly close the technological gap ahead of the 2030s. 

Speaking at the World Economic Forum’s “Annual Meeting of the New Champions” in Tianjin, China, Zhu told the audience that 100 DeepSeek-like breakthroughs “will fundamentally change the nature and the tech nature of the whole Chinese economy.”

The emergence of DeepSeek, a low-cost, powerful AI model, has fueled Chinese tech stocks and underscored China’s AI competitiveness despite U.S. restrictions on advanced chips and domestic macroeconomic headwinds. Bloomberg Economics projects high-tech’s contribution to China’s GDP could rise from 15% in 2024 to over 18% by 2026.

Traders are rotating into Chinese equities, with the Hang Seng Index surging 25% year-to-date, significantly outperforming the S&P 500, which is up just 3.3% and effectively flat in real terms. China stocks outperformed soon after DeepSeek’s launch in January. 

Keep reading

China shuts down AI tools during nationwide college exams

Chinese AI companies have temporarily paused some of their chatbot features to prevent students from using them to cheat during nationwide college exams, Bloomberg reports. Popular AI apps, including Alibaba’s Qwen and ByteDance’s Doubao, have stopped picture-recognition features from responding to questions about test papers, while Tencent’s Yuanbao and Moonshot’s Kimi have suspended photo-recognition services entirely during exam hours.

The increasing availability of chatbots has made it easier than ever for students around the world to cheat their way through education. Schools in the US are trying to address the issue by reintroducing paper tests, with the Wall Street Journal reporting in May that sales of blue books have boomed in universities across the country over the last two years.

The rigorous multi-day “gaokao” exams are sat by more than 13.3 million Chinese students from June 7 to 10, each fighting to secure one of the limited spots at universities across the country. Students are already banned from using devices like phones and laptops during the hours-long tests, so the disabling of AI chatbots serves as an additional safety net against cheating during exam season.

When asked to explain the suspension, Bloomberg reports the Yuanbao and Kimi chatbots responded that functions had been disabled “to ensure the fairness of the college entrance examinations.” Similarly, the DeepSeek AI tool that went viral earlier this year is also blocking its service during specific hours “to ensure fairness in the college entrance examination,” according to The Guardian.

Keep reading

Florida Police: Christian School Teacher May Have Used Student Images to Create AI Child Porn

A sixth-grade teacher in Central Florida was arrested this week on a host of charges for possessing child pornography, apparently created with online AI technology and possibly using student photos from his Christian school.

State Attorney General James Uthmeier’s office charged David McKeown of Holly Hill with 19 enhanced felony counts of possession of child sexual abuse material and six counts of possession of animal pornography, according to a statement released by the office.

McKeown was arrested Friday by the Holly Hill Police Department at his home in Volusia County. He was a sixth-grade teacher at United Brethren in Christ (UBIC) Academy, a school affiliated with the UBIC church.

Holly Hill Police Department’s investigation alleges that McKeown shared and downloaded pornographic images depicting child porn via Discord, an online chat service, while at school and connected to the school’s Wi-Fi network.

Some 30 images were allegedly shared, including six files depicting McKeown sexually abusing animals, the Florida Department of Law Enforcement (FDLE) reported.

Uthmeier said in the statement:

As a teacher, parents trusted Mr. McKeown to impart knowledge to their children. Instead, he spent parts of the school day sending and receiving child sex abuse material and providing other pedophiles with UBIC Academy students’ personal information. What he did is beyond betrayal — it’s devastating and sick.

The investigation was launched early this month after authorities received a tip from the National Center for Missing and Exploited Children, which tracks the internet for exploitative content involving minors, Orlando’s Fox 35 reported.

The news outlet also reported authorities believe McKeown used AI technology to create the pornographic images and may have used photos of real children, perhaps his own students. The investigation is continuing.

Detectives seized a number of devices from the teacher’s home in Holly Hill and from the school. He was booked into the Volusia County jail and a judge denied him the possibility of bond.

If convicted, he faces up to 315 years in prison, officials said.

Keep reading

Palantir Denies Claims It Is Building Master Database

Palantir Technologies is roundly denying claims it’s building a massive, unified database containing Americans’ personal information, following media coverage implying its work for various federal agencies could enable unprecedented surveillance.

On May 30, the New York Times published an article highlighting the potential impact of the more than $900 million worth of federal contracts awarded to the Denver-based technology company since the beginning of the Trump administration.

“We are not building, we have not been asked to build, and we’re not in contract to build any kind of federal master list or master database across different agencies,” Courtney Bowman, the company’s global director of privacy and civil liberties, told The Epoch Times. “Each of those contracts are separate and fulfill specific mandates that are scoped and bound by congressional authorities and other laws.”

In March, President Donald Trump signed an executive order designed to limit wasteful spending by “eliminating information silos” among federal agencies. The order mandates that federal agencies must share data with each other. Furthermore, it requires the federal government to have unrestricted access to data from state programs receiving federal funding.

In the days following the report, various media outlets published reports that interpreted Palantir’s work as tantamount to developing a “‘master database’ or ‘central intelligence layer’ drawing on Internal Revenue Service, Social Security, immigration and other records,” the Digital Trade & Data Governance Hub at George Washington University said in June.

“Collecting and linking such a vast array of sensitive records could create an unprecedented surveillance infrastructure. … There is a heightened risk of sensitive data being repurposed for uses beyond its original intent, or being used for political purposes,” a team led by Michael Moreno, a research associate at the Hub, said.

Keep reading

The AI Slop Fight Between Iran and Israel

As Israel and Iran trade blows in a quickly escalating conflict that risks engulfing the rest of the region and drawing Iran and the U.S. into more direct confrontation, social media is being flooded with AI-generated media that claims to show the devastation but is fake.

The fake videos and images show how generative AI has already become a staple of modern conflict. On one end, AI-generated content of unknown origin is filling the void created by state-sanctioned media blackouts with misinformation, and on the other end, the leaders of these countries are sharing AI-generated slop to spread the oldest forms of xenophobia and propaganda.

If you want to follow a war as it’s happening, it’s easier than ever. Telegram channels post live streams of bombing raids as they happen and much of the footage trickles up to X, TikTok, and other social media platforms. There’s more footage of conflict than there’s ever been, but a lot of it is fake.

A few days ago, Iranian news outlets reported that Iran’s military had shot down three F-35s. Israel denied it happened. As the claim spread so did supposed images of the downed jet. In one, a massive version of the jet smolders on the ground next to a town. The cockpit dwarfs the nearby buildings and tiny people mill around the downed jet like Lilliputians surrounding Gulliver.

It’s a fake, an obvious one, but thousands of people shared it online. Another image of the supposedly downed jet showed it crashed in a field somewhere in the middle of the night. Its wings were gone and its afterburner still glowed hot. This was also a fake.

Keep reading

ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

The study divided 54 subjects, 18- to 39-year-olds from the Boston area, into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”

Keep reading

ChatGPT: How Technocracy Thrives On Global Chaos

This is an unedited, un-redacted analysis from OpenAI, answering my question: “With military and economic chaos in the world, what kinds of options are on the table for Technocracy and Technocrats to push ahead with their agenda?” It appears that ChatGPT tells more than 99 percent of the news analysts in the world ever would. But this machine thinking has no bias against talking about Technocracy, and has mined data from the Internet to support its conclusions.

Imagine if Lester Holt or Anderson Cooper said something like this: “Historically, Technocracy thrives not in times of democratic stability, but in moments of systemic failure—when populations and institutions are desperate for “scientific” or “data-driven” solutions to chaos.”

One thing is for sure: Technocrats thrive during periods of crisis and chaos, and that would be now. War in Europe. War in the Mideast. Threats of war in Asia. Riots everywhere, including in America. Technopopulism is offered as the answer, as populists everywhere embrace Technocracy as the solution while drinking the Kool-Aid.

Keep reading

Teachers Must Avert an AI-Facilitated Intellectual Dark Age

I remember watching a YouTube interview with a highly intelligent and observant entrepreneur, who cheerfully predicted that the time would come when AI programmes would replace teachers, rendering their jobs obsolete. The commentator in question was an enthusiastic advocate of personal and economic freedom and a vocal critic of the excessive incursions of State agencies into our personal lives. Yet for some reason, he seemed relatively unconcerned at the prospect of machines teaching our children.

Of course, there are tasks that most would happily relegate to AI programmes to the benefit of humanity, such as certain forms of tedious clerical work, a large chunk of manual labour, and the synthesis of unwieldy amounts of data. However, there are other tasks that cannot be delegated to a machine without endangering invaluable dimensions of our lives as human beings.

One of those tasks is teaching and learning, through which people learn to think, interpret the world, make rational arguments, assess evidence, make rational and holistic choices, and reflect on the meaning of their lives. For better or for worse, teachers, from kindergarten right up to university level, form the minds of the next generation. The formation of the mind relies on apprenticeship, imitation of a worthy model, and intellectual practice and training. 

Much as an athlete fine-tunes his motor skills and muscle memory playing sport, and finds inspiration in an exemplary athlete, the student fine-tunes his mental skills thinking, reflecting, studying, analysing, and generating ideas and arguments, in dialogue with an inspiring teacher. There is both an interpersonal and “hands-on” dimension to human learning, both of which are indispensable. 

Yet Artificial Intelligence is reaching the point where it has the capacity to automate and mechanise certain aspects of teaching and learning, marginalising crucial aspects of the learning process, most notably the way a teacher can model intellectual activity for the student, and the intellectual tasks a teacher assigns to students in order to fine-tune their mental skills and imagination. Many tasks which, just a few years ago, had to be undertaken “manually,” by which I mean, through the laborious activity, imagination, and effort of a human being, can now be performed automatically by AI.

Keep reading