Germany Pressures Apple and Google to Ban Chinese AI App DeepSeek

Apple and Google are facing mounting pressure from German authorities to remove the Chinese AI app DeepSeek from their app stores in Germany over data privacy violations.

The Berlin Commissioner for Data Protection and Freedom of Information, Meike Kamp, has flagged the app for transferring personal data to China without adhering to EU data protection standards.

Kamp’s office examined DeepSeek’s practices and found that the company failed to offer “convincing evidence” that user information is safeguarded as mandated by EU law.

She emphasized the risks linked to Chinese data governance, warning that “Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies.”

With this in mind, Apple and Google have been urged to evaluate the findings and consider whether to block the app in Germany.

Authorities in Berlin had already asked DeepSeek to either meet EU legal requirements for data transfers outside the bloc or remove its app from German availability.

DeepSeek did not take action to address these concerns, according to Kamp.

Germany’s move follows Italy’s earlier decision this year to block DeepSeek from local app stores, citing comparable concerns about data security and privacy.

Keep reading

Mark Zuckerberg’s Meta Notches Legal Win Against Authors in AI Copyright Case

Mark Zuckerberg’s Meta has prevailed in a copyright infringement lawsuit brought by authors who claimed the company violated their rights by using millions of copyrighted books to train its AI language model, Llama. Although the decision is a win for Meta and other AI giants, the judge stated the ruling was more about the plaintiffs’ poor case than about Meta’s approach to AI training.

Bloomberg Law reports that a San Francisco federal court has ruled in favor of Mark Zuckerberg’s Meta in a lawsuit brought by a group of authors. The plaintiffs alleged that Meta had violated their copyrights by using millions of books to train its generative AI model, Llama, without obtaining permission.

Judge Vince Chhabria determined that Meta’s use of the copyrighted books for AI training falls under the fair use defense in copyright law. However, the judge cautioned that his opinion should not be interpreted as a blanket endorsement of Meta’s practices, stating that the ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

The judge’s decision appears to hinge on the authors’ failure to effectively argue their case, rather than a definitive legal interpretation of the fair use doctrine in the context of AI training. This suggests that future cases involving similar issues may yield different outcomes, depending on the strength of the arguments presented.

The lawsuit, which was closely watched by the tech industry and legal experts, is believed to be the first of its kind to challenge the use of copyrighted material for training AI models. As generative AI technologies continue to advance and become more prevalent, the question of how copyright law applies to the use of protected works in AI training is likely to remain a contentious issue.

A Meta spokesperson told Breitbart News, “We appreciate today’s decision from the Court. Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”

Keep reading

AI = Slavery

I wrote in Part 1 about the human conditioning test for acceptance of artificial intelligence running our Government and Business World. The test was quite simple: How much “Self-Service” are people willing to take before we say “Enough is Enough!”

We failed miserably. What’s worse, there seems to be no end to how much crap we can take. At least for as long as the food and EBT cards hold out.

Why is that important?

It’s because self-service is what we will all be doing, endlessly, in an AI-driven, internet-only world where human interaction is transactionally eliminated in all but social relationships.

Few bother to fight, especially when resistance requires a certain amount of inconvenience, persistence, and even courage.

After all, people still have Amazon and Bank of America accounts and take their offspring to Disneyland in between Church visits. Easy peasy.

We not only accept our self-service Internet of Things world but do so gladly, even without asking for the discount that the generation ahead of us would and did require. Consequently, just like our Covid Lockdown test, we met a necessary standard proving we will do nearly anything we are told. Don’t believe me? How soon we forget.

How about people with MDs telling 80-year-olds to wear ill-fitting dust masks in 90-degree heat? What about those same doctors injecting infants, who face a near-zero risk of dying from Covid, with still-experimental mRNA treatments after reports of severe injury were publicly known and easy to find (if not from your own patients)? Or that we allowed churches to close for the first time in American history?

Only an illegal, monopolistic, and powerful brainwashing legacy media could make all that happen. But it did happen, because we lost our ability to distinguish truth from propaganda. Somehow, it seemed comfortable. And safe.

That means full steam ahead towards our future nightmare in a 1984 Brave New World. A world made possible in our lifetimes by Artificial Intelligence.

Keep reading

Did AI Almost Start World War III?

Recall that the Covid fiasco went into overdrive when Neil Ferguson of Imperial College London generated a wildly incorrect estimate of the fatality rate of the virus from China. He had two forecasts, one without lockdowns (death everywhere) and one with (not terrible). The idea was to inspire the replication of the CCP’s extreme methods of people control in the West. 

That model, first shared in classified realms, flipped the narrative. Once select advisors – Deborah Birx and Anthony Fauci among them – presented it to Trump, he went from opposing lockdowns to getting in front of the seemingly inevitable. 

Before long, every Gates-funded NGO was pushing more such models that proved the point. Masses of people observed the models as if they were an accurate reflection of reality. Major media reported on them daily. 

As the fiasco dragged on, so did data fakery. The PCR tests were generating false positives, giving the impression of an unfolding calamity even though medically significant infections were highly limited. Infections and even exposures were redefined as cases, for the first time in epidemiological history. Then came the subsidized “deaths from Covid,” which clearly generated waves of misclassification that further inflated the overestimation of the fatality rate.
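The base-rate arithmetic behind that false-positive complaint is easy to make concrete: when very few of the people being tested are actually infected, even a test that is rarely wrong will produce positives that are mostly false. Here is a minimal sketch of the calculation; every number in it is a hypothetical assumption chosen purely for illustration, not a figure taken from this article or any study.

```python
# Illustrative base-rate calculation. All rates below are hypothetical
# assumptions, not figures from the article or from any real PCR test.

prevalence = 0.005           # assume 0.5% of those tested are truly infected
sensitivity = 0.95           # assume the test catches 95% of true infections
false_positive_rate = 0.02   # assume it wrongly flags 2% of uninfected people

tested = 1_000_000
truly_infected = tested * prevalence
true_positives = truly_infected * sensitivity
false_positives = (tested - truly_infected) * false_positive_rate

total_positives = true_positives + false_positives
share_false = false_positives / total_positives

print(f"Reported 'cases': {total_positives:,.0f}")
print(f"False positives among them: {false_positives:,.0f} ({share_false:.0%})")
# With these assumed numbers, roughly four out of five reported positives
# are false, even though the test is 98% specific.
```

Under those assumed rates, the count of reported “cases” overstates true infections roughly fivefold, which is the mechanism the paragraph above is pointing to.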

Keep reading

China Reportedly On Verge Of 100 DeepSeek-Like Breakthroughs Amid Aspirations For World Domination

China is preparing to launch a tsunami of domestic AI innovation, with more than 100 DeepSeek-like breakthroughs expected within the next 18 months, according to former PBOC Deputy Governor Zhu Min, as reported by Bloomberg. This development signals Beijing’s intent to rapidly close the technological gap ahead of the 2030s.

Speaking at the World Economic Forum’s “Annual Meeting of the New Champions” in Tianjin, China, Zhu told the audience that 100 DeepSeek-like breakthroughs “will fundamentally change the nature and the tech nature of the whole Chinese economy.”

The emergence of DeepSeek, a low-cost, powerful AI model, has fueled Chinese tech stocks and underscored China’s AI competitiveness despite U.S. restrictions on advanced chips and domestic macroeconomic headwinds. Bloomberg Economics projects high-tech’s contribution to China’s GDP could rise from 15% in 2024 to over 18% by 2026.

Traders are rotating into Chinese equities, with the Hang Seng Index surging 25% year-to-date, significantly outperforming the S&P 500, which is up just 3.3% and effectively flat in real terms. China stocks outperformed soon after DeepSeek’s launch in January. 

Keep reading

China shuts down AI tools during nationwide college exams

Chinese AI companies have temporarily paused some of their chatbot features to prevent students from using them to cheat during nationwide college exams, Bloomberg reports. Popular AI apps, including Alibaba’s Qwen and ByteDance’s Doubao, have stopped picture recognition features from responding to questions about test papers, while Tencent’s Yuanbao and Moonshot’s Kimi have suspended photo-recognition services entirely during exam hours.

The increasing availability of chatbots has made it easier than ever for students around the world to cheat their way through education. Schools in the US are trying to address the issue by reintroducing paper tests, with the Wall Street Journal reporting in May that sales of blue books have boomed in universities across the country over the last two years.

The rigorous multi-day “gaokao” exams are sat by more than 13.3 million Chinese students between June 7th and 10th, each fighting to secure one of the limited spots at universities across the country. Students are already banned from using devices like phones and laptops during the hours-long tests, so the disabling of AI chatbots serves as an additional safety net to prevent cheating during exam season.

When asked to explain the suspension, Bloomberg reports the Yuanbao and Kimi chatbots responded that functions had been disabled “to ensure the fairness of the college entrance examinations.” Similarly, the DeepSeek AI tool that went viral earlier this year is also blocking its service during specific hours “to ensure fairness in the college entrance examination,” according to The Guardian.

Keep reading

Florida Police: Christian School Teacher May Have Used Student Images to Create AI Child Porn

A sixth-grade teacher in Central Florida was arrested this week on a host of charges for possessing child pornography, apparently created with online AI technology and possibly using student photos from his Christian school.

State Attorney General James Uthmeier’s office charged David McKeown of Holly Hill with 19 enhanced felony counts of possession of child sexual abuse material and six counts of possession of animal pornography, according to a statement released by the office.

McKeown was arrested Friday by the Holly Hill Police Department at his home in Volusia County. He was a sixth-grade teacher at United Brethren in Christ (UBIC) Academy, a school affiliated with the UBIC church.

Holly Hill Police Department’s investigation alleges that McKeown shared and downloaded images depicting child pornography via Discord, an online chat service, while at school and connected to the school’s Wi-Fi network.

Some 30 images were allegedly shared, including six files depicting McKeown sexually abusing animals, the Florida Department of Law Enforcement (FDLE) reported.

Uthmeier said in the statement:

As a teacher, parents trusted Mr. McKeown to impart knowledge to their children. Instead, he spent parts of the school day sending and receiving child sex abuse material and providing other pedophiles with UBIC Academy students’ personal information. What he did is beyond betrayal — it’s devastating and sick.

The investigation was launched early this month after authorities received a tip from the National Center for Missing and Exploited Children, which tracks the internet for exploitative content involving minors, Orlando’s Fox 35 reported.

The news outlet also reported authorities believe McKeown used AI technology to create the pornographic images and may have used photos of real children, perhaps his own students. The investigation is continuing.

Detectives seized a number of devices from the teacher’s home in Holly Hill and from the school. He was booked into the Volusia County jail and a judge denied him the possibility of bond.

If convicted, he faces up to 315 years in prison, officials said.

Keep reading

Palantir Denies Claims It Is Building Master Database

Palantir Technologies is roundly denying claims it’s building a massive, unified database containing Americans’ personal information, following media coverage implying its work for various federal agencies could enable unprecedented surveillance.

On May 30, the New York Times published an article highlighting the potential impact of the more than $900 million worth of federal contracts awarded to the Denver-based technology company since the beginning of the Trump administration.

“We are not building, we have not been asked to build, and we’re not in contract to build any kind of federal master list or master database across different agencies,” Courtney Bowman, the company’s global director of privacy and civil liberties, told The Epoch Times. “Each of those contracts are separate and fulfill specific mandates that are scoped and bound by congressional authorities and other laws.”

In March, President Donald Trump signed an executive order designed to limit wasteful spending by “eliminating information silos” among federal agencies. The order mandates that federal agencies must share data with each other. Furthermore, it requires the federal government to have unrestricted access to data from state programs receiving federal funding.

In the days following the report, various media outlets published reports that interpreted Palantir’s work as tantamount to developing a “‘master database’ or ‘central intelligence layer’ drawing on Internal Revenue Service, Social Security, immigration and other records,” the Digital Trade & Data Governance Hub at George Washington University said in June.

“Collecting and linking such a vast array of sensitive records could create an unprecedented surveillance infrastructure. … There is a heightened risk of sensitive data being repurposed for uses beyond its original intent, or being used for political purposes,” said a team led by Michael Moreno, a research associate at the Hub.

Keep reading

The AI Slop Fight Between Iran and Israel

As Israel and Iran trade blows in a quickly escalating conflict that risks engulfing the rest of the region and drawing Iran and the U.S. into more direct confrontation, social media is being flooded with AI-generated media that claims to show the devastation but is fake.

The fake videos and images show how generative AI has already become a staple of modern conflict. On one end, AI-generated content of unknown origin is filling the void created by state-sanctioned media blackouts with misinformation, and on the other end, the leaders of these countries are sharing AI-generated slop to spread the oldest forms of xenophobia and propaganda.

If you want to follow a war as it’s happening, it’s easier than ever. Telegram channels post live streams of bombing raids as they happen and much of the footage trickles up to X, TikTok, and other social media platforms. There’s more footage of conflict than there’s ever been, but a lot of it is fake.

A few days ago, Iranian news outlets reported that Iran’s military had shot down three F-35s. Israel denied it happened. As the claim spread so did supposed images of the downed jet. In one, a massive version of the jet smolders on the ground next to a town. The cockpit dwarfs the nearby buildings and tiny people mill around the downed jet like Lilliputians surrounding Gulliver.

It’s a fake, an obvious one, but thousands of people shared it online. Another image of the supposedly downed jet showed it crashed in a field somewhere in the middle of the night. Its wings were gone and its afterburner still glowed hot. This was also a fake.

Keep reading

ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

The study divided 54 subjects, 18-to-39-year-olds from the Boston area, into three groups and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions and found that, of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small, but its main author, Nataliya Kosmyna, felt it was important to release the findings now to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”

Keep reading