Malaysia, Indonesia Block Musk’s Grok Amid Explicit Content Scrutiny

Malaysia and Indonesia have announced a decision to block access to Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, amid scrutiny over the tool’s use to generate sexually explicit images, including ones that appear to depict minors.

The coordinated actions by the two Southeast Asian nations mark the strongest regulatory response yet against Grok, as governments across Europe and North America press X and xAI to explain how the system allowed the creation of sexualized and manipulated images.

In Indonesia, the Ministry of Communication and Digital Affairs said on Jan. 10 that it had temporarily suspended access to Grok to protect the public from “the risk posed by fake pornographic content” generated using artificial intelligence. The ministry said the decision was aimed at safeguarding women, children, and society at large.

“The Government considers the practice of non-consensual sexual deepfakes to be a serious violation of human rights, dignity, and the security of citizens in the digital space,” Communications and Digital Affairs Minister Meutya Hafid said in a statement.

The ministry also said it had asked representatives of X to clarify the negative impacts linked to Grok’s use.

Indonesia’s communications ministry said the temporary suspension was carried out under its existing regulatory authority. It cited a 2020 regulation governing private electronic system operators, which requires platforms to ensure their systems do not host or facilitate prohibited electronic content.

Keep reading

AI FLOP: Dutch Court Annuls Couple’s Marriage Over ChatGPT Speech

It may have been artificial, but in this case, it was not intelligent.

We live in a world where the power of Artificial Intelligence is starting to impact many aspects of our lives – and not always for the best.

Case in point: a Dutch couple had their marriage annulled after the person officiating used a ChatGPT-generated speech.

Yes, you read it right.

The AI-generated speech was intended to be ‘playful,’ but authorities decided that ‘it failed to meet legal requirements.’

This was announced in a court ruling published this week.

Reuters reported:

“The pair from the city of Zwolle, whose names were redacted from the January 5 decision under Dutch privacy rules, argued that they had intended to marry regardless of whether the right wording was used when they took their vows.

According to the decision, the person officiating their ceremony last April 19 asked whether they would ‘continue supporting each other, teasing each other and embracing each other, even when life gets difficult’.”

Keep reading

Musk: AI Will Make Cash Worthless, Work Optional, Retirement-saving Obsolete — and More

If “work ennobles man,” as the saying goes, are we headed for a very ignoble future? If “cash is king” today, what will reign tomorrow? If an abundance of the material can bury the spiritual, are we headed for an ever-more intensified secularism?

These questions could and should be asked in light of a prediction billionaire industrialist Elon Musk recently made.

Our not-too-distant future is one, he says, in which cash will be worthless and work merely an option. Why, Musk adds, there may not even be a reason to save for retirement. How come?

Artificial intelligence (AI) and robotics, he states, will within a decade or two deliver a world of mechanical slaves that will satisfy every human need and want. In fact, the only limits to this abundance might be energy constraints and the finiteness of raw materials.

The Ant and The Grasshopper — Mr. Hopper’s Time Has Come?

Reporting on the story earlier this week, The Daily Overview wrote:

Musk has moved beyond warning that AI will disrupt jobs and is now arguing that it will underwrite a new baseline of prosperity. As Tesla CEO, he has said that advanced systems will create a kind of universal high income that makes traditional saving less important, because machines will be able to produce almost everything people need with minimal human labor. In his view, the combination of AI and robotics [AI-Bot] will eliminate poverty by driving the cost of goods and services toward zero….

He has gone further, arguing that as AI systems scale, money itself will soon be useless in the way people currently understand it. In one account, the argument is framed explicitly as “According to Elon Musk, Money Will Soon Be Useless, Why Does He Predict the End of Poverty,” with Musk contending that AI and robotics will become the backbone of a utopian society where scarcity is engineered away and financial incentives lose their central role. That framing captures his claim that the same technologies that threaten existing jobs could, if managed correctly, also dismantle material deprivation….

This may sound fanciful to some. But the only real question is whether we’ll destroy ourselves, or whether AI will, before or soon after this technology’s full flowering. What’s for certain is that if we don’t, AI-Bot will eventually be able to perform every or virtually every job. Why, need a plumber? A dexterous AI android may be repairing your pipes.

Keep reading

Starmer’s Looking for an Excuse to Ban X

Keir Starmer has signaled he is prepared to back regulatory action that could ultimately result in X being blocked in the UK.

The Prime Minister of the United Kingdom has suggested, more or less, that because Elon Musk’s AI chatbot Grok has been generating images of women and minors in bikinis, he’ll support going as far as hitting the kill switch and blocking access to the entire platform.

“The situation is disgraceful and disgusting,” Starmer said on Greatest Hits Radio, the station best known for playing ABBA and now, apparently, for frontline authoritarian tech policy announcements.

“X has got to get a grip of this, and Ofcom has our full support to take action…I’ve asked for all options to be on the table.”

“All options,” for those who don’t speak fluent Whitehall euphemism, now apparently includes turning Britain’s digital infrastructure into a sort of beige North Korea, where a bunch of government bureaucrats, armed with nothing but Online Safety Act censorship law and the panic of a 90s tabloid, get to decide which speech the public is allowed to see.

Now, you might be wondering: Surely he’s bluffing? Oh no. According to Downing Street sources, they’re quite serious.

And they’ve even named the mechanism: the Online Safety Act, that cheery little piece of legislation that sounds like it’s going to help grandmothers avoid email scams, but actually gives Ofcom the power to block platforms, fine them into oblivion, or ban them entirely if they don’t comply with government censorship orders.

Killing X isn’t a new idea. You may remember that Morgan McSweeney, Keir Starmer’s Chief of Staff, founded the Centre for Countering Digital Hate. In 2024, leaks revealed that the group was trying to “Kill Musk’s Twitter.”

Keep reading

How A Techno-Optimist Became A Grave Skeptic

Before Covid, I would have described myself as a technological optimist. New technologies almost always arrive amid exaggerated fears. Railways were supposed to cause mental breakdowns, bicycles were thought to make women infertile or insane, and early electricity was blamed for everything from moral decay to physical collapse. Over time, these anxieties faded, societies adapted, and living standards rose. The pattern was familiar enough that artificial intelligence seemed likely to follow it: disruptive, sometimes misused, but ultimately manageable.

The Covid years unsettled that confidence—not because technology failed, but because institutions did.

Across much of the world, governments and expert bodies responded to uncertainty with unprecedented social and biomedical interventions, justified by worst-case models and enforced with remarkable certainty. Competing hypotheses were marginalized rather than debated. Emergency measures hardened into long-term policy. When evidence shifted, admissions of error were rare, and accountability rarer still. The experience exposed a deeper problem than any single policy mistake: modern institutions appear poorly equipped to manage uncertainty without overreach.

That lesson now weighs heavily on debates over artificial intelligence.

The AI Risk Divide

Broadly speaking, concern about advanced AI falls into two camps. One group—associated with thinkers like Eliezer Yudkowsky and Nate Soares—argues that sufficiently advanced AI is catastrophically dangerous by default. In their deliberately stark formulation, If Anyone Builds It, Everyone Dies, the problem is not bad intentions but incentives: competition ensures someone will cut corners, and once a system escapes meaningful control, intentions no longer matter.

A second camp, including figures such as Stuart Russell, Nick Bostrom, and Max Tegmark, also takes AI risk seriously but is more optimistic that alignment, careful governance, and gradual deployment can keep systems under human control.

Despite their differences, both camps converge on one conclusion: unconstrained AI development is dangerous, and some form of oversight, coordination, or restraint is necessary. Where they diverge is on feasibility and urgency. What is rarely examined, however, is whether the institutions expected to provide that restraint are themselves fit for the role.

Covid suggests reason for doubt.

Covid was not merely a public-health crisis; it was a live experiment in expert-driven governance under uncertainty. Faced with incomplete data, authorities repeatedly chose maximal interventions justified by speculative harms. Dissent was often treated as a moral failing rather than a scientific necessity. Policies were defended not through transparent cost-benefit analysis but through appeals to authority and fear of hypothetical futures.

This pattern matters because it reveals how modern institutions behave when stakes are framed as existential. Incentives shift toward decisiveness, narrative control, and moral certainty. Error correction becomes reputationally costly. Precaution stops being a tool and becomes a doctrine.

Keep reading

EU says it is ‘seriously looking’ into Musk’s Grok AI over sexual deepfakes of minors

The European Commission said on Jan. 5 it is “very seriously looking” into complaints that Mr Elon Musk’s AI tool Grok is being used to generate and disseminate sexually explicit child-like images.

“Grok is now offering a ‘spicy mode’ showing explicit sexual content with some output generated with child-like images. This is not spicy. This is illegal. This is appalling,” EU digital affairs spokesman Thomas Regnier told reporters.

He added: “This has no place in Europe.”

Complaints of abuse began hitting Mr Musk’s X social media platform, where Grok is available, after an “edit image” button for the generative artificial intelligence tool was rolled out in late December.

But Grok maker xAI, run by Mr Musk, said earlier in January it was scrambling to fix flaws in its AI tool.

The public prosecutor’s office in Paris has also expanded an investigation into X to include new accusations that Grok was being used for generating and disseminating child pornography.

Keep reading

China’s Military AI and Biotechnology Directed at the United States

Soldiers in a brigade attached to the 83rd Group Army of the Chinese People’s Liberation Army conduct virtual reality exercises. Photo: Screenshot from China Central Television

According to a congressional report, the People’s Liberation Army continues to exploit military-civil fusion to integrate commercial and academic research into military systems. Military-civil fusion is China’s national strategy to merge civilian technology, research institutions, and industry with the defense sector in order to build a world-class military.

The strategy aligns commercial innovation with military requirements across fields ranging from artificial intelligence to semiconductors, pooling state and private resources to accelerate military development. Chinese authorities describe military-civil fusion as a core component of comprehensive national power and a central driver of long-term military modernization.

Through state laboratories, funding programs, conferences, and industrial parks, China has ensured sustained private-sector participation in this effort. As a result, it has made significant advances in artificial intelligence and large language models that underpin many emerging PLA technologies. AI reasoning systems support cyber operations, command decision-making, and influence campaigns, while also enabling autonomous and unmanned platforms, drone swarms, and loyal wingman UAVs.

These capabilities increasingly intersect with developments in quantum computing, quantum sensing, and quantum communications, which China has identified as priorities for national security and future warfare.

Chinese leader Xi Jinping has described quantum technologies as drivers of industrial transformation, and Beijing is investing in post-quantum cryptography, military applications of quantum sensing, and ground- and space-based infrastructure for a global quantum communications network with both civilian and military uses.

Quantum communications support nuclear command, control, and communications by enabling hardened and interception-resistant links, while quantum sensing has potential applications in anti-submarine warfare by enabling detection methods that do not rely on active sonar.

Semiconductor self-sufficiency remains a parallel strategic objective. In 2024, firms including Semiconductor Manufacturing International Corp and Huawei Technologies received substantial local government funding to accelerate chip indigenization. Although China continues to lag the West in the most advanced GPUs, it is pursuing alternative pathways through nontraditional microchip technologies, including photonic components developed by state research institutes.

Domestic chip production underpins military resilience by enabling continued weapons manufacturing under sanctions and securing supply chains for missiles, drones, and radar systems. Alternative chip architectures support AI processing and reduce reliance on advanced Western GPUs, sustaining production capacity during conflict.

Keep reading

Is The Gloom And Doom About An AI Dystopia Justified?

“Artificial intelligence does the work of many minds at once. Will human creativity flourish or fail?”

“Artificial intelligence will make us useless slaves. We must stop these abuses before they start!”

“AI will create killer robots! We’re doomed!”

I have heard various versions of the above concerns regarding the rise of robots, the growth of artificial intelligence, and the broader concerns about the moral and ethical dilemmas facing humanity as technological innovation advances—and then accelerates.

The gloom around AI is understandable but incorrect.

Technological innovation has always served as a winnowing process: old jobs fall away and new ones take their place; some career paths disappear, but new opportunities emerge.

No matter how sophisticated, artificial intelligence cannot replace human intelligence, wisdom, ingenuity, and entrepreneurship. AI can hone the specialized skills of those who want to maintain specific fields of craftsmanship, but craftsmen are not going away.

As Canadian commentator JJ McCullough has noted, AI makes it easier to generate templates and ideas, but the quality and taste of the resulting pictures, objects, and ideas are, on the surface, still cringeworthy. A machine cannot inspire, nor can it replicate the inspiration of the human spirit. Whatever stories, poems, or other forms of art come out of a ChatGPT prompt, their style and substance will never satisfy or suffuse the human mind. Furthermore, the compact creations of Grok or Meta AI programs can’t reflect the inner tensions of man’s search for place or meaning in his world, including in the scenes he depicts. While AI can generate pictures or turn ideas into pictures, it cannot create or enhance the contrasts, shades, and shadows that transform the ordinary into the extraordinary.

Of course, a growing number of artists and intellectuals are ruggedly opposed to AI. They think that all creative works deserve assessment and praise based on the amount of effort put into their creation. Here lies the fundamental misunderstanding of the marketplace among leftists: value is not determined by labor but by the valuation of the consumer. No creation, good, or service has value in and of itself; rather, its value rests on what consumers derive from it. That is the subjective theory of value of Austrian economics.

Animators fear that AI will take their jobs away. AI might make it easier to produce films, but the fundamental characters and templates of individual actors, processions, and ideas will have to come from the people. The stunning beauty of Walt Disney’s “Snow White” still rivals the computer animation of DreamWorks.

Instead of limiting or bankrupting artists, AI will induce the general public to discern quality, and artists will strive to meet that standard. The general public will gain a greater appreciation for the abstract and the avant-garde. That’s a win-win for artists and audiences alike.

We cannot predict how broad and commanding man’s ingenuity will be going forward. AI has provided a means for man to be more creative more quickly, but it cannot predict or anticipate the future wants and needs of the general public, either.

Man and his search for competence, recognition, and meaning will not disappear; rather, our lives will improve as we search for answers.

Another fear about rapid mechanization and advancement of AI was that there would be such widespread unemployment that governments and peoples would have to invest in universal basic incomes.

Industries that promoted the upkeep and well-being of horses fell into decline with the arrival of the automobile. Animal enthusiasts, performers, and general-interest equestrians still own horses, ride them, and enjoy their company. The horse-riding industry shrank, but it became more specialized.

But specialized careers employ fewer people. Where’s the comfort, then? Consider the moment when banks switched to automation in the 1970s. There was widespread fear that automated teller machines (ATMs) would put thousands of people out of work. The opposite happened. Banks shifted their services toward more customer-facing features. With the resulting savings, these financial firms opened more branches, and they ended up having to hire more people! Free enterprise does entail creative destruction, but a creative proliferation always follows!

Furthermore, it’s rather arrogant for labor leaders and liberal pundits to claim that “there will be no jobs left.” Human wants and needs are constantly changing. Steve Jobs created the Internet-connected portable phone before there was a thought of, let alone a demand for, such a device. Once he invented the nifty device, everyone had to have one. Innovations often create the need because of the facility and agility they provide to the consumer.

Even now, reports are listing the jobs that AI cannot replace. Human beings will always have employment opportunities.

Keep reading

The AI Arms Race Is Cracking Open The Nuclear Fuel Cycle

  • The abstract “cloud” of artificial intelligence possesses a massive, structural demand for 24/7 “baseload” power that is equivalent to adding Germany’s entire power grid by 2026, a need intermittent renewables cannot meet.
  • Decades of underinvestment have resulted in a widening uranium supply deficit, with mined uranium expected to meet less than 75% of future reactor needs and an incentive price of $135/lb required to restart mothballed mines.
  • Big Tech hyperscalers are privatizing energy security by locking in clean baseload nuclear power via long-term agreements, effectively making the public grid’s “service” secondary to the “compute-ready” requirements of major platforms.

We are seeing a violent collision between two worlds: the high-speed, iterative world of artificial intelligence and the slow, grinding, capital-intensive world of nuclear physics. 

Data from a survey of over 600 global investors reveals that 63% now view AI electricity demand as a “structural” shift in nuclear planning. This isn’t a temporary spike or a speculative bubble. It is the physical footprint of every Large Language Model (LLM) query finally showing up on the global balance sheet.

For years, the energy narrative was dominated by “efficiency.” We were told that better chips would offset higher usage. That era is over. Generative AI doesn’t just use data; it incinerates energy to create it.

Why the “Efficiency” Narrative Failed

The “Reverse-Polish” reality of AI is that the more efficient we make the chips, the more chips we deploy, and the more complex the models become. This is Jevons Paradox playing out in real-time across the data centers of Northern Virginia and Singapore.
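To see how efficiency gains and rising total consumption can coexist, here is a minimal back-of-the-envelope sketch of the Jevons dynamic; every number in it is a hypothetical assumption for illustration, not a figure from the article:

```python
# Toy model of Jevons Paradox in AI compute. All inputs are hypothetical.
def total_energy_twh(chips_millions: float, queries_per_chip: float,
                     joules_per_query: float) -> float:
    """Annual fleet energy in TWh (1 TWh = 3.6e15 joules)."""
    joules = chips_millions * 1e6 * queries_per_chip * joules_per_query
    return joules / 3.6e15

# Baseline: 5M accelerators, 1B queries each per year, 10 J per query.
base = total_energy_twh(5, 1e9, 10)

# Later: chips are twice as efficient (5 J/query), but the fleet quadruples
# and per-chip usage doubles as cheaper inference unlocks new workloads.
later = total_energy_twh(20, 2e9, 5)

print(f"before: {base:.1f} TWh/yr, after: {later:.1f} TWh/yr")
# Efficiency doubled, yet total consumption roughly quadrupled.
```

Halving the energy per query still quadruples the total bill once deployment and usage respond to the lower cost per query.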

When you look at the energy density required for an AI hyperscale center, you aren’t looking at a traditional office building. You are looking at a facility that pulls as much power as a mid-sized city, but does so with a 99.999% uptime requirement.
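For a sense of what that uptime requirement means in practice, here is a quick calculation; the 99.999% figure comes from the paragraph above, and the rest is simple arithmetic:

```python
# Allowed downtime at "five nines" (99.999%) availability over one year.
minutes_per_year = 365.25 * 24 * 60              # ~525,960 minutes
allowed_downtime = minutes_per_year * (1 - 0.99999)
print(f"{allowed_downtime:.1f} minutes of downtime per year")  # ~5.3 minutes
```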

Traditional demand models simply didn’t account for a single industry deciding to double its power footprint in less than five years. S&P Global Energy recently highlighted that data center electricity consumption could hit 2,200 terawatt-hours (TWh). 

Intermittent renewables, the darlings of the corporate ESG report, cannot provide the 24/7 “baseload” these machines require…

The hyperscalers have realized that if they want to dominate AI, they need to secure physical atoms before the other guy does.

Keep reading

Artificial Intelligence In The Classroom Destroys Actual Intelligence In Students

I always assumed that before AI destroyed our humanity, we’d at least put up a fight. No, I’m not talking about AI gaining consciousness and physically enslaving us (though I’m certainly not ruling that out as a possibility) — but it is alarming how quick many are to accept AI usage not just in their daily lives, but in education.

As an educator, I’ve heard high school teachers and college professors alike defend teaching AI usage in the classroom: “The technology isn’t going away, so kids have to learn how to use it responsibly.” “AI can be a useful tool.” “Learning how to write the right prompts is a marketable skill.” They say we should not only allow but encourage students to use AI to brainstorm ideas, write outlines, and provide feedback on their work.

On the surface, these suggestions can seem benign. Our society is pushing the idea that AI usage is not only inevitable but good. “You’re a writer,” a silky voice in an advertisement for AI software sings, “even if you are the kind who relies on AI.” Okay, so that’s not the exact verbiage, but that’s the idea we’re being sold. We’re reassured that AI can simply be a legitimate “tool.” You are a writer even if you use an AI generator. You are an artist just by writing prompts. You are a creator, although it’s the algorithms doing the creating.

If the goal is simply to produce outcomes, one could argue that AI usage should not just be tolerated but encouraged. But education shouldn’t be about producing outcomes – whether a sparkling essay or a gripping short story – but about shaping souls. The purpose of writing isn’t to craft a prompt, or even to produce a quality paper. The purpose is to become a strong thinker, someone who enriches the lives of others, no matter their profession.

Each and every step of the struggle it takes to write is essential. Yes, it can all be arduous and time-consuming. As a writer, I get how hard it is and how tempting it might be to take shortcuts. But doing so is cheating oneself out of growth and intellectual payoff. Outsourcing parts of the process to algorithms and machines is outsourcing the rewards of doing one’s own thinking. Organizing ideas, refining word choices, and thinking about tone are all skills that many citizens in this nation lack, and that lack is often apparent in our chaotic, senseless public discourse. These are not steps to be skipped over with a “tool,” but things people benefit from learning if they value reason. Strong writing is strong thinking.

But these thoughts aren’t just my own opinions. A recent MIT study shows that AI usage decreases cognitive functions like critical thinking. It seems rather odd to insist that something shown to weaken our brains should be introduced into institutions of learning, doesn’t it?

Many argue that in order to thrive in today’s job market, young people need to master the skill of “writing prompts.” The assumption is that it’s a great skill to learn how to tell a robot to do a job for you; a skill so great, in fact, that we need to send kids to school for it. For decades, educators have argued that kids need screen time to prepare them for today’s job market. They acted as if using the internet were a skill that needs years of training, when in reality three-year-olds naturally become experts. Let us first focus on developing the minds of the youth — something best done without AI assistance — and then let them use those skills in the workplace as needed. Students should aspire to be more than mere “prompt writers”; they should aspire to be minds capable of thinking, reasoning, and persevering.

Keep reading