The Great Grok Bikini Scandal is just Digital ID via the Backdoor.

Two days ago, the British government announced a U-turn on its proposed digital identity scheme, saying that the much-anticipated “BritCard” would no longer be mandatory to work in the UK.

This was welcomed as a victory by both fake anti-establishment types whose job is to Pied Piper genuine opposition, and some real resistance who should know better.

The reality is that reports of the death of digital identity have been greatly exaggerated. All they said was that it would no longer be mandatory.

Having a bank account, a cellphone, or an internet connection is not mandatory, but try functioning in this world without them.

As we said on X, anybody who understands governments or human nature knew any digital ID was likely never going to be gun-to-your-head, risking-prison-time mandatory.

All it has to be is a little bit faster and/or a little bit cheaper.

Saving you half an hour when submitting your tax return, faster progress through customs, lower “processing fees” for passport or driver’s license applications.

An hour of extra time and 50 pounds saved per year will coerce more effectively than barbed wire and billy clubs ever could.

Running alongside this is the manufactured drama around Grok’s generation of images of bikini-clad public figures, something which it suited the press and punditry class to work up into “sexual assault” and “pornography” whilst imploring us all to “think of the children!”

Inside a week, X has changed its policy, and Sir Keir Starmer’s government has promised a swift resolution of the issue using legislation that was (conveniently) passed last year but has yet to be enforced (more on that in the next few days).

This issue became a “problem”, provoked a hysterical “reaction”, and was supplied a ready-made “solution”, all inside two weeks. A swifter procession of the Hegelian dialectic would be hard to find.

So, we have the reported demise of mandatory digital identity occurring alongside the rise of the “threat” of AI “deepfakes”.

Nobody in the mainstream press has actually linked these stories together, but the connection is as obvious as the next step is inevitable.

This next step is the UK introducing its own version of the Australian “social media ban” for under-16s. In effect, age-gating all online interaction on major platforms and ending online anonymity.

Keep reading

Data Centers Use Lots of Electricity. This Bill Would Let Them Go Off the Grid.

Tech companies are building data centers as quickly as possible to run AI. These facilities are controversial because they use copious amounts of electricity and might tax an electrical grid that in some areas is already straining.

In a bill introduced last week, Sen. Tom Cotton (R–Ark.) proposed an idea: letting these companies get off the grid altogether.

“Power officials have been raising concerns that the grid isn’t equipped to handle the sheer number of data centers tech companies are seeking to build,” Katherine Blunt wrote last week at The Wall Street Journal. “They say it will take many years to build new transmission lines and power plants needed to support the surge in demand while keeping the lights on for other customers.” Some officials, Blunt noted, “have proposed either requiring or encouraging data centers to stop using [the grid] when there is a risk of blackouts, either by powering down or switching to backup electricity supplies.”

Jowi Morales of Tom’s Hardware reports companies are “looking at alternative power sources to bring their projects online, regardless of the availability of power from the grid.” Microsoft, for example, is recommissioning the Three Mile Island nuclear plant in Pennsylvania to generate 835 megawatts of energy for its data centers (though not without a $1 billion loan from U.S. taxpayers).

“These initiatives will take years to take off, though,” Morales adds. “The Three Mile Island plant is expected to be operational only by 2028.”

Last week, Cotton introduced the Decentralized Access to Technology Alternatives (DATA) Act of 2026. Under the bill, “a consumer-regulated electric utility” would be “exempt from regulation” under federal law so long as it doesn’t connect to the overall electrical grid.

When one company contracts to sell electricity to another company, “that retail transaction presently would put you under the jurisdiction of a bunch of people” at the state and federal levels, says Travis Fisher, director of energy and environmental policy studies at the Cato Institute.

And that brings a cumbersome level of red tape. “The rapid pace of innovation means the AI revolution won’t wait for multi-year permitting fights, cost-of-service hearings held by regulators, or planning processes built for the analog era,” Fisher pointed out last year in an article co-written by Cato’s Jennifer Huddleston. “And yet those are the structures that still govern electricity in much of the country. Building a new transmission line in the US now takes about 10 years, while generation projects spend multiple years stuck in interconnection queues, with more than 2,600 gigawatts of capacity now in queues nationwide.”

The DATA Act would lower the level of regulatory intrusion for enclosed systems that don’t connect to the grid. “It just serves data centers that are probably going to be clustered around it without taking electricity supply off the market for Arkansas families and businesses,” Cotton told the Arkansas Democrat-Gazette.

Keep reading

AI Shows Symptoms of Anxiety, Trauma, PTSD – And It’s Ruining Your Mental Health Too

Grok, Gemini and ChatGPT exhibit symptoms of poor mental health, according to a new study that put various AI models through weeks of therapy-style questioning. Some are now curious about “AI mental health”, but the real warning here is about how unstable these systems – which are already being used by one in three UK adults for mental health support – become in emotionally charged conversations. Millions of people are turning to AI as replacement therapists, and in the last year alone we’ve seen a spike in lawsuits connecting chatbot interactions with self-harm and suicide cases in vulnerable users.

The emerging picture is not that machines are suffering or mentally unwell, but that a product being used for mental-health support is fundamentally misleading users, escalating and reinforcing dangerous thoughts.

Keep reading

Quantum cameras could remake space-based intelligence

Can quantum physics enable better, cheaper, faster satellite photos? In a month or two, a startup will test a “quantum camera” aboard an orbital telescope. If it works, it could slash the cost of missile defenses and give smaller NATO allies and partners spy-satellite capabilities that were once exclusive to major powers.

Funded in part by NASA and DARPA, the Boston-based Diffraqtion is testing a radically different way to make images from photons.

You might think that the cameras on the world’s most expensive satellites are fundamentally different from the camera your grandfather used to shoot old movies. But whether using chemicals and paper or chargeable transistors on a circuit, the process of deriving images from the behavior of photons has changed little in more than a century. That is one reason why space-based image collection—especially at high resolution—is incredibly expensive.

It’s also why Johannes Galatsanos, Diffraqtion’s co-founder and CEO, uses the term “quantum camera” rather than “photography.”

“You basically have light coming through a lens; it hits a sensor, and then that sensor takes a JPEG, an image, and then you can view it… or you can run AI on top, right, and detect things,” Galatsanos said. “Whether in space with high-resolution digital cameras or old-fashioned pinhole cameras, that process hasn’t [changed].”

That traditional method limits what can effectively be photographed because of diffraction, the spreading of light waves as they pass through an aperture. It’s also a reason why high-resolution imaging satellites, like the WorldView-3, are large and heavy: like a telescope, they are mostly glass lenses and empty space. This is a reason why launches cost an average of about $50 million per satellite, and why only a few countries have access to high-resolution satellite imagery.
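The constraint at work is the classic Rayleigh diffraction limit: a telescope’s angular resolution scales with wavelength divided by aperture diameter, so sharper pictures demand bigger, heavier glass. A minimal back-of-the-envelope sketch in Python (WorldView-3’s roughly 1.1 m aperture and ~617 km orbit are publicly listed figures, used here as assumptions):

```python
def ground_resolution_m(aperture_m: float, altitude_m: float,
                        wavelength_m: float = 550e-9) -> float:
    """Smallest resolvable ground feature for a diffraction-limited
    telescope, via the Rayleigh criterion: theta ~ 1.22 * lambda / D."""
    theta_rad = 1.22 * wavelength_m / aperture_m  # angular resolution, radians
    return theta_rad * altitude_m                 # projected onto the ground

# WorldView-3-like specs (assumptions): 1.1 m aperture, 617 km orbit,
# imaging green light at ~550 nm.
print(f"{ground_resolution_m(1.1, 617_000):.2f} m")   # ~0.38 m
# Halving the aperture roughly doubles (worsens) the resolvable size:
print(f"{ground_resolution_m(0.55, 617_000):.2f} m")  # ~0.75 m
```

The arithmetic shows why aperture, and therefore mass, is the cost driver Diffraqtion is trying to design around.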

Quantum science opens the possibility of collecting images using sensors that don’t require the same dense, heavy components. One of Diffraqtion’s cameras is the size of a small suitcase, launchable for just half a million dollars.

That just might be the key to shooting down highly maneuverable hypersonic missiles, as envisioned by the White House’s Golden Dome effort. The method proposed by Diffraqtion might lower the cost of the imaging systems on space-based interceptors, or even reduce the number needed to do the job.

“You have more area coverage, you can look at more targets at the same time, and so on,” said Galatsanos.

The idea effectively reverses the process of deriving an image from photonic data. But in quantum science, the simple act of observing quantum behaviors changes them. That’s useful for things like quantum encryption because it means that the message changes—obviously so—when intercepted. But it is also what makes quantum “photography” impossible.

Keep reading

When Physicians Are Replaced with a Protocol

My experience in medicine allows me to distinguish between genuine innovation and subtle reclassification that fundamentally alters practice while appearing unchanged. Artificial intelligence has recently attracted considerable attention, including the widely circulated assertion that AI has been “legally authorized to practice medicine” in the United States. Interpreted literally, this claim is inaccurate. No medical board has licensed a machine. No algorithm has sworn an oath, accepted fiduciary duty, or assumed personal liability for patient harm. No robot physician is opening a clinic, billing insurers, or standing before a malpractice jury.

However, stopping at this observation overlooks the broader issue. Legal concepts of liability are currently being redefined, often without public awareness.

A significant transformation is underway, warranting more than either reflexive dismissal or uncritical technological enthusiasm. The current development is not the licensure of artificial intelligence as a physician, but rather the gradual erosion of medicine’s core boundary: the intrinsic link between clinical judgment and human accountability. Clinical judgment involves making informed decisions tailored to each patient’s unique needs and circumstances, requiring empathy, intuition, and a deep understanding of medical ethics.

Human accountability refers to the responsibility healthcare providers assume for these decisions and their outcomes. This erosion is not the result of dramatic legislation or public debate, but occurs quietly through pilot programs, regulatory reinterpretations, and language that intentionally obscures responsibility. Once this boundary dissolves, medicine is transformed in ways that are difficult to reverse.

The main concern isn’t whether AI can refill prescriptions or spot abnormal lab results. Medicine has long used tools, and healthcare providers generally welcome help that reduces administrative tasks or improves pattern recognition. The real issue is whether medical judgment—deciding on the right actions, patients, and risks—can be viewed as a computer-generated outcome separated from moral responsibility. Historically, efforts to disconnect judgment from accountability have often caused harm without taking ownership.

Recent developments clarify the origins of current confusion. In several states, limited pilot programs now allow AI-driven systems to assist with prescription renewals for stable chronic conditions under narrowly defined protocols. At the federal level, proposed legislation has considered whether artificial intelligence might qualify as a “practitioner” for specific statutory purposes, provided it is appropriately regulated. These initiatives are typically presented as pragmatic responses to physician shortages, access delays, and administrative inefficiencies. While none explicitly designates AI as a physician, collectively they normalize the more concerning premise that medical actions can occur without a clearly identifiable human decision-maker.

In practice, this distinction is fundamental. Medicine is defined not by the mechanical execution of tasks, but by the assignment of responsibility when outcomes are unfavorable. Writing a prescription is straightforward; accepting responsibility for its consequences—particularly when considering comorbidities, social context, patient values, or incomplete information—is far more complex. Throughout my career, this responsibility has continuously resided with a human who could be questioned, challenged, corrected, and held accountable. When Dr. Smith makes an error, the family knows whom to contact, ensuring a direct line to human accountability. No algorithm, regardless of sophistication, can fulfill this role.

The primary risk is not technological, but regulatory and philosophical. This transition represents a shift from virtue ethics to proceduralism. When lawmakers and institutions redefine medical decision-making as a function of systems rather than personal acts, the moral framework of medicine changes. Accountability becomes diffuse, harm is more difficult to attribute, and responsibility shifts from clinicians to processes, from judgment to protocol adherence. When errors inevitably occur, the prevailing explanation becomes that ‘the system followed established guidelines.’ Recognizing this transition clarifies the shift from individualized ethical decision-making to mechanized procedural compliance.

Keep reading

UK to bring into force law to tackle Grok AI deepfakes this week

The UK will bring into force a law which will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk’s Grok AI chatbot.

Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply the tools designed to create such images.

Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person’s consent, were not “harmless images” but “weapons of abuse”.

The BBC has approached X for comment. It previously said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

It comes hours after Ofcom announced it was launching an investigation into X over “deeply concerning reports” about Grok altering images of people.

If found to have broken the law, Ofcom can potentially issue X with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.
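To make the “whichever is greater” rule concrete, here is a minimal sketch of the penalty ceiling; the revenue figure passed in is purely hypothetical:

```python
def max_osa_fine_gbp(worldwide_revenue_gbp: float) -> float:
    """Ceiling on an Ofcom Online Safety Act fine: the greater of
    GBP 18 million or 10% of worldwide revenue."""
    return max(18_000_000.0, 0.10 * worldwide_revenue_gbp)

# Hypothetical revenue figure, for illustration only:
print(f"£{max_osa_fine_gbp(2_500_000_000):,.0f}")  # £250,000,000
```

For any company with worldwide revenue above £180 million, the 10% prong is the binding one.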

And if X does not comply, Ofcom can seek a court order to force internet service providers to block access to the site in the UK altogether.

In a statement, Kendall urged the regulator not to take “months and months” to conclude its investigation, and demanded it set out a timeline “as soon as possible”.

It is currently illegal to share deepfakes of adults in the UK, but legislation in the Data (Use and Access) Act, which would make it a criminal offence to create or request them, has not been enforced until now, despite passing in June 2025.

Last week, campaigners accused the government of dragging its heels on implementing that law.

“Today I can announce to the House that this offence will be brought into force this week,” Kendall told MPs.

In addition to the Data Act, Kendall said she would also make it a “priority offence” in the Online Safety Act.

“The content which has circulated on X is vile. It’s not just an affront to decent society, it is illegal,” she said.

“Let me be crystal clear – under the Online Safety Act, sharing intimate images of people without their consent, or threatening to share them, including pictures of people in their underwear, is a criminal offence for individuals and for platforms.

“This means individuals are committing a criminal offence if they create or seek to create such content including on X, and anyone who does this should expect to face the full extent of the law.”

Keep reading

Malaysia, Indonesia Block Musk’s Grok Amid Explicit Content Scrutiny

Malaysia and Indonesia have announced a decision to block access to Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, amid scrutiny over the tool’s use to generate sexually explicit images, including ones that appear to depict minors.

The coordinated actions by the two Southeast Asian nations mark the strongest regulatory response yet against Grok, as governments across Europe and North America press X and xAI to explain how the system allowed the creation of sexualized and manipulated images.

In Indonesia, the Ministry of Communication and Digital Affairs said on Jan. 10 that it had temporarily suspended access to Grok to protect the public from “the risk posed by fake pornographic content” generated using artificial intelligence. The ministry said the decision was aimed at safeguarding women, children, and society at large.

“The Government considers the practice of non-consensual sexual deepfakes to be a serious violation of human rights, dignity, and the security of citizens in the digital space,” Communications and Digital Affairs Minister Meutya Hafid said in a statement.

The ministry also said it had requested representatives from X to clarify the negative impacts linked to Grok use.

Indonesia’s communications ministry said the temporary suspension was carried out under its existing regulatory authority. It cited a 2020 regulation governing private electronic system operators, which requires platforms to ensure their systems do not host or facilitate prohibited electronic content.

Keep reading

White House Amplifies Shocking Claims Of US Super Soldiers Deployed In Maduro Raid

White House Spokeswoman Karoline Leavitt amplified claims that American special forces “super-soldiers” deployed advanced weaponry during the extraction of former Venezuelan leader Nicolás Maduro.

Leavitt reposted an alleged account from a Venezuelan security guard at Maduro’s compound describing what happened when Delta Force operators descended from helicopters in pitch-black conditions. This account was originally posted on X by California-based political activist Mike Netter, who is seeking to recall left-wing Governor Gavin Newsom.

“On the day of the operation, we didn’t hear anything coming. We were on guard, but suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions. We didn’t know how to react,” the security guard at Maduro’s compound said. This account was considered credible enough for Leavitt to repost.

Here’s the full account from the security guard, which reads as though Venezuelan forces were simply unable to comprehend the modern battlefield, where drones, sonic weapons, and, we’re sure, insane helmet-mounted optics with AI made for an unlevel playing field. As the guard put it: “Yes, but it was a massacre. We were hundreds, but we had no chance. They were shooting with such precision and speed… it seemed like each soldier was firing 300 rounds per minute. We couldn’t do anything.”

Full account:

This account from a Venezuelan security guard loyal to Nicolás Maduro is absolutely chilling—and it explains a lot about why the tone across Latin America suddenly changed.

Security Guard: On the day of the operation, we didn’t hear anything coming. We were on guard, but suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions. We didn’t know how to react.

Interviewer: So what happened next? How was the main attack?

Security Guard: After those drones appeared, some helicopters arrived, but there were very few. I think barely eight helicopters. From those helicopters, soldiers came down, but a very small number. Maybe twenty men. But those men were technologically very advanced. They didn’t look like anything we’ve fought against before.

Interviewer: And then the battle began?

Security Guard: Yes, but it was a massacre. We were hundreds, but we had no chance. They were shooting with such precision and speed… it seemed like each soldier was firing 300 rounds per minute. We couldn’t do anything.

Interviewer: And your own weapons? Didn’t they help?

Security Guard: No help at all. Because it wasn’t just the weapons. At one point, they launched something—I don’t know how to describe it… it was like a very intense sound wave. Suddenly I felt like my head was exploding from the inside. We all started bleeding from the nose. Some were vomiting blood. We fell to the ground, unable to move.

Interviewer: And your comrades? Did they manage to resist?

Security Guard: No, not at all. Those twenty men, without a single casualty, killed hundreds of us. We had no way to compete with their technology, with their weapons. I swear, I’ve never seen anything like it. We couldn’t even stand up after that sonic weapon or whatever it was.

Interviewer: So do you think the rest of the region should think twice before confronting the Americans?

Security Guard: Without a doubt. I’m sending a warning to anyone who thinks they can fight the United States. They have no idea what they’re capable of. After what I saw, I never want to be on the other side of that again. They’re not to be messed with.

Interviewer: And now that Trump has said Mexico is on the list, do you think the situation will change in Latin America?

Security Guard: Definitely. Everyone is already talking about this. No one wants to go through what we went through. Now everyone thinks twice. What happened here is going to change a lot of things, not just in Venezuela but throughout the region.

Keep reading

AI FLOP: Dutch Court Annuls Couple’s Marriage Over ChatGPT Speech

It may have been artificial, but in this case, it was not intelligent.

We live in a world where the power of Artificial Intelligence is starting to impact many aspects of our lives – and not always for the best.

Case in point: a Dutch couple had their marriage annulled after the person officiating used a ChatGPT-generated speech.

Yes, you read it right.

The AI speech was intended to be ‘playful’, but authorities decided that ‘it failed to meet legal requirements.’

This was announced in a court ruling published this week.

Reuters reported:

“The pair from the city of Zwolle, whose names were redacted from the January 5 decision under Dutch privacy rules, argued that they had intended to marry regardless of whether the right wording was used when they took their vows.

According to the decision, the person officiating their ceremony last April 19 asked whether they would ‘continue supporting each other, teasing each other and embracing each other, even when life gets difficult’.”

Keep reading

Musk: AI Will Make Cash Worthless, Work Optional, Retirement-saving Obsolete — and More

If “work ennobles man,” as the saying goes, are we headed for a very ignoble future? If “cash is king” today, what will reign tomorrow? If an abundance of the material can bury the spiritual, are we headed for an ever-more intensified secularism?

These questions could and should be asked with a prediction billionaire industrialist Elon Musk recently made.

Our not-too-distant future is one, he says, in which cash will be worthless and work merely an option. Why, Musk adds, there may not even be a reason to save for retirement. How come?

Artificial intelligence (AI) and robotics will in a decade or two, he states, deliver a world of mechanical slaves that will satisfy every human need and want. In fact, the only limit to the abundance might be energy constraints and raw materials’ finiteness.

The Ant and The Grasshopper — Mr. Hopper’s Time Has Come?

Reporting on the story earlier this week, The Daily Overview wrote:

Musk has moved beyond warning that AI will disrupt jobs and is now arguing that it will underwrite a new baseline of prosperity. As Tesla CEO, he has said that advanced systems will create a kind of universal high income that makes traditional saving less important, because machines will be able to produce almost everything people need with minimal human labor. In his view, the combination of AI and robotics [AI-Bot] will eliminate poverty by driving the cost of goods and services toward zero….

He has gone further, arguing that as AI systems scale, money itself will soon be useless in the way people currently understand it. In one account, the argument is framed explicitly as “According to Elon Musk, Money Will Soon Be Useless, Why Does He Predict the End of Poverty,” with Musk contending that AI and robotics will become the backbone of a utopian society where scarcity is engineered away and financial incentives lose their central role. That framing captures his claim that the same technologies that threaten existing jobs could, if managed correctly, also dismantle material deprivation….

This may sound fanciful to some. But the only real question is whether we’ll destroy ourselves, or whether AI will, before or soon after this technology’s full flowering. What’s for certain is that if we don’t, AI-Bot will eventually be able to perform every or virtually every job. Why, need a plumber? A dexterous AI android may be repairing your pipes.

Keep reading