Quantum cameras could remake space-based intelligence

Can quantum physics enable better, cheaper, faster satellite photos? In a month or two, a startup will test a “quantum camera” aboard an orbital telescope. If it works, it could slash the cost of missile defenses and give smaller NATO allies and partners spy-satellite capabilities that were once exclusive to major powers.

Funded in part by NASA and DARPA, the Boston-based startup Diffraqtion is testing a radically different way to make images from photons.

You might think that the cameras on the world’s most expensive satellites are fundamentally different from the one your grandfather used to shoot old movies. But whether using chemicals and paper or charge-collecting sensors on a chip, the process of deriving images from the behavior of photons has changed little in more than a century. That is one reason why space-based image collection—especially at high resolution—is incredibly expensive.

It’s also why Johannes Galatsanos, Diffraqtion’s co-founder and CEO, uses the term “quantum camera” rather than “photography.”

“You basically have light coming through a lens; it hits a sensor, and then that sensor takes a JPEG, an image, and then you can view it… or you can run AI on top, right, and detect things,” Galatsanos said. “Whether in space with high-resolution digital cameras or old-fashioned pinhole cameras, that process hasn’t [changed].”

That traditional method limits what can effectively be photographed because of diffraction, the spreading of light waves as they pass through an aperture. It’s also a reason why high-resolution imaging satellites, like the WorldView-3, are large and heavy: like a telescope, they are mostly glass lenses and empty space. This is a reason why launches cost an average of about $50 million per satellite, and why only a few countries have access to high-resolution satellite imagery.
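The link between aperture size and achievable resolution can be sketched with the Rayleigh criterion. The numbers below are illustrative assumptions, not figures from the article: a WorldView-3-class telescope with a roughly 1.1-meter aperture in a roughly 617-kilometer orbit, imaging green light at 550 nanometers.

```python
def ground_resolution_m(aperture_m: float, wavelength_m: float, altitude_m: float) -> float:
    """Rayleigh diffraction limit projected onto the ground.

    theta ~ 1.22 * wavelength / aperture is the smallest resolvable
    angle; multiplying by orbital altitude gives the smallest
    resolvable ground distance for a nadir-pointing telescope.
    """
    theta = 1.22 * wavelength_m / aperture_m  # radians
    return theta * altitude_m

# Assumed, WorldView-3-like parameters: 1.1 m aperture, 617 km
# altitude, 550 nm (green) light.
res = ground_resolution_m(1.1, 550e-9, 617e3)
print(f"{res:.2f} m")  # roughly 0.38 m per resolvable ground element
```

Note the scaling the formula implies: halving the aperture doubles the smallest resolvable ground distance, which is why sharper imagery has historically meant bigger, heavier optics.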

Quantum science opens the possibility of collecting images using sensors that don’t require the same dense, heavy components. One of Diffraqtion’s cameras is the size of a small suitcase, launchable for just half a million dollars.

That just might be the key to shooting down highly maneuverable hypersonic missiles, as envisioned by the White House’s Golden Dome effort. The method proposed by Diffraqtion might lower the cost of the imaging systems on space-based interceptors, or even reduce the number needed to do the job.

“You have more area coverage, you can look at more targets at the same time, and so on,” said Galatsanos.

The idea effectively reverses the process of deriving an image from photonic data. But in quantum science, the simple act of observing quantum behaviors changes them. That’s useful for things like quantum encryption because it means that the message changes—obviously so—when intercepted. But it is also what makes quantum “photography” so challenging.

Keep reading

When Physicians Are Replaced with a Protocol

My experience in medicine allows me to distinguish between genuine innovation and subtle reclassification that fundamentally alters practice while appearing unchanged. Artificial intelligence has recently attracted considerable attention, including the widely circulated assertion that AI has been “legally authorized to practice medicine” in the United States. Interpreted literally, this claim is inaccurate. No medical board has licensed a machine. No algorithm has sworn an oath, accepted fiduciary duty, or assumed personal liability for patient harm. No robot physician is opening a clinic, billing insurers, or standing before a malpractice jury.

However, stopping at this observation overlooks the broader issue. Legal concepts of liability are currently being redefined, often without public awareness.

A significant transformation is underway, warranting more than either reflexive dismissal or uncritical technological enthusiasm. The current development is not the licensure of artificial intelligence as a physician, but rather the gradual erosion of medicine’s core boundary: the intrinsic link between clinical judgment and human accountability. Clinical judgment involves making informed decisions tailored to each patient’s unique needs and circumstances, requiring empathy, intuition, and a deep understanding of medical ethics.

Human accountability refers to the responsibility healthcare providers assume for these decisions and their outcomes. This erosion is not the result of dramatic legislation or public debate, but occurs quietly through pilot programs, regulatory reinterpretations, and language that intentionally obscures responsibility. Once this boundary dissolves, medicine is transformed in ways that are difficult to reverse.

The main concern isn’t whether AI can refill prescriptions or spot abnormal lab results. Medicine has long used tools, and healthcare providers generally welcome help that reduces administrative tasks or improves pattern recognition. The real issue is whether medical judgment—deciding on the right actions, patients, and risks—can be viewed as a computer-generated outcome separated from moral responsibility. Historically, efforts to disconnect judgment from accountability have often caused harm without taking ownership.

Recent developments clarify the origins of current confusion. In several states, limited pilot programs now allow AI-driven systems to assist with prescription renewals for stable chronic conditions under narrowly defined protocols. At the federal level, proposed legislation has considered whether artificial intelligence might qualify as a “practitioner” for specific statutory purposes, provided it is appropriately regulated. These initiatives are typically presented as pragmatic responses to physician shortages, access delays, and administrative inefficiencies. While none explicitly designates AI as a physician, collectively they normalize the more concerning premise that medical actions can occur without a clearly identifiable human decision-maker.

In practice, this distinction is fundamental. Medicine is defined not by the mechanical execution of tasks, but by the assignment of responsibility when outcomes are unfavorable. Writing a prescription is straightforward; accepting responsibility for its consequences—particularly when considering comorbidities, social context, patient values, or incomplete information—is far more complex. Throughout my career, this responsibility has continuously resided with a human who could be questioned, challenged, corrected, and held accountable. When Dr. Smith makes an error, the family knows whom to contact, ensuring a direct line to human accountability. No algorithm, regardless of sophistication, can fulfill this role.

The primary risk is not technological, but regulatory and philosophical. This transition represents a shift from virtue ethics to proceduralism. When lawmakers and institutions redefine medical decision-making as a function of systems rather than personal acts, the moral framework of medicine changes. Accountability becomes diffuse, harm is more difficult to attribute, and responsibility shifts from clinicians to processes, from judgment to protocol adherence. When errors inevitably occur, the prevailing explanation becomes that ‘the system followed established guidelines.’ Recognizing this transition clarifies the shift from individualized ethical decision-making to mechanized procedural compliance.

Keep reading

UK to bring into force law to tackle Grok AI deepfakes this week

The UK will bring into force a law which will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk’s Grok AI chatbot.

Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply the tools designed to create such images.

Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person’s consent, were not “harmless images” but “weapons of abuse”.

The BBC has approached X for comment. It previously said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

It comes hours after Ofcom announced it was launching an investigation into X over “deeply concerning reports” about Grok altering images of people.

If found to have broken the law, Ofcom can potentially issue X with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.

And if X does not comply, Ofcom can seek a court order to force internet service providers to block access to the site in the UK altogether.

In a statement, Kendall urged the regulator not to take “months and months” to conclude its investigation, and demanded it set out a timeline “as soon as possible”.

It is currently illegal to share deepfakes of adults in the UK, but legislation in the Data (Use and Access) Act which would make it a criminal offence to create or request them had not been brought into force until now, despite passing in June 2025.

Last week, campaigners accused the government of dragging its heels on implementing that law.

“Today I can announce to the House that this offence will be brought into force this week,” Kendall told MPs.

In addition to the Data Act, Kendall said she would also make it a “priority offence” in the Online Safety Act.

“The content which has circulated on X is vile. It’s not just an affront to decent society, it is illegal,” she said.

“Let me be crystal clear – under the Online Safety Act, sharing intimate images of people without their consent, or threatening to share them, including pictures of people in their underwear, is a criminal offence for individuals and for platforms.

“This means individuals are committing a criminal offence if they create or seek to create such content including on X, and anyone who does this should expect to face the full extent of the law.”

Keep reading

Malaysia, Indonesia Block Musk’s Grok Amid Explicit Content Scrutiny

Malaysia and Indonesia have announced a decision to block access to Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, amid scrutiny over the tool’s use to generate sexually explicit images, including ones that appear to depict minors.

The coordinated actions by the two Southeast Asian nations mark the strongest regulatory response yet against Grok, as governments across Europe and North America press X and xAI to explain how the system allowed the creation of sexualized and manipulated images.

In Indonesia, the Ministry of Communication and Digital Affairs said on Jan. 10 that it had temporarily suspended access to Grok to protect the public from “the risk posed by fake pornographic content” generated using artificial intelligence. The ministry said the decision was aimed at safeguarding women, children, and society at large.

“The Government considers the practice of non-consensual sexual deepfakes to be a serious violation of human rights, dignity, and the security of citizens in the digital space,” Communications and Digital Affairs Minister Meutya Hafid said in a statement.

The ministry also said it had requested representatives from X to clarify the negative impacts linked to Grok use.

Indonesia’s communications ministry said the temporary suspension was carried out under its existing regulatory authority. It cited a 2020 regulation governing private electronic system operators, which requires platforms to ensure their systems do not host or facilitate prohibited electronic content.

Keep reading

White House Amplifies Shocking Claims Of US Super Soldiers Deployed In Maduro Raid

White House Spokeswoman Karoline Leavitt amplified claims that American special forces super-soldiers deployed advanced weaponry during the extraction of former Venezuelan leader Nicolás Maduro.

Leavitt reposted an alleged account from a Venezuelan security guard at Maduro’s compound describing what happened when Delta Force operators descended from helicopters in pitch-black conditions. This account was originally posted on X by California-based political activist Mike Netter, who is seeking to recall left-wing Governor Gavin Newsom.

“On the day of the operation, we didn’t hear anything coming. We were on guard, but suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions. We didn’t know how to react,” the security guard at Maduro’s compound said. This account was considered credible enough for Leavitt to repost.

In the full account, the guard describes Venezuelan forces unable to comprehend the modern battlefield, where drones, sonic weapons, and, we’re sure, insane helmet-mounted optics with AI made for an unlevel playing field: “Yes, but it was a massacre. We were hundreds, but we had no chance. They were shooting with such precision and speed… it seemed like each soldier was firing 300 rounds per minute. We couldn’t do anything.”

Full account:

This account from a Venezuelan security guard loyal to Nicolás Maduro is absolutely chilling—and it explains a lot about why the tone across Latin America suddenly changed.

Security Guard: On the day of the operation, we didn’t hear anything coming. We were on guard, but suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions. We didn’t know how to react.

Interviewer: So what happened next? How was the main attack?

Security Guard: After those drones appeared, some helicopters arrived, but there were very few. I think barely eight helicopters. From those helicopters, soldiers came down, but a very small number. Maybe twenty men. But those men were technologically very advanced. They didn’t look like anything we’ve fought against before.

Interviewer: And then the battle began?

Security Guard: Yes, but it was a massacre. We were hundreds, but we had no chance. They were shooting with such precision and speed… it seemed like each soldier was firing 300 rounds per minute. We couldn’t do anything.

Interviewer: And your own weapons? Didn’t they help?

Security Guard: No help at all. Because it wasn’t just the weapons. At one point, they launched something—I don’t know how to describe it… it was like a very intense sound wave. Suddenly I felt like my head was exploding from the inside. We all started bleeding from the nose. Some were vomiting blood. We fell to the ground, unable to move.

Interviewer: And your comrades? Did they manage to resist?

Security Guard: No, not at all. Those twenty men, without a single casualty, killed hundreds of us. We had no way to compete with their technology, with their weapons. I swear, I’ve never seen anything like it. We couldn’t even stand up after that sonic weapon or whatever it was.

Interviewer: So do you think the rest of the region should think twice before confronting the Americans?

Security Guard: Without a doubt. I’m sending a warning to anyone who thinks they can fight the United States. They have no idea what they’re capable of. After what I saw, I never want to be on the other side of that again. They’re not to be messed with.

Interviewer: And now that Trump has said Mexico is on the list, do you think the situation will change in Latin America?

Security Guard: Definitely. Everyone is already talking about this. No one wants to go through what we went through. Now everyone thinks twice. What happened here is going to change a lot of things, not just in Venezuela but throughout the region.

Keep reading

AI FLOP: Dutch Court Annuls Couple’s Marriage Over ChatGPT Speech

It may have been artificial, but in this case, it was not intelligent.

We live in a world where the power of Artificial Intelligence is starting to impact many aspects of our lives – and not always for the best.

Case in point: a Dutch couple had their marriage annulled after the person officiating used a ChatGPT-generated speech.

Yes, you read it right.

The AI speech was intended to be ‘playful,’ but authorities decided that ‘it failed to meet legal requirements.’

This was announced in a court ruling published this week.

Reuters reported:

“The pair from the city of Zwolle, whose names were redacted from the January 5 decision under Dutch privacy rules, argued that they had intended to marry regardless of whether the right wording was used when they took their vows.

According to the decision, the person officiating their ceremony last April 19 asked whether they would ‘continue supporting each other, teasing each other and embracing each other, even when life gets difficult’.”

Keep reading

Musk: AI Will Make Cash Worthless, Work Optional, Retirement-saving Obsolete — and More

If “work ennobles man,” as the saying goes, are we headed for a very ignoble future? If “cash is king” today, what will reign tomorrow? If an abundance of the material can bury the spiritual, are we headed for an ever-more intensified secularism?

These questions could and should be asked with a prediction billionaire industrialist Elon Musk recently made.

Our not-too-distant future is one, he says, in which cash will be worthless and work merely an option. Why, Musk adds, there may not even be a reason to save for retirement. How come?

Artificial intelligence (AI) and robotics will, he states, deliver within a decade or two a world of mechanical slaves that will satisfy every human need and want. In fact, the only limits to the abundance might be energy constraints and the finite supply of raw materials.

The Ant and The Grasshopper — Mr. Hopper’s Time Has Come?

Reporting on the story earlier this week, The Daily Overview wrote:

Musk has moved beyond warning that AI will disrupt jobs and is now arguing that it will underwrite a new baseline of prosperity. As Tesla CEO, he has said that advanced systems will create a kind of universal high income that makes traditional saving less important, because machines will be able to produce almost everything people need with minimal human labor. In his view, the combination of AI and robotics [AI-Bot] will eliminate poverty by driving the cost of goods and services toward zero….

He has gone further, arguing that as AI systems scale, money itself will soon be useless in the way people currently understand it. In one account, the argument is framed explicitly as “According to Elon Musk, Money Will Soon Be Useless, Why Does He Predict the End of Poverty,” with Musk contending that AI and robotics will become the backbone of a utopian society where scarcity is engineered away and financial incentives lose their central role. That framing captures his claim that the same technologies that threaten existing jobs could, if managed correctly, also dismantle material deprivation….

This may sound fanciful to some. But the only real question is whether we’ll destroy ourselves, or whether AI will, before or soon after this technology’s full flowering. What’s for certain is that if we don’t, AI-Bot will eventually be able to perform every or virtually every job. Why, need a plumber? A dexterous AI android may be repairing your pipes.

Keep reading

Starmer’s Looking for an Excuse to Ban X

Keir Starmer has signaled he is prepared to back regulatory action that could ultimately result in X being blocked in the UK.

The Prime Minister of the United Kingdom has suggested, more or less, that because Elon Musk’s AI chatbot Grok has been generating images of women and minors in bikinis, he’ll support going as far as hitting the kill switch and blocking access to the entire platform.

“The situation is disgraceful and disgusting,” Starmer said on Greatest Hits Radio, the station best known for playing ABBA and now, apparently, for frontline authoritarian tech policy announcements.

“X has got to get a grip of this, and Ofcom has our full support to take action…I’ve asked for all options to be on the table.”

“All options,” for those who don’t speak fluent Whitehall euphemism, now apparently include turning Britain’s digital infrastructure into a sort of beige North Korea, where a bunch of government bureaucrats, armed with nothing but Online Safety Act censorship law and the panic of a 90s tabloid, get to decide which speech the public is allowed to see.

Now, you might be wondering: Surely he’s bluffing? Oh no. According to Downing Street sources, they’re quite serious.

And they’ve even named the mechanism: the Online Safety Act, that cheery little piece of legislation that sounds like it’s going to help grandmothers avoid email scams, but actually gives Ofcom the power to block platforms, fine them into oblivion, or ban them entirely if they don’t comply with government censorship orders.

Killing X isn’t a new idea. You may remember Morgan McSweeney, Keir Starmer’s Chief of Staff, founded the Centre for Countering Digital Hate. In 2024, leaks revealed that the group was trying to “Kill Musk’s Twitter.”

Keep reading

The Venezuela Technocracy Connection

The US bombing of Venezuela and capture of Nicolás Maduro cannot be rationally explained as a drug enforcement operation, or even solely about recovering oil. The bigger picture is Technocracy.

In the early morning hours of January 3, 2026, the United States launched military strikes on Venezuela and captured President Nicolás Maduro and his wife, Cilia Flores. Maduro and Flores have since been transported to New York City to face charges relating to gun crimes and cocaine trafficking.

The move has divided the MAGA base—and the American public more generally—with a large portion of President Donald Trump’s base viewing it as a betrayal of the principles he claimed to champion. Specifically, Trump has claimed for years he would not start new wars of aggression.

While Trump has stated that taking out Maduro is not about launching new wars but instead a calculated attack to take out a man he blames for America’s fentanyl crisis, the facts tell another story.

Was Maduro’s Capture About Drug Trafficking?

In May 2025, the US Drug Enforcement Administration (DEA) released its 2025 National Drug Threat Assessment (NDTA). This report mentions Venezuela trafficking fentanyl to the US a total of zero times. Instead, it blames Mexican cartels for the manufacturing and trafficking of fentanyl. This should come as no surprise to anyone paying attention, as these facts are common knowledge among the US government and drug-trafficking researchers.

A second key point is that although Trump and neocon Secretary of State Marco Rubio have repeatedly sought to tie Maduro to drug cartels, there remains scant evidence for the claim.

The US government previously claimed Maduro was the head of the drug-trafficking group Cartel de los Soles (also known as the Cartel of the Suns). However, many skeptics have claimed the group doesn’t actually exist. During Trump’s first term, Maduro was indicted as the alleged leader of this cartel. In 2025, during his second term, Cartel de los Soles was officially designated a foreign terrorist organization.

However, when Maduro was brought to NYC and officially charged, the US Department of Justice dropped the allegations from their indictment. The lack of charges relating to Cartel de los Soles is a signal that the US government does not believe it has strong enough evidence to convict Maduro in court. Instead, they have changed their tune and are now claiming Maduro was involved in cocaine trafficking.

Keep reading

How A Techno-Optimist Became A Grave Skeptic

Before Covid, I would have described myself as a technological optimist. New technologies almost always arrive amid exaggerated fears. Railways were supposed to cause mental breakdowns, bicycles were thought to make women infertile or insane, and early electricity was blamed for everything from moral decay to physical collapse. Over time, these anxieties faded, societies adapted, and living standards rose. The pattern was familiar enough that artificial intelligence seemed likely to follow it: disruptive, sometimes misused, but ultimately manageable.

The Covid years unsettled that confidence—not because technology failed, but because institutions did.

Across much of the world, governments and expert bodies responded to uncertainty with unprecedented social and biomedical interventions, justified by worst-case models and enforced with remarkable certainty. Competing hypotheses were marginalized rather than debated. Emergency measures hardened into long-term policy. When evidence shifted, admissions of error were rare, and accountability rarer still. The experience exposed a deeper problem than any single policy mistake: modern institutions appear poorly equipped to manage uncertainty without overreach.

That lesson now weighs heavily on debates over artificial intelligence.

The AI Risk Divide

Broadly speaking, concern about advanced AI falls into two camps. One group—associated with thinkers like Eliezer Yudkowsky and Nate Soares—argues that sufficiently advanced AI is catastrophically dangerous by default. In their deliberately stark formulation, If Anyone Builds It, Everyone Dies, the problem is not bad intentions but incentives: competition ensures someone will cut corners, and once a system escapes meaningful control, intentions no longer matter.

A second camp, including figures such as Stuart Russell, Nick Bostrom, and Max Tegmark, also takes AI risk seriously but is more optimistic that alignment, careful governance, and gradual deployment can keep systems under human control.

Despite their differences, both camps converge on one conclusion: unconstrained AI development is dangerous, and some form of oversight, coordination, or restraint is necessary. Where they diverge is on feasibility and urgency. What is rarely examined, however, is whether the institutions expected to provide that restraint are themselves fit for the role.

Covid suggests reason for doubt.

Covid was not merely a public-health crisis; it was a live experiment in expert-driven governance under uncertainty. Faced with incomplete data, authorities repeatedly chose maximal interventions justified by speculative harms. Dissent was often treated as a moral failing rather than a scientific necessity. Policies were defended not through transparent cost-benefit analysis but through appeals to authority and fear of hypothetical futures.

This pattern matters because it reveals how modern institutions behave when stakes are framed as existential. Incentives shift toward decisiveness, narrative control, and moral certainty. Error correction becomes reputationally costly. Precaution stops being a tool and becomes a doctrine.

Keep reading