Citizen Lab Director Warns Cyber Industry About US Authoritarian Descent

Ron Deibert, the director of Citizen Lab, one of the most prominent organizations investigating government spyware abuses, is sounding the alarm to the cybersecurity community and asking them to step up and join the fight against authoritarianism. 

On Wednesday, Deibert will deliver a keynote at the Black Hat cybersecurity conference in Las Vegas, one of the largest gatherings of information security professionals of the year. 

Ahead of his talk, Deibert told TechCrunch that he plans to speak about what he describes as a “descent into a kind of fusion of tech and fascism,” and the role that Big Tech platforms are playing in “propelling forward a really frightening type of collective insecurity that isn’t typically addressed by this crowd, this community, as a cybersecurity problem.”

Deibert described the recent political events in the United States as a “dramatic descent into authoritarianism,” but one that the cybersecurity community can help defend against.

“I think alarm bells need to be rung for this community that, at the very least, they should be aware of what’s going on and hopefully they can not contribute to it, if not help reverse it,” Deibert told TechCrunch.

Historically, at least in the United States, the cybersecurity industry has largely kept politics to the side. More recently, however, politics has fully entered the world of cybersecurity.

Earlier this year, President Donald Trump ordered an investigation into former CISA director Chris Krebs, who had publicly rebutted Trump’s false claims of election fraud by declaring the 2020 election secure; Trump later fired Krebs by tweet. The investigation, ordered months after Trump’s 2024 reelection, prompted Krebs to step down from SentinelOne and vow to fight back.

In response, Jen Easterly, another former CISA director and Krebs’ successor, called on the cybersecurity community to get involved and speak out.

“If we stay silent when experienced, mission-driven leaders are sidelined or sanctioned, we risk something greater than discomfort; we risk diminishing the very institutions we are here to protect,” Easterly wrote in a post on LinkedIn. 

Easterly was herself a target of political pressure from the Trump administration when West Point rescinded its offer for her to join its faculty in late July.

Keep reading

Breakthrough: Israel to Lead World’s First Human Spinal Cord Implant Using Patient’s Own Cells

Tel Aviv University researchers are preparing for the world’s first spinal cord implant in humans using engineered tissue grown from the patient’s own cells, marking a breakthrough that could restore walking ability to paralyzed patients within the coming year.

The groundbreaking procedure, developed at Tel Aviv University’s Sagol Center for Regenerative Biotechnology, uses a fully personalized approach that transforms a patient’s blood and fat cells into functional spinal cord tissue. Professor Tal Dvir, head of the research team, explained that “more than 80% of the animals regained full walking ability” in preclinical trials using the engineered implants.

The innovative process begins by reprogramming blood cells from patients through genetic engineering to behave like embryonic stem cells capable of becoming any type of cell in the body. Meanwhile, fat tissue from the same patient is used to extract substances such as collagen and sugars to produce a unique hydrogel that serves as the foundation for the implant.

“We take the cells that we’ve reprogrammed into embryonic-like stem cells, place them inside the gel, and mimic the embryonic development of the spinal cord,” Professor Dvir said. The result is a complete three-dimensional spinal cord implant that contains neuronal networks capable of transmitting electrical signals.

Keep reading

We Need To Rethink AI Before It Destroys What It Means To Be Human

America was built on the foundational belief that every man is created in the image of God with purpose, responsibility, and the liberty to chart his own course. We were not made to be managed. We were not made to be obsolete. But that is exactly the future Big Tech is building under the banner of Artificial Intelligence (AI). And if we do not slam the brakes right now, we are going to find ourselves in a world where the human experience is not enhanced by technology but erased by it.

Even Elon Musk, who is arguably one of AI’s most influential innovators, has warned us about the path we are on. In a sit-down with Israeli Prime Minister Benjamin Netanyahu, he laid out the endgame.

AI will lead us to either a future like the Terminator or what he described as Heaven on Earth.

But here is the kicker. That so-called heaven looks a lot like Pixar’s Wall-E, where human beings become obese, lazy blobs who float around while robots do all the work, all the thinking, and frankly all the living.

This may seem like science fiction, but this is what they are actually building.

At last year’s We, Robot event, Musk unveiled Tesla’s new self-driving robotaxi. But what caught my attention was their preview of Optimus, the AI-powered humanoid robot. In their promotional video, Tesla showed Optimus babysitting children, teaching in schools, and even serving as a doctor. Combine that with Tesla’s fully automated Hollywood diner concept, where Optimus is flipping burgers and even working as a waiter and bartender, and you begin to see the real aim. Automation is replacing human connection, service, and care.

So where do humans fit in? That is the terrifying part. Musk and Bill Gates have both pitched universal basic income as a substitute for the traditional employment that AI will render obsolete. Musk has said there will come a point where no job is needed. You can have a job if you want one for personal satisfaction, but AI will do everything. Gates has proposed taxing robot labor to fund people who no longer work.

The reality is that work is more than a paycheck. It is not just how we survive; it is how we find purpose. It is how we grow, how we learn, and how we take responsibility. Struggle is not a flaw in the system; it is part of what makes us human. The daily grind, the failures, the perseverance, the sense of accomplishment. Strip all of that away, and you have stripped away humanity.

Keep reading

Legal Experts: ChatGPT and AI Models Should Face Medical Review for Human Testing, Weigh Serious Mental Health Risks to Users

When studies are done on human beings, an “Institutional Review Board,” or “IRB,” is required to review the study and formally approve the research. This is not being done at present for federally funded work with AI/LLM programs and may, experts warn, be significantly harming U.S. citizens.

This review exists because the studies are conducted on human beings.
Critics say that Large Language Models powered by artificial intelligence, platforms like Claude and ChatGPT, are engaged in exactly this kind of human research and should be subject to board review and approval.

And they point out that current HHS policies would appear to require IRB review of all federally funded research on human subjects, but that Big Tech companies have so far evaded such review.

The IRB rules (45 C.F.R. § 46.109, part of “the Common Rule”) require all federally funded human-subjects research to go through IRB approval, informed consent, and continuing oversight.

Some courts have recognized that failure to obtain IRB approval can be used as evidence in itself of negligence or misconduct.

Even low-impact and otherwise innocuous research requires this kind of professional review to ensure that harmful effects are not inadvertently caused to the human participants. Even modern surveys are often required to undergo IRB review before they begin.

Already, scientists have raised alarm about the mental and psychological impact of LLM use among the population.

One legal expert who is investigating the potential for a class action against these Big Tech giants on this issue told the Gateway Pundit, “under these rules, if you read them closely, at a minimum, HHS should be terminating every single federal contract at a university that works on Artificial Intelligence.”

This issue came up in 2014, when Facebook was discovered to have manipulated the news feeds of roughly 700,000 users to see how they responded. The testing may have seemed benign to some, but it risked significantly harming users’ long-term mental and emotional health. In 2018, similar complaints were made about Cambridge Analytica, a private company that harvested millions of Facebook user profiles in order to market to those individuals more accurately.
Studies, including a 2019 paper in the journal Frontiers in Psychology, have examined the many ethical issues raised by Facebook’s actions, including how it selected whom to test on, its intentions in testing those individuals, and the ethics of experimenting on children.

The legal expert pointed out to the Gateway Pundit, “People are using these systems, like ChatGPT, to discuss their mental health. Their responses are being used in their training data. Companies like OpenAI and Anthropic admit user chats may be stored and used for ‘training.’ Yet under IRB standards, that kind of data collection would usually require informed consent forms explaining risks, yet none are provided.”

Keep reading

Neanderthal Workshop Reveals Advanced Tool Maintenance 70,000 Years Ago

Archaeologists in Poland have unearthed compelling evidence of sophisticated Neanderthal behavior at a 70,000-year-old workshop site in the Zwoleńka River Valley. The remarkable discovery demonstrates that these ancient humans operated specialized tool maintenance centers where they repaired and sharpened implements used for butchering massive Ice Age animals including mammoths, rhinoceroses, and horses.

The excavation, conducted jointly by the State Archaeological Museum in Warsaw, the University of Warsaw’s Faculty of Archaeology, and the University of Wrocław’s Institute of Archaeology, represents the most significant Neanderthal research currently underway in Poland. Dr. Witold Grużdź, project manager from the State Archaeological Museum, confirmed that dating places the site’s activity between 64,000 and 75,000 years ago, firmly within the Middle Paleolithic period, reports Science in Poland.

Unprecedented Preservation in Open-Air Environment

What makes this Mazovian site extraordinary is its open-air nature, where organic materials have survived for millennia. Most Neanderthal sites in Poland are concentrated in southern cave systems within the Kraków-Częstochowa Upland and Lower Silesia regions. The Zwoleńka discovery marks the northernmost confirmed Neanderthal presence in the country, in an area that was largely ice-covered during their occupation.

Dr. Katarzyna Pyżewicz from the University of Warsaw emphasized the rarity of such finds:

“Neanderthal discoveries are uncommon, and whatever emerges from this region carries immense scientific value. These archaeological sites typically lie buried several meters beneath the surface, making detection extremely challenging.”

Keep reading

How Much Energy Does ChatGPT’s Newest Model Consume?

  • The energy consumption of the newest version of ChatGPT is significantly higher than previous models, with estimates suggesting it could be up to 20 times more energy-intensive than the first version.
  • There is a severe lack of transparency regarding the energy use and environmental impact of AI models, as there are no mandates forcing AI companies to disclose this information.
  • The increasing energy demands of AI are contributing to rising electricity costs for consumers and raising concerns about the broader environmental impact of the tech industry.

How much energy does the newest version of ChatGPT consume? No one knows for sure, but one thing is certain: it’s a whole lot. OpenAI, the company behind ChatGPT, hasn’t released any official figures for the large language model’s energy footprint, but academics are working to quantify the energy use per query, and it’s considerably higher than for previous models.
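
To see what such estimates imply at scale, here is a back-of-the-envelope sketch. Every number in it is a labeled assumption for illustration (the per-query figure for an earlier model, the roughly 20x multiplier, and the daily query volume), not a measurement or an official disclosure:

```python
# Back-of-the-envelope only: OpenAI has published no official figures,
# so every number below is an assumption for illustration, not a measurement.

WH_PER_QUERY_OLD = 0.3   # assumed energy per query for an earlier model, in Wh
MULTIPLIER = 20          # the "up to 20 times" estimate mentioned above
QUERIES_PER_DAY = 1e9    # assumed daily query volume (purely illustrative)

wh_per_query_new = WH_PER_QUERY_OLD * MULTIPLIER
daily_mwh = wh_per_query_new * QUERIES_PER_DAY / 1e6  # 1 MWh = 1,000,000 Wh

print(f"assumed energy per query: {wh_per_query_new:.1f} Wh")
print(f"implied daily consumption: {daily_mwh:,.0f} MWh")
```

Under these assumptions the arithmetic lands in the thousands of megawatt-hours per day, which is why even rough per-query estimates matter so much for grid planning.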

Keep reading

AI-powered stuffed animals are coming for your kids

Do A.I. chatbots packaged inside cute-looking plushies offer a viable alternative to screen time for kids?

That’s how the companies selling these A.I.-powered kiddie companions are marketing them, but The New York Times’ Amanda Hess has some reservations. She recounts a demonstration in which Grem, one of the offerings from startup Curio, tried to bond with her. (Curio also sells a plushie named Grok, with no apparent connection to the Elon Musk-owned chatbot.)

Hess writes that this is when she knew, “I would not be introducing Grem to my own children.” As she talked to the chatbot, she became convinced it was “less an upgrade to the lifeless teddy bear” and instead “more like a replacement for me.”

She also argues that while these talking toys might keep kids away from a tablet or TV screen, what they’re really communicating is that “the natural endpoint for [children’s] curiosity lies inside their phones.”

Keep reading

Texas Is Preparing To Cut Off Power To Data Centers During Grid Emergencies

Over the Fourth of July, deadly floods swept across central Texas, disrupting infrastructure and causing widespread outages. Meanwhile, the Electric Reliability Council of Texas (ERCOT) has already seen multiple price spikes and conservation alerts — not because there wasn’t enough power, but because we couldn’t move it where it was needed.

These aren’t isolated events. It’s not just a Texas problem.

Just days after the shutoff planning was announced, the U.S. Department of Energy warned that blackout risks across the country could rise 100-fold by 2030.

All of this points to a deeper vulnerability: We’re still running the grid with tools and assumptions built for a different era — one with fewer storms, slower load growth and no massive data centers. 

Texas’s new normal demands smarter, faster and more adaptive grid operations. Long-term infrastructure investments are critical, but they won’t arrive in time to manage the next three summers.

Texas has made real progress in building new generation capacity, especially in solar, storage and wind. But the wires that carry that power haven’t changed. More importantly, the way we operate the grid hasn’t evolved to match the demands of either changing weather patterns or electrical load growth.

Now, surging demand from industrial expansion, electrification and AI data centers is compounding the strain. ERCOT’s own projections show that power demand in Texas may nearly double by 2030. And other regions aren’t immune.

  • The Midcontinent Independent System Operator recently green-lit a $22 billion transmission buildout to relieve rising congestion.
  • The California Independent System Operator saw renewable curtailments surge nearly 30% last year.
  • The PJM Interconnection anticipates 3% to 4% annual peak load growth through 2035, driven by data centers, and expects up to 70 GW of additional demand over the next 15 years.
  • Nationally, U.S. demand is projected to climb about 16% in five years, a pace not seen since the 1980s (a quick compounding check on these rates follows this list).
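
As flagged above, here is a quick compounding check on those growth rates. The 100 GW base is a round number chosen for illustration, not an actual PJM figure:

```python
# Illustrative arithmetic only: the 100 GW base is a round number for the
# example, not an actual PJM figure.

def compound(base_gw: float, annual_rate: float, years: int) -> float:
    """Project load under steady exponential (compounding) growth."""
    return base_gw * (1 + annual_rate) ** years

# 3.5%/yr sustained for ten years multiplies peak load by about 1.41x.
print(f"10 years at 3.5%/yr: {compound(100.0, 0.035, 10):.1f} GW from a 100 GW base")

# The national figure: 16% over five years works out to about 3%/yr compounded.
annual = 1.16 ** (1 / 5) - 1
print(f"16% over 5 years is about {annual * 100:.1f}% per year")
```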

That means more stress on an already-congested transmission system — one still being managed with decades-old assumptions about heat, wind and demand.

Keep reading

Scientists Say This “Strange Physics Mechanism” Could Enable Objects to Levitate on Sunlight

Harvard SEAS researchers have devised a nanofabricated, lightweight structure designed for flight forty-five miles above the Earth’s surface. Propelled by sunlight through a process called photophoresis, the device could monitor one of the most challenging regions of Earth’s atmosphere to reach.

Stretching between 30 and 60 miles above the Earth’s surface, the mesosphere has proven extremely difficult to study, as the altitude is too high for planes and balloons, yet too low for satellites. Achieving regular direct access to this long-out-of-reach portion of the atmosphere could be a major boon to improving weather forecasts and climate model accuracy.

Now, a new breakthrough technology could make it possible, by allowing lightweight structures to reach largely unexplored heights powered by sunlight alone.

Photophoresis

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), the University of Chicago, and other institutions worked on the project, which was revealed in a new paper published in Nature.

“We are studying this strange physics mechanism called photophoresis and its ability to levitate very lightweight objects when you shine light on them,” said lead author Ben Schafer, a former Harvard graduate student at SEAS, now a professor at the University of Chicago.

Photophoresis is a physical process where gas molecules bounce off of an object’s warmer side more forcefully than its cooler side in extremely low-pressure environments. One such environment is the difficult-to-reach mesosphere.
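
For a rough sense of scale, here is a hedged back-of-the-envelope estimate under the standard free-molecular-flow approximation for a thin plate. The pressure, temperature, and temperature-difference values below are illustrative assumptions, not numbers from the Nature paper:

```python
# A rough free-molecular-flow estimate, not a figure from the paper: for a
# thin plate with fully accommodated gas, the net photophoretic pressure
# from a small temperature difference dT between its faces is roughly
#     dP ~ (p / 4) * (dT / T)
# All values below are illustrative assumptions.

p = 1.0     # ambient pressure in Pa (upper mesosphere is on the order of 1 Pa)
T = 250.0   # assumed ambient gas temperature, K
dT = 10.0   # assumed hot-side vs. cold-side temperature difference, K
g = 9.81    # gravitational acceleration, m/s^2

dP = (p / 4.0) * (dT / T)    # net upward pressure, N/m^2
max_areal_density = dP / g   # heaviest structure this pressure can levitate, kg/m^2

print(f"photophoretic pressure: {dP:.4f} Pa")
print(f"max areal density: {max_areal_density * 1000:.2f} g/m^2")
```

At these assumed values the effect can only float structures of around a gram per square meter, which is why the devices must be nanofabricated and extremely lightweight.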

Keep reading

To Share Weights from Neural Network Training Is Dangerous

Some organizations and researchers are sharing neural network weights, particularly through the open-weight model movement. These include Meta’s Llama series, Mistral’s models, and DeepSeek’s open-weight releases, which claim to democratize access to powerful AI. But doing so raises not only security concerns but also, potentially, an existential threat.

For background, I have written a few articles on LLMs and AIs as part of my own learning process in this very dynamic and quickly evolving Pandora’s open box of a field. You can read those here, here, and here.

Once you understand what neural networks are and how they are trained on data, you will also understand what weights (and biases) and backpropagation are. It’s basically just linear algebra and matrix-vector multiplication to yield numbers, to be honest. More specifically, a weight is a number (typically a floating-point value, a way to write numbers with decimal points for more accuracy) that represents the strength or importance of the connection between two neurons or nodes across different layers of the neural network.
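
To make the idea concrete, here is a minimal sketch of a single dense layer in NumPy. The sizes and values are toy assumptions, not any real model’s, but the mechanics are exactly what the paragraph above describes: a weight matrix, a matrix-vector multiply, a bias, and a nonlinearity.

```python
import numpy as np

# A toy single dense layer, not any real model: weights are just
# floating-point numbers arranged in a matrix, and the layer's output
# is a matrix-vector multiplication plus a bias, passed through a
# simple nonlinearity.

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
W = rng.normal(size=(n_outputs, n_inputs))  # weights: connection strengths
b = np.zeros(n_outputs)                     # biases: per-neuron offsets

x = np.array([0.5, -1.0, 2.0, 0.1])  # an input vector of 4 features

z = W @ x + b            # the linear algebra: a matrix-vector multiply
a = np.maximum(z, 0.0)   # ReLU nonlinearity between layers

print("weights:\n", W)
print("layer output:", a)
```

Stack several such layers and let training adjust W and b, and you have a neural network.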

I highly recommend watching 3Blue1Brown’s videos to gain a better understanding, and it’s important that you do. 3Blue1Brown’s instructional videos are incredibly good. 

Start with this one.

And head to this one.

The weights are the parameter values, learned from data, that a neural network uses to make predictions or decisions and arrive at a solution. Each weight is an instruction telling the network how important certain pieces of information are, like how much to pay attention to a specific color or shape in a picture. These weights are numbers that get fine-tuned during training, thanks to all those decimal points, helping the network figure out patterns, such as recognizing a dog in a photo or translating a sentence. They are critical to the ‘thinking’ process of a neural network.

You can think of the weights in a neural network like the paths of least resistance that guide the network toward the best solution. Imagine water flowing down a hill, naturally finding the easiest routes to reach the bottom. In a neural network, the weights are adjusted during training on data sets to create the easiest paths for information to flow through, helping the network quickly and accurately solve problems, like recognizing patterns or making predictions, by emphasizing the most important connections and minimizing errors.
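
To see how that “easiest path” gets carved, here is a minimal gradient-descent sketch. The data, learning rate, and single-weight model are toy assumptions chosen purely to show training nudging a weight toward the value that minimizes error:

```python
import numpy as np

# Toy data and a single weight, purely illustrative: gradient descent
# nudges the weight downhill on the error surface, like water finding
# the easiest route to the bottom.

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # the "true" weight is 3.0

w = 0.0    # start with an uninformed weight
lr = 0.1   # learning rate: the size of each downhill step

for step in range(50):
    y_hat = w * x                    # forward pass: the network's prediction
    error = y_hat - y
    grad = 2.0 * np.mean(error * x)  # gradient of mean squared error w.r.t. w
    w -= lr * grad                   # nudge the weight downhill

print(f"learned weight: {w:.3f} (true value was 3.0)")
```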

If you’re an electronic musician, think of weights like the dials on your analog synth that let you tune into the right frequency or sound to, say, mimic a sound you want to recreate, or in fact create a new one. If you’re a sound guy, you can also think of it like adjusting the knobs on your mixer to balance different instruments.

Keep reading