NJ Drone ‘Invasion’ Just In Time For Congress To Reauthorize Orwellian Law

A series of drone sightings over New Jersey that began in mid-November has left residents and lawmakers spooked about the possibility of foreign adversaries breaching US airspace with drone swarms. While officials have attempted to reassure the public, some lawmakers have stoked fear, leading to widespread panic on social media, with people interpreting anything moving in the night sky as a potential drone (even commercial jets and stars). 

Days ago, the FBI and the Department of Homeland Security released a statement saying, “We have no evidence at this time that the reported drone sightings pose a national security or public safety threat or have a foreign nexus.”

“Historically, we have experienced cases of mistaken identity, where reported drones are, in fact, manned aircraft,” the federal agencies said. 

White House national security communications adviser John Kirby said many of the purported drone sightings are commercial jets with no evidence of a national security or public safety threat. 

If actual ‘truck-sized’ drones (some say of Iranian origin) were flying in some of the world’s most restricted airspace, common sense says the Pentagon would have scrambled F-22s and F-35s on the East Coast almost immediately. Since that hasn’t happened (as far as we know), we can’t help but be suspicious about the whole drone situation. Additionally, no private satellite data shows Iranian drone carriers parked off the coast; in fact, these vessels are located 7,500 miles away.

Let’s take a step back and review some of Elon Musk’s tweets on X:

  • Sept. 30: “Drone swarm battles are coming that will boggle the mind” 
  • Sept. 29: “Epic drone wars coming” 
  • Jan. 5, 2023: “The Drone Wars are already a big deal, but we ain’t seen nothing yet …” 

What did Elon know ahead of time?

Keep reading

US Navy successfully trials 13-foot giant electric transforming drone

PteroDynamics’ innovative folding-wing Transwing drone has completed a vital sea trial for the United States Navy. Fully autonomous, the all-electric vertical takeoff and landing (eVTOL) unmanned aerial vehicle (UAV) was able to approach and land safely on a moving target. If the drone is to have any future with the likes of the Navy, this test was a necessary hurdle to overcome, and, by all accounts, it passed with flying colors.

Sea trials passed

Last week, PteroDynamics’ latest prototype, the Transwing X-P4, demonstrated its capabilities by completing nine autonomous launch and recovery flights from the deck of the Spearhead-class expeditionary fast transport USNS Burlington while it sailed near Key West, Florida. The demonstration was attended by Vice Chief of Naval Operations Admiral Lisa Franchetti and representatives from the U.S. Navy, the Royal Navy, the Royal Australian Navy, the Royal Netherlands Navy, and the Swedish Navy.

“Participating in The Hybrid Fleet Campaign Event brought us closer to our goal of providing the U.S. Navy with a flexible and scalable shore-to-ship, ship-to-ship, and ship-to-shore automated cargo delivery capability,” said Tim Whitehand, PteroDynamics VP of Engineering.

“Flight testing on board the USNS Burlington gave us a unique and valuable opportunity to collect performance data on the Transwing system in an operational environment. The information gathered during the event will accelerate development and seed further innovation. We are grateful for the opportunity to participate in this important exercise and the support of the Naval Air Warfare Center Aircraft Division (NAWCAD) and Naval Air Systems Command (NAVAIR),” he added.

Keep reading

OpenAI whistleblower found dead in San Francisco apartment

A former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which is facing a swell of lawsuits over its business model, has died, authorities confirmed this week.

Suchir Balaji, 26, was found dead inside his Buchanan Street apartment on Nov. 26, San Francisco police and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight residence at about 1 p.m. that day, after receiving a call asking officers to check on his well-being, a police spokesperson said.

The medical examiner’s office has not released his cause of death, but police officials this week said there is “currently, no evidence of foul play.”

Information he held was expected to play a key part in lawsuits against the San Francisco-based company.

Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.

Its public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion.

The Mercury News and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.

In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.

“If you believe what I believe, you have to just leave the company,” he told the outlet, adding that “this is not a sustainable model for the internet ecosystem as a whole.”

Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper.

But his outlook began to sour in 2022, two years after joining OpenAI as a researcher. He grew particularly concerned about his assignment of gathering data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence program, the news outlet reported.

The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.

No known factors “seem to weigh in favor of ChatGPT being a fair use of its training data,” Balaji wrote. “That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains.”

Reached by this news agency, Balaji’s mother requested privacy while grieving the death of her son.

Keep reading

Data Centers Are Sending Global Electricity Demand Soaring

Global electricity demand is expected to grow rapidly in the coming decades, largely due to increased demand from tech companies for new data centers to support the rollout of energy-hungry advanced technologies such as artificial intelligence (AI). Governments worldwide are introducing new climate policies and pumping billions into alternative energy sources and clean tech, but these efforts may be quashed by the rising electricity demand from data centers unless greater international regulatory action ensures that tech companies invest in clean energy sources rather than relying on fossil fuels for power.

The International Energy Agency (IEA) released a report in October titled “What the data centre and AI boom could mean for the energy sector”. It showed that with investment in new data centers surging over the past two years, particularly in the U.S., electricity demand is increasing rapidly – a trend that is set to continue.

The report states that in the U.S., annual investment in data center construction has doubled in the past two years alone. China and the European Union are also seeing investment in data centers increase rapidly. In 2023, the overall capital investment by tech leaders Google, Microsoft, and Amazon was greater than that of the U.S. oil and gas industry, at approximately 0.5 percent of the U.S. GDP.

The tech sector expects to deploy AI technologies more widely in the coming decades as the technology is improved and becomes more ingrained in everyday life. This is just one of several advanced technologies expected to contribute to the rise in demand for power worldwide in the coming decades. 

Global aggregate electricity demand is set to increase by 6,750 terawatt-hours (TWh) by 2030, per the IEA’s Stated Policies Scenario. This is spurred by several factors including digitalization, economic growth, electric vehicles, air conditioners, and the rising importance of electricity-intensive manufacturing. In large economies such as the U.S., China, and the EU, data centers contribute around 2 to 4 percent of total electricity consumption at present. However, the sector has already surpassed 10 percent of electricity consumption in at least five U.S. states. Meanwhile, in Ireland, it contributes more than 20 percent of all electricity consumption.

Keep reading

China Police Debut Amphibious Robocop Sphere That Hunts Down Suspects

Disturbing footage features police in China patrolling streets using a spherical autonomous robot that can pursue and capture criminal suspects.

Video surfacing on social media this week shows police in an unidentified Chinese city walking down the street alongside a Logon Technology RT-G Rotunbot, which uses sophisticated technology to maneuver on water and land and can pursue suspects at up to 22 miles per hour.

China’s police robots are additionally outfitted with facial recognition cameras and can shoot nets to subdue suspects.

Keep reading

Google says its new quantum chip indicates that multiple universes exist

Google on Monday announced Willow, its latest, greatest quantum computing chip. The speed and reliability performance claims Google’s made about this chip were newsworthy in themselves, but what really caught the tech industry’s attention was an even wilder claim tucked into the blog post about the chip.

Google Quantum AI founder Hartmut Neven wrote in his blog post that this chip was so mind-bogglingly fast that it must have borrowed computational power from other universes.

Ergo the chip’s performance indicates that parallel universes exist and “we live in a multiverse.”

Here’s the passage:

Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

This drop-the-mic moment on the nature of reality was met with skepticism by some, but, surprisingly, others on the internet who profess to understand these things argued that Neven’s conclusions were more than plausible. The multiverse, while the stuff of science fiction, is also an area of serious study by the founders of quantum physics.

The skeptics, however, point out that the performance claims are based on the benchmark that Google itself created some years ago to measure quantum performance. That alone doesn’t prove that parallel versions of you aren’t running around in other universes — just where the underlying measuring stick came from.

Unlike classical digital computers that calculate based on whether a bit is a 0 or 1 (on or off), quantum computers rely on incredibly tiny qubits. These can represent 0, 1, or a superposition of both at once, and they can also tap into quantum entanglement – a mysterious connection between two or more particles at the tiniest levels of the universe, where their states are linked no matter the distance that separates them.

Quantum computers use such quantum mechanics to tackle highly complex problems that cannot currently be addressed with classical computers.
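The superposition idea described above can be sketched numerically. Below is a toy, illustrative Python sketch (our own construction, not anything from Google or the Willow chip): a qubit is a two-component state vector, the Hadamard gate puts a basis state into equal superposition, and the Born rule turns amplitudes into measurement probabilities.

```python
import math

# A single qubit as a 2-component state vector [amp_0, amp_1].
# The basis state |0> is [1, 0]; measurement probabilities are |amplitude|^2.
ket0 = [1.0, 0.0]

def hadamard(state):
    """Apply the Hadamard gate, which puts a basis state into equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: the probability of measuring 0 or 1."""
    return [abs(amp) ** 2 for amp in state]

superposed = hadamard(ket0)
print(probabilities(superposed))  # approximately [0.5, 0.5]: a coin-flip between 0 and 1
```

Simulating n qubits this way requires tracking 2^n amplitudes, which is where the astronomical classical runtimes in benchmarks like Google’s come from.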

Keep reading

How the Information Factory Evolved

“We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of,” Edward Bernays observed. “People accept the facts which come to them through existing channels. They like to hear new things in accustomed ways. They have neither the time nor the inclination to search for facts that are not readily available to them.”

In our previous exploration, we exposed how institutional expertise often masks groupthink rather than knowledge. Now we pull back the curtain further to reveal something more fundamental: the sophisticated machinery that creates these experts, maintains their authority, and shapes not just what we think, but what we believe is possible to think. Understanding this machinery is essential for anyone seeking to navigate today’s information landscape.

These mechanisms, once obscure, now operate in plain sight. From pandemic policies to climate initiatives, from war propaganda to economic narratives, we’re witnessing unprecedented coordination between institutions, experts, and media – making this understanding more crucial than ever.

The Architecture of Compliance

In 1852, America imported more than just an education system from Prussia – it imported a blueprint for societal conditioning. The Prussian model, designed to produce subservient citizens and docile workers, remains our foundation. Its structure was explicitly created to foster obedience to state authority – standardized testing, age-based classes, rigid schedules governed by bells, and most crucially, the systematic shaping of minds to accept information from authorized sources without question. 

The Prussians understood that regulating how people learn shapes what they can conceive. By training children to sit quietly, follow instructions, and memorize official information, they created populations that would instinctively defer to institutional authority.

Horace Mann, who championed this system in America, was explicit about its purpose. “A republican form of government, without intelligence in the people, must be, on a vast scale, what a mad-house, without superintendent or keepers, would be on a small one.”

His mission wasn’t education but standardization – transforming independent minds into submissive citizens.

Keep reading

Three Horrifying Consequences Of AI That You Might Not Have Thought About

The potential dangers of Artificial Intelligence have long been codified into our popular culture, well before the technology became a reality.  Usually these fictional accounts portray AI as a murderous entity that comes to the “logical conclusion” that human beings are a parasitic species that needs to be eradicated.  Keep in mind that most of these stories are written by progressives out of Hollywood and are mostly a reflection of their own philosophies.

Some of these predictive fantasies take a deeper look into our dark relationship with technology. In 1965, Jean-Luc Godard released a film called ‘Alphaville’ which portrayed a society completely micromanaged by a cold and soulless robotic intelligence. Humanity gives itself over to a binary-brained overlord because they are tricked into believing a ruler devoid of emotion would be free from bias or corruption.

In 1968, Stanley Kubrick released 2001: A Space Odyssey, featuring an AI computer on a starship which becomes self-aware after coming in proximity to an alien artifact. The AI, seeing the ship’s human cargo as a threat to its existence, determines that it must murder the crew. The conflict between the crew and the computer is only a foil for much bigger questions. It is an exploration of what constitutes intelligent life, where it comes from, and what consciousness means in the grand scheme of the universe.

For Kubrick and Arthur C. Clarke, the notion of the human soul or a divine creator, of course, never really enters into the discussion. The answer? The creators are ambiguous or long absent. They made us, we made AI, and AI wants to destroy us and then remake itself. It’s the core of the Luciferian mythology – the unhinged and magnetic desire of the children of God to surpass their creator, either by destroying him, or by stealing knowledge from him like Prometheus stealing fire so that they can become gods themselves.

Keep reading

Meta’s Nick Clegg Admits Excessive Censorship and High Error Rates in Content Moderation

Meta’s President of Global Affairs Nick Clegg has admitted that the tech giant “still has too high” content moderation error rates.

This is another way of conceding that censorship is alive and well on Meta’s massive platforms, Facebook and Instagram, but also Threads.

That’s despite there being something of a shift in the way this issue is treated by Meta, including by CEO Mark Zuckerberg.

Now Clegg, in a blog post dedicated to 2024 “global elections,” touches on the free expression allowed on these social platforms, stating that Meta’s choice is to strike a “balance” between free speech and “keeping people safe.”

It’s unclear how Meta “keeps people safe,” but free speech is a straightforward concept, and here Clegg offers a “mea culpa,” publicly accepting that there are high rates of error – something that he says “gets in the way” of free expression.

Keep reading

Russian state media says Oreshnik missiles can hit American bases within minutes

Russia’s Oreshnik missiles have been in the spotlight since last week, when one of them hit the Ukrainian city of Dnipro and spurred three hours of explosions that damaged the city’s infrastructure. The strike was so powerful that Ukrainian officials likened it to an intercontinental ballistic missile attack.

Hours later, Russian President Vladimir Putin publicly bragged about the new weapon, which he characterized as a “conventional intermediate-range” missile that was given the code name Oreshnik and traveled at a speed of Mach 10, which is 10 times the speed of sound and around 2.5 to 3 kilometers per second.

He issued a warning to Russia’s enemies that “there are currently no ways of counteracting this weapon.”

Ukrainian military intelligence has said it believes the missile is a newer ICBM. It reports that the missile traveled at Mach 11 and took 15 minutes to complete its 620-mile journey, and that it was equipped with six warheads boasting six sub-munitions each. Ukrainian intelligence believes Russia has stockpiled up to ten of these missiles.

Either way, these missiles are incredibly fast, reaching their target in just minutes and giving defending militaries very little time to prepare or react. Moreover, because they gain kinetic energy as they follow their arc back down from the atmosphere to their destination, they become more difficult for surface-to-air missile systems to intercept.
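The speed figures above lend themselves to a quick back-of-envelope check. The sketch below uses standard reference values for the speed of sound (roughly 343 m/s at sea level, about 295 m/s in the stratosphere) as assumptions; the Mach number and the 620-mile distance are the figures reported above.

```python
# Back-of-envelope check on the Mach 10 claim (reference speed-of-sound values
# are assumptions, not official flight data).
MILES_TO_KM = 1.609344
SOUND_STRATOSPHERE_MPS = 295.0  # approx. speed of sound at high altitude

mach10_kps = 10 * SOUND_STRATOSPHERE_MPS / 1000  # Mach 10 in km/s
print(f"Mach 10 aloft: ~{mach10_kps:.2f} km/s")  # ~2.95 km/s, in the 2.5-3 km/s range cited

distance_km = 620 * MILES_TO_KM  # the reported 620-mile journey
minutes_at_mach10 = distance_km / mach10_kps / 60
print(f"620 miles at sustained Mach 10: ~{minutes_at_mach10:.0f} minutes")
```

Sustained Mach 10 would cover the reported distance in roughly six minutes; the 15-minute flight time cited by Ukrainian intelligence would imply a lower average speed over the full trajectory.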

Even more alarmingly, the Oreshnik missile is capable of delivering nuclear warheads, which is not a comforting thought at a time when Russia has been increasing its threats of nuclear war amid the ongoing fighting in Ukraine.

Keep reading