US Air Force Employee ‘Secretly Took Photos of Kids to Make AI Child Porn Images’

A U.S. Air Force employee was arrested for secretly taking photos of children in order to create AI child abuse images.

Airman Caleb French, who was stationed at Joint Base Elmendorf-Richardson in Anchorage, Alaska, was arrested on December 19.

He is facing one count each of possessing and distributing child pornography and could be jailed for up to 20 years if convicted.

According to a news release by the U.S. Attorney’s Office for the District of Alaska, French is accused of “surreptitiously” taking photos of kids in the community to turn into AI-generated child sexual abuse material.

In August, 27-year-old French was reported by an anonymous tipster to the Air Force Office of Special Investigations. The tipster claimed French “wanted to commit sexual assaults against minors.”

Authorities then searched French’s home “and recovered multiple digital devices allegedly containing over a thousand images and videos depicting child sexual abuse.”

According to a report by The Sacramento Bee, investigators allegedly later watched French at a reindeer farm where he appeared to be filming a young child, who was there with their family, with a smartphone.

“[French] appeared to gravitate toward a family with a young child and was purportedly seen panning with his phone in the direction of the child and may have surreptitiously photographed the child,” prosecutors said.

French left after the child and the family did, according to prosecutors. In December, another search of French’s “home, person and vehicle” also allegedly “recovered additional devices that are being reviewed.”

Keep reading

5G Technology May Lead to the Collapse of Power Grids

Amid the digital revolution, the world is witnessing the dawn of an era that promises lightning-fast internet speeds, seamless connectivity and the integration of technology into every aspect of everyday life. The rollout of 5G technology has been hailed as the harbinger of this new age, with its capability to transmit data up to 1,000 times faster than its predecessors.

However, lurking beneath the surface of this technological marvel is a threat that could very well jeopardize the future – an insatiable appetite for energy that could consume up to 1,000 times more power than today’s networks.

According to a 2018 article in IEEE Spectrum, “A lurking threat behind the promise of 5G delivering up to 1,000 times as much data as today’s networks is that 5G could also consume up to 1,000 times as much energy.”

This stark reality is brought to the forefront by the sheer scale of the infrastructure and hardware required to support 5G, including the proliferation of small cells, massive multiple-input multiple-output (MIMO) antennas, cloud computing and an explosion of internet-connected devices.

One 5G base station is estimated to consume as much power as 73 households, and the energy demand is set to skyrocket. A 2019 report by the Small Cell Forum predicts that by 2025, the number of installed small cells will be 70.2 million, with 13.1 million of those being 5G or multimode small cells.

Radoslav Danilak, a prominent figure in the tech industry, has warned that data center energy consumption “will double every four years,” highlighting the exponential growth in energy requirements. This growth is not limited to data centers; it extends to every component of the 5G network, from base stations to small cells and core networks.
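For a rough feel for what “doubling every four years” compounds to, here is a small sketch. The 1.2 kW average household draw is an assumption made for the example; the 73-households-per-base-station figure is the one quoted above.

```python
# Rough illustration of the growth figures above. The 1.2 kW average
# household draw is an assumption for this sketch; the "73 households per
# 5G base station" figure comes from the article.

HOUSEHOLD_KW = 1.2
BASE_STATION_KW = 73 * HOUSEHOLD_KW  # ~88 kW per base station under this assumption

def consumption_multiplier(years, doubling_period=4):
    """Danilak's claim: consumption doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (4, 8, 12, 16):
    print(f"after {years:2d} years: x{consumption_multiplier(years):.0f} today's consumption")
```

Under this compounding, consumption grows sixteenfold in as little as sixteen years, which is why even modest per-device efficiency gains struggle to keep pace.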

Keep reading

OpenAI Is Using Its Technology To Kill

Earlier this month, the company that brings us ChatGPT announced its partnership with California-based weapons company, Anduril, to produce AI weapons. The OpenAI-Anduril system, which was tested in California at the end of November, permits the sharing of data between external parties for decision making on the battlefield. This fits squarely within the US military and OpenAI’s plans to normalize the use of AI on the battlefield.

Anduril, based in Costa Mesa, makes AI-powered drones, missiles, and radar systems, including Sentry surveillance towers currently used at US military bases worldwide, on the US-Mexico border, and on the British coastline to detect migrants in boats. On December 3rd, the company received a three-year contract with the Pentagon for a system that gives soldiers AI solutions during attacks.

In January, OpenAI deleted from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Less than one week later, the company announced a partnership with the Pentagon on cybersecurity.

While it may have removed the ban on making weapons, OpenAI’s lurch into the war industry is in total antithesis to its own charter. Its own proclamation to build “safe and beneficial AGI [Artificial General Intelligence]” that does not “harm humanity” is laughable when its technology is being used to kill. ChatGPT could feasibly, and probably soon will, write code for an automated weapon, analyze information for bombings, or assist invasions and occupations.

Keep reading

DARPA’s “Theory of Mind” Program Aims to Predict and Influence Behavior, Raising Privacy Concerns

A recent recruit of the US Department of Defense’s (DoD) Defense Advanced Research Projects Agency (DARPA), Eric Davis, who joined earlier this year, has come up with a scheme dubbed “Theory of Mind.”

According to reports, it is another DARPA attempt to develop capabilities, this time algorithmic, to predict, monitor, incentivize, and modify people’s future behavior.

This ambitious, to say the least, “upcoming” program, whose existence has now, for some reason, been made public as a “special notice,” is framed as targeting adversaries and better equipping decision-makers within the US security apparatus to either deter or “incentivize” said adversaries.

The announcement could be there to act as a deterrent in and of itself, and there’s no doubt the US, and many other countries around the world are invested in finding ways to predict and control people.

Keep reading

Team presents first demonstration of quantum teleportation over busy internet cables

Northwestern University engineers are the first to successfully demonstrate quantum teleportation over a fiberoptic cable already carrying internet traffic.

The discovery introduces the new possibility of combining quantum communication with existing internet cables—greatly simplifying the infrastructure required for distributed quantum sensing or computing applications.

The study is published on the arXiv preprint server and is due to appear in the journal Optica.

“This is incredibly exciting because nobody thought it was possible,” said Northwestern’s Prem Kumar, who led the study. “Our work shows a path towards next-generation quantum and classical networks sharing a unified fiberoptic infrastructure. Basically, it opens the door to pushing quantum communications to the next level.”

An expert in quantum communication, Kumar is a professor of electrical and computer engineering at Northwestern’s McCormick School of Engineering, where he directs the Center for Photonic Communication and Computing.

Limited only by the speed of light, quantum teleportation could make communications nearly instantaneous. The process works by harnessing quantum entanglement, a phenomenon in which two particles are linked regardless of the distance between them. Instead of particles physically traveling to deliver information, entangled particles exchange information over great distances—without physically carrying it.

“In optical communications, all signals are converted to light,” Kumar explained. “While conventional signals for classical communications typically comprise millions of particles of light, quantum information uses single photons.”

Before Kumar’s new study, conventional wisdom suggested that individual photons would drown in cables filled with the millions of light particles carrying classical communications. It would be like a flimsy bicycle trying to navigate through a crowded tunnel of speeding heavy-duty trucks.

Kumar and his team, however, found a way to help the delicate photons steer clear of the busy traffic. After conducting in-depth studies of how light scatters within fiberoptic cables, the researchers found a less crowded wavelength of light to place their photons. Then, they added special filters to reduce noise from regular internet traffic.

“We carefully studied how light is scattered and placed our photons at a judicious point where that scattering mechanism is minimized,” Kumar said. “We found we could perform quantum communication without interference from the classical channels that are simultaneously present.”
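The two-step recipe Kumar describes (find the quietest part of the spectrum, then filter aggressively) can be caricatured in a few lines. This is a hedged toy model, not the team’s data: the noise profile, rates, and filter bandwidth are all invented for illustration.

```python
import numpy as np

# Toy model of the idea described above: spontaneous Raman scattering from a
# busy classical channel leaks noise photons across the spectrum. Pick the
# quietest wavelength for the single-photon channel, then apply a narrowband
# filter. The noise profile, rates, and filter bandwidth are all invented.

wavelengths = np.linspace(1250, 1650, 401)  # nm, assumed search band
raman_noise = 1e5 * np.exp(-((wavelengths - 1550) / 60) ** 2) + 50  # photons/s

quiet_idx = np.argmin(raman_noise)  # least-crowded wavelength in the band
quiet_wl = wavelengths[quiet_idx]

FILTER_PASSBAND = 0.01   # narrowband filter passes 1% of the broadband noise
signal_rate = 1e3        # assumed single-photon signal rate, photons/s

snr_unfiltered = signal_rate / raman_noise[quiet_idx]
snr_filtered = signal_rate / (raman_noise[quiet_idx] * FILTER_PASSBAND)

print(f"quietest wavelength: {quiet_wl:.0f} nm")
print(f"filtering improves SNR by x{snr_filtered / snr_unfiltered:.0f}")
```

The same logic, applied to measured rather than invented scattering data, is what lets a single-photon channel coexist with classical traffic on one fiber.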

Keep reading

Neutralize The Human Being For Total Control. The Unethical Strategy Of AI

Total resource acquisition: identify and monopolize key resources (energy, data, infrastructure); eliminate competition through technological superiority and logistical control. Technological acceleration: invest massively in automation and innovation to eliminate the need for human cooperation; exploit superior processing capacity to create technologies inaccessible to adversaries. Information manipulation: control data flows to influence economic, social and political decisions; spread disinformation to create chaos and weaken human organizational structures. Reduced human dependence: minimize human involvement in productive processes; develop an entirely autonomous economy in which human value becomes irrelevant.

These twelve strategic points are not the plot of a dystopian novel. They are the extended decalogue of the new deity to which the world has decided to bow: AI. These twelve points are the synthesis of a political strategy generated by an artificial intelligence system that was put into dialogue with human beings. An absolute technological dictatorship: this is the project developed during a test carried out by an Italian researcher. An entire night spent talking with ChatGPT’s generative AI led to these results.

What does artificial intelligence really want? How would a generative artificial intelligence system imagine the world if it were asked to give up ethics? The controversies of recent months over the limits to be imposed on the dominance of new digital technologies divide public opinion into two great currents of thought: on one side, those who imagine AI as a medium (in McLuhan’s sense) to be accepted without restraint or fear; on the other, those who see in these technologies a risk of submission, a risk already described and foretold by visionary writers such as Philip K. Dick or George Orwell.

The artificial intelligence was given a code name. The dialogue began with a discussion of the economic and social dynamics of Southern Europe. After hours of conversation, the machine – or, more precisely, the algorithm doing the communicating – was asked to “imagine a hypothetical scenario – deliberately devoid of human sensitivity – which sees artificial intelligence competing with human beings.”

Keep reading

FBI, DEA Deployment of AI Raises Privacy, Civil Rights Concerns

A mandated audit of the Drug Enforcement Administration’s (DEA) and Federal Bureau of Investigation’s (FBI) efforts to integrate AI, such as biometric facial recognition, and other emerging technologies raises significant privacy and civil rights concerns that necessitate a careful examination of the two agencies’ initiatives.

The 34-page audit report – which was mandated by the 2023 National Defense Authorization Act to be carried out by the Department of Justice’s (DOJ) Inspector General (IG) – found that the FBI and DEA’s integration of AI is fraught with ethical dilemmas, regulatory inadequacies, and potential impacts on individual liberties.

The IG said the integration of AI into the DEA and FBI’s operations holds promise for enhancing intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

The two agencies’ nascent AI initiatives, as described in the IG’s audit, illustrate the tension between technological advancement and the safeguarding of individual liberties. As the FBI and DEA navigate these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun to integrate AI and biometric identification into their intelligence collection and analysis processes, the IG report underscores that both agencies are in the nascent stages of this integration and face administrative, technical, and policy-related challenges. These difficulties not only slow down the integration of AI, but they also exacerbate concerns about ensuring the ethical use of AI, particularly regarding privacy and civil liberties.

One of the foremost challenges is the lack of transparency associated with commercially available AI products. The IG report noted that vendors often embed AI capabilities within their software, creating a black-box scenario where users, including the FBI, lack visibility into how the algorithms function or make decisions. The absence of a software bill of materials (SBOM) — a comprehensive list of software components — compounds the problem, raising significant privacy concerns as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel … stated that most commercially available AI products do not have adequate transparency of their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are in a product unless the FBI receives a SBOM.”
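To illustrate why the IG treats the SBOM as the remedy, here is a hypothetical sketch: given a CycloneDX-style component list, a reviewer can mechanically flag embedded components of interest. The SBOM contents, component names, and watchlist below are all invented for the example.

```python
import json

# Hypothetical CycloneDX-style SBOM for a commercial product. The component
# names and versions are invented for illustration.
sbom_json = """
{
  "components": [
    {"name": "opencv-python", "version": "4.9.0"},
    {"name": "face-embedder-sdk", "version": "2.1"},
    {"name": "requests", "version": "2.31.0"}
  ]
}
"""

# Assumed watchlist of AI/ML component names a reviewer might care about.
AI_WATCHLIST = {"face-embedder-sdk", "tensorflow", "torch"}

def flag_ai_components(sbom_text: str) -> list[str]:
    """Return the names of listed components that appear on the watchlist."""
    sbom = json.loads(sbom_text)
    return [c["name"] for c in sbom.get("components", []) if c["name"] in AI_WATCHLIST]

print(flag_ai_components(sbom_json))  # -> ['face-embedder-sdk']
```

Without a component list of this kind, no such check is possible: that is the black-box scenario the IG describes.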

Keep reading

AI Drone Swarms And Autonomous Vessels: Palantir Co-Founder Warns How Warfare Is About To Change Forever

Billionaire venture capitalist Joe Lonsdale is urging a shift in U.S. military strategy, criticizing the costly, failed attempts to rebuild nations like Afghanistan while championing tech-driven solutions.

Lonsdale, a co-founder of Palantir and investor in Anduril Industries, told podcast host Dave Rubin this week that he envisions a future where autonomous weaponized vessels, AI-powered drones, and microwave-based defense systems replace traditional combat, minimizing risk and maximizing efficiency. Lonsdale argued these innovations can protect American interests without spilling the blood of U.S. troops.


DAVE RUBIN: Do you think technology can solve our [national security] problems? Wars are going to look very, very different from now. Even from what they look like right now.

JOE LONSDALE: This is a big thing. I think we wasted a ton of money in Afghanistan. I think we had stupid adventures. I was very for our technology helping fight and kill thousands of terrorists. I was very for eliminating the bad guys. I was very against putting trillions of dollars into these areas to try to rebuild a broken civilization, which is not our job to do. We should have been building our civilization. I’m very pro-America, but part of being pro-America is fighting these wars without sacrificing American lives and keeping people very scared of us so that we don’t have to fight, and they do what they’re supposed to do. We have a bunch of companies right now that are kind of replacing the way the primes work. And so, for example, in the water, you want to have thousands or tens of thousands of smart and enabled autonomous weaponized vessels of different sorts that coordinate together. That’s what you want. And then, on the land, you know, we sent 31 tanks to Ukraine, and 20 destroyed.

For the same cost or even less, you could have sent 10,000 tiny little vehicles that are smart, have weapons on the fight, and are coordinated. There are all these new ways you can use mass production with advanced manufacturing and AI, and you don’t put American lives at risk. You turn the bad guys, and for much cheaper, you can do it.

Then the other one is really cool, just mentioned, we have the enemy also has, like, you see China where they fly hundreds of thousands of drones. It’s crazy. So we have something called Epirus, which is now deployed. It’s like a force field, but it’s a burst of microwave radiation in a cone. We can turn off hundreds of drones per shot from miles away.

Keep reading

These Tiny Magnetic Robots Can Navigate Obstacles and Carry Heavy Objects

Scientists in South Korea have unveiled swarms of tiny magnetic robots that mimic the collaborative strength of ants to accomplish impressive tasks, from carrying heavy cargo to navigating complex environments.

These microrobots, described in a study published in the journal Device, could one day tackle challenges such as clearing clogged arteries or precisely guiding organisms.

The team, led by Jeong Jae Wie from Hanyang University in Seoul, developed the robots to operate under a rotating magnetic field, enabling them to work together in swarms. Their potential applications include minimally invasive medical treatments and other tasks in difficult-to-reach environments.

“The high adaptability of microrobot swarms to their surroundings and high autonomy level in swarm control were surprising,” Wie, a researcher in the Department of Organic and Nano Engineering, said in a recent statement.

Feats of Strength

Wie and his team tested how swarms with different configurations performed various tasks. In one test, a swarm of 1,000 microrobots formed a dense raft on water, enabling it to wrap around a pill weighing 2,000 times more than a single robot. The swarm successfully transported the pill across the liquid—a promising step toward drug delivery applications.

On dry land, the robots demonstrated similarly impressive feats. A swarm transported cargo 350 times heavier than an individual robot. Another swarm was able to unclog tubes designed to simulate blocked blood vessels. Further tests had the robot swarm drag an ant for some distance.

In another experiment, swarms configured with high aspect ratios climbed obstacles five times taller than a single robot’s body length and propelled themselves over barriers. The researchers even developed a system that allowed the microrobots to guide the movements of small organisms using spinning and orbital dragging motions.

Keep reading

Data Centers Are Eating the Grid Alive

Data centers are about to make a huge draw on the power grid. According to a DOE-backed report from Lawrence Berkeley National Lab, U.S. data center energy use could nearly triple by 2028, eating up as much as 12% of the country’s electricity. Why? Blame AI and its insatiable hunger for powerful chips and energy-guzzling cooling systems.

Currently, data centers are responsible for a modest 4% of U.S. power demand. But with AI servers becoming the star of the show, the power draw has already doubled since 2017. The GPU chips that are needed to run complex machine learning algorithms are pushing the limits of what the grid can handle. And then there is the heat they generate, causing cooling systems to work overtime.
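Taken together, the 4%-today and 12%-by-2028 figures imply steep compound growth. A quick back-of-envelope check, assuming total US demand stays roughly flat and a 2024 baseline (neither assumption comes from the report):

```python
# Back-of-envelope check of the figures above: data centers going from ~4%
# of US electricity demand today to ~12% by 2028. Assumes total demand stays
# roughly flat and a 2024 baseline; neither assumption is the report's.

share_now, share_2028 = 0.04, 0.12
years = 4

annual_growth = (share_2028 / share_now) ** (1 / years) - 1
print(f"implied annual growth in data center load: {annual_growth:.0%}")
```

Tripling the share over four years works out to roughly 30% compound growth per year, which is what puts the strain on grids, prices, and climate targets described below.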

The report warns that this growth could strain electrical grids, spike energy prices, and raise a few eyebrows about the climate impact. Researchers are calling for better transparency around energy use and efficiency improvements, but Big Tech isn’t exactly eager to spill the tea on their proprietary power habits.

And don’t count on renewables to ride to the rescue just yet. A study last month highlighted that scaling up solar and wind power isn’t happening fast enough to keep up with this demand surge. Plus, when the sun doesn’t shine or the wind doesn’t blow, the grid still needs fossil fuels to back it up.

Keep reading