Mitt Romney’s AI Bill Seeks to Ban Anonymous Cloud Access, Raising Privacy Concerns

A new Senate bill, the Preserving American Dominance in AI Act of 2024 (S.5616), has reignited debate over its provisions, particularly its push to impose “know-your-customer” (KYC) rules on cloud service providers and data centers. Critics warn that these measures could lead to sweeping surveillance practices and unprecedented invasions of privacy under the guise of regulating artificial intelligence.

We obtained a copy of the bill for you here.

KYC regulations require businesses to verify the identities of their users, and when applied to digital platforms, they could significantly impact privacy by linking individuals’ online activities to their real-world identities, effectively eliminating anonymity and enabling intrusive surveillance.

Keep reading

5G ushers in aggressive growth in energy consumption

5G’s new high-band spectrum sits in the millimetre wave bands – 24-86 GHz. This slice of radio spectrum can carry large amounts of data, but not nearly as far as the current mobile network, which operates at frequencies from 700 megahertz to 6 gigahertz. What this means is that for 5G, a lot more equipment needs to be installed and potentially more data needs to be processed.
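The scale of that trade-off can be sketched with the free-space (Friis) path-loss formula. This is a deliberately simplified model – it ignores walls, foliage and rain, which make things even worse for millimetre waves – and the 500 m link distance and 28 GHz band are illustrative choices, not figures from the article:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Same 500 m link, current mid-band vs a 5G millimetre wave band.
low = fspl_db(500, 700e6)   # ~83 dB at 700 MHz
high = fspl_db(500, 28e9)   # ~115 dB at 28 GHz
print(f"700 MHz: {low:.1f} dB, 28 GHz: {high:.1f} dB")
# The ~32 dB difference is a factor of roughly 1,600 in received
# power, which is why millimetre wave coverage needs far denser
# equipment deployments.
```

Even in this best-case model, moving from 700 MHz to 28 GHz costs about 32 dB over the same distance, so cells must shrink and multiply.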

Additionally, millimetre wave signals are easily attenuated and blocked. Radio at above 20 GHz doesn’t go through walls well. It doesn’t go through leaves well. It doesn’t play nicely with rain. What does this mean? Many, many more antennas.

Measuring networking power consumption means accounting for the energy used by both wired and wireless networks.

“A general concern is that higher data rates can only be achieved by consuming more energy; if the EE [energy efficiency] is constant, then 100× higher data rate in 5G is associated with a 100× higher energy consumption.” This is where headlines like “Tsunami of data could consume 1/5 of global electricity by 2025” come from.

In addition to carrying data, 5G networks can also move energy. One of the novel technologies being considered is Radio Frequency (RF) harvesting: converting the energy in transmitted radio waves into power for user devices or even wireless infrastructure (microcells, antenna arrays, etc.). Since RF signals can carry both energy and information, RF energy harvesting and information reception can theoretically be performed on the same RF input signal. This scheme is referred to as simultaneous wireless information and power transfer (SWIPT). The hardware to support this doesn’t exist yet, but it has promise.
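As a rough illustration of how SWIPT is usually analysed, here is a sketch of the power-splitting receiver model from the research literature. The conversion efficiency, received power and noise figures below are hypothetical, and real hardware – which, as noted, doesn’t exist yet – would do worse:

```python
import math

def swipt_power_splitting(p_rx_w: float, rho: float,
                          noise_w: float, bandwidth_hz: float,
                          eta: float = 0.6):
    """Idealised power-splitting SWIPT receiver.

    A fraction rho of the received RF power feeds an energy harvester
    (with conversion efficiency eta); the remaining 1 - rho feeds the
    information decoder, modelled here with Shannon AWGN capacity.
    """
    harvested_w = eta * rho * p_rx_w
    snr = (1 - rho) * p_rx_w / noise_w
    rate_bps = bandwidth_hz * math.log2(1 + snr)
    return harvested_w, rate_bps

# Hypothetical link: 1 microwatt received, 1 MHz of bandwidth.
for rho in (0.2, 0.5, 0.8):
    energy, rate = swipt_power_splitting(1e-6, rho, 1e-12, 1e6)
    print(f"rho={rho}: {energy*1e9:.0f} nW harvested, "
          f"{rate/1e6:.1f} Mbit/s")
```

The core design tension shows up immediately: diverting more power to harvesting (larger rho) always costs information rate.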

Our biggest area of concern, however, is in data centres. Radoslav Danilak asserts that data centers will consume exponentially larger amounts of electricity, arguing, “consumption will double every four years.”
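Danilak’s claim is easy to unpack numerically: doubling every four years corresponds to a compound annual growth rate of 2^(1/4) − 1, or roughly 19% per year. A quick projection (the 100-unit baseline is arbitrary; only the doubling period comes from the quote):

```python
# Doubling every four years implies (1 + g)**4 == 2.
annual_growth = 2 ** (1 / 4) - 1
print(f"implied annual growth: {annual_growth:.1%}")  # ~18.9%

consumption = 100.0  # arbitrary baseline, e.g. TWh/year
for year in range(1, 13):
    consumption *= 1 + annual_growth
    if year % 4 == 0:
        print(f"year {year:2d}: {consumption:.0f}")
# Doubles on schedule: 200 at year 4, 400 at year 8, 800 at year 12.
```

At that rate, consumption grows eightfold in twelve years, which is what makes the data-centre trajectory the biggest concern here.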

Keep reading

US Report Reveals Push to Weaponize AI for Censorship

For a while now, emerging AI has been treated by the Biden-Harris administration – as well as by the EU, the UK, Canada, the UN, and others – as a scourge that powers dangerous forms of “disinformation” and should be dealt with accordingly.

According to those governments/entities, the only “positive use” for AI as far as social media and online discourse go, would be to power more effective censorship (“moderation”).

A new report from the US House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government argues that this push to use the technology for censorship explains the often disproportionate alarm over its role in “disinformation.”

We obtained a copy of the report for you here.

The interim report’s name spells out its authors’ views on this quite clearly: the document is called, “Censorship’s Next Frontier: The Federal Government’s Attempt to Control Artificial Intelligence to Suppress Free Speech.”

Keep reading

Drone Detection System Used in Ukraine War May Be Deployed to East Coast

The state-of-the-art Robin Radar System, currently being used in the Ukraine war to detect drones, may soon be deployed to investigate the perplexing mystery drone sightings throughout the East Coast.

ABC News reports that, to help crack the enigma of the mystery drones spotted all around New England, officials have asked the U.S. Department of Homeland Security (DHS) to deploy the cutting-edge Robin Radar System.

Developed by Robin Radar USA, the drone detection radar system has its roots in technology designed to detect flocks of birds near airports to prevent collisions with aircraft. Since 2014, the company has focused on drone-detecting technology that not only locates drones but also classifies small moving objects, distinguishing between rotary-wing and fixed-wing drones and even filtering out birds and large insects.

Kris Brost, general director of Robin Radar USA, explained that the radar works by bouncing radio waves off objects, tracking their flight patterns in real time, and beaming the data back to a laptop computer with a 360-degree, 3D view of the airspace. The system has a range of about 3.1 miles and can detect objects flying at altitudes of up to two miles.

One of the key advantages of the Robin Radar System is its lightweight and mobile nature, allowing it to be installed on police cruisers or other vehicles for use on the move. While the system cannot determine if a drone has been weaponized, it can help investigators track the drone’s movements and potentially obtain its remote identification, even if operators attempt to modify their drones to avoid detection.

The Robin Radar System has already proven its worth on the battlefield in Ukraine, aiding Ukrainian military forces in locating incoming Russian weaponized drones. Brost noted that the war in Ukraine was a turning point for the company, citing the urgent operational need and the valuable data collected in that environment.

Keep reading

Kratos Conducts First Test Of Mysterious Stealth Drone

Aviation Week revealed that US-based Kratos Defense successfully tested its Thanatos stealth Uncrewed Combat Air Vehicle (UCAV) prototype within the last few months.

Steve Fendley, president of Kratos’ Unmanned Systems Division, told the aviation news website that the company’s Thanatos stealth UCAV was successfully tested in recent months. However, he declined to provide specifics about the test or its location.

“Kratos unveiled the Thanatos design in November 2023 when company CEO Eric Demarco said in an earnings report that the company hoped to have a contract within a year. The design shows what appears to be a single-engine UCAV with two inlets and a single exhaust,” Aviation Week said, adding, “The aircraft does not have a vertical tail and horizontal stabilizers, showing the company’s stealth approach.”

On Dec. 7, during an interview at the Reagan National Defense Forum at the Ronald Reagan Presidential Library in Simi Valley, California, Fendley said: “The air vehicle for Thanatos is now effectively proven. We’re not trying to figure out does the airplane fly, we’re now trying to figure out does the integrated system tick the mission box.”

Keep reading

Privacy in Pieces: States Scramble to Protect Data as Congress Dithers

As Congress struggles to catch up to the European Union’s comprehensive data privacy regulations, some US states have begun to forge their own robust legislation to increase user protection. But this system only protects the data of some Americans, leaving more than half the country without guaranteed data protection or privacy rights.

And it may take years before a national solution is created, if one ever is.

The EU took its first step towards providing sweeping privacy protection years ago, with the creation of the region’s General Data Protection Regulation (GDPR).

The GDPR, which took effect in 2018 and gives individuals ownership over their personal information and the right to control who can use it, is often marked as the first major, multinational step towards comprehensive data protection and privacy.

Traditionally, the EU’s approach to data privacy stems from a human rights standpoint and has its roots in World War II, when the Nazi party collected personal data to commit numerous atrocities and, later, when the East German secret police, the Stasi, carried out invasive state surveillance.

After the war ended, the right to privacy was enshrined in the European Convention on Human Rights and later in the EU Charter of Fundamental Rights, becoming the ideological foundation on which data privacy laws have been built in the EU today.

Across the Atlantic, the US Constitution does not explicitly provide a right to privacy.

Rather than enacting a comprehensive federal law, the US federal government has taken a reactive approach, passing legislation only after issues arise in a few specific business sectors, which has resulted in a series of data protection laws addressing specific types of data. For example, the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA) have protected medical and financial data respectively since the 1990s.

“The US is very much an innovation, capital-first society,” said Jodi Daniels, founder and CEO of privacy consultancy firm Red Clover Advisors. “And they do want to protect the people, but it has to all get balanced.”

But in recent years, some lawmakers have begun to push back against this system by introducing comprehensive data privacy bills, like the bipartisan American Privacy Rights Act (APRA).

Introduced in April by Sen. Maria Cantwell (D-WA) and Rep. Cathy McMorris Rodgers (R-WA), APRA is like GDPR in that it is not limited to specific business sectors and aims to minimize the amount and types of data companies can collect, give consumers control over their information, and allow them to opt out of targeted advertising.

While the legislation didn’t get very far, stalling in the House Committee on Energy and Commerce, it’s the furthest any comprehensive privacy bill has gone in Congress yet. To become law, however, it would have to be reintroduced next year when Republicans control both chambers. 

Some lawmakers, like Sen. Ted Cruz (R-TX), contend that APRA is more concerned with “controlling the internet” than with balancing innovation and privacy protection, and argue that the private right of action in the act, which allows individuals to pursue legal action if their privacy is violated, would give overwhelming power to trial lawyers.

Keep reading

Autonomous AI Poses Existential Threat – And It’s Almost Here: Former Google CEO

Former Google CEO Eric Schmidt said that autonomous artificial intelligence (AI) is coming—and that it could pose an existential threat to humanity.

“We’re soon going to be able to have computers running on their own, deciding what they want to do,” Schmidt, who has long raised alarm about both the dangers and the benefits AI poses to humanity, said during a Dec. 15 appearance on ABC’s “This Week.”

“That’s a dangerous point: When the system can self improve, we need to seriously think about unplugging it,” Schmidt said.

Schmidt is far from the first tech leader to raise these concerns.

The rise of consumer AI products like ChatGPT over the past two years has been unprecedented, with major improvements to the underlying language models. Other AI models have become increasingly adept at creating visual art, photographs, and full-length videos that are, in many cases, nearly indistinguishable from reality.

For some, the technology calls to mind the “Terminator” series, which centers on a dystopian future where AI takes over the planet, leading to apocalyptic results.

For all the fears that ChatGPT and similar platforms have raised, the consumer AI services available today still fall into a category experts would consider “dumb AI.” These systems are trained on massive sets of data but lack consciousness, sentience, and the ability to behave autonomously.

Schmidt and other experts are not particularly worried about these systems.

Rather, they’re concerned about more advanced AI, known in the tech world as “artificial general intelligence” (AGI): far more complex systems that could have sentience and, by extension, could develop conscious motives independent from, and potentially dangerous to, human interests.

Schmidt said no such systems exist today, but that we’re rapidly moving toward a new, in-between type of AI: one lacking the sentience that would define an AGI, yet still able to act autonomously in fields like research and weaponry.

“I’ve done this for 50 years. I’ve never seen innovation at this scale,” Schmidt said of the rapid developments in AI complexity.

Schmidt said that more developed AI would have many benefits to humanity—and could have just as many “bad things like weapons and cyber attacks.”

Keep reading

Ghosted by ChatGPT: How I was First Defamed and then Deleted by AI

It is not every day that you achieve the status of “he-who-must-not-be-named.” But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT, according to the New York Times, the Wall Street Journal, and other publications.

For more than a year, people who tried to research my name online using ChatGPT were met with an immediate error warning.

It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.

Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.

The common thread appears to be the false stories ChatGPT generated about us all in the past. The company appears to have corrected the problem not by erasing the error but by erasing the individuals in question.

Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.

My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT. As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).

In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”

Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.

And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.

As with Voldemort, even death is no guarantee of closure. Professor Mayer was a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, who passed away last year. And ChatGPT reportedly will still not utter his name.

Before his death, his name was used by a Chechen rebel on a terror watch list, and the resulting snowballing association left the professor facing travel and communication restrictions.

Hood, the Australian mayor, was so frustrated with a false AI-generated narrative that he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.

Keep reading

Federal Officials Will Deploy High-Tech System To New York After Drones Shut Down Airport, Governor Says

New York Gov. Kathy Hochul announced that the federal government will send a “state-of-the-art drone detection system” to her state after a number of drone sightings across New York and New Jersey in recent days.

While she did not elaborate on the system that will be deployed, it “will support state and federal law enforcement in their investigations,” she said in a statement on the morning of Dec. 15.

“I am grateful for the support, but we need more. Congress must pass a law that will give us the power to deal directly with the drones,” the governor wrote on social media platform X around the same time. She urged Congress to pass the Counter-UAS Authority Security, Safety, and Reauthorization Act that will give states “the authority and resources required to respond to circumstances like we face today.”

It’s not clear whether the federal government sent a similar system to New Jersey, where most of the drone sightings have occurred, or to other states. Over the weekend, swarms of drones were spotted in other states along the East Coast, including Maryland.

Keep reading

Suspicious OpenAI Whistleblower Death Ruled Suicide

The November death of former OpenAI researcher-turned-whistleblower Suchir Balaji, 26, was ruled a suicide, the San Jose Mercury News reports.

According to the medical examiner, there was no foul play in Balaji’s Nov. 26 death in his San Francisco apartment.

Balaji had publicly accused OpenAI of violating US copyright law with ChatGPT. According to the NY Times:

He came to the conclusion that OpenAI’s use of copyrighted data violated the law and that technologies like ChatGPT were damaging the internet.

In August, he left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.

“If you believe what I believe, you have to just leave the company,” he said during a recent series of interviews with The New York Times.

The Times named Balaji as a person with “unique and relevant documents” that the outlet would use in its ongoing litigation against OpenAI – a suit which claims that the company and its partner Microsoft are using the work of reporters and editors without permission.

In an October post to X, Balaji wrote: “I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them. I initially didn’t know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they’re trained on. I’ve written up the more detailed reasons for why I believe this in my post. Obviously, I’m not a lawyer, but I still feel like it’s important for even non-lawyers to understand the law — both the letter of it, and also why it’s actually there in the first place.”

He then made a lengthy post on his personal blog outlining why he thinks OpenAI violates Fair Use. Four weeks later he was dead.

Balaji, who grew up in Cupertino, California, studied computer science at UC Berkeley – telling the Times that he wanted to use AI to help society.

“I thought we could invent some kind of scientist that could help solve them,” he told the outlet.

Keep reading