The Great AI Invasion: Given Enough Time, Artificial Intelligence Would Take Over Every Area Of Our Lives

Artificial intelligence is changing our world at a pace that is absolutely breathtaking.  If you had asked me a decade ago whether I would live to see artificial intelligence create a world-class piece of art or a full-length feature film, I would have said no way.  But now those are simple tasks for artificial intelligence to accomplish.  So what is going to happen once AI becomes millions of times smarter and millions of times more powerful than it is today?  Given enough time, AI would take over every area of our lives.  Our world is definitely crazy right now, but fifty years from now it could resemble something out of an extremely bizarre science fiction novel if AI is allowed to continue to develop at an exponential rate.

Unfortunately, only a very small minority of the population is even concerned about the potential dangers posed by AI, and that is a problem.

Needless to say, the growth of AI has enormous implications for our economy.

AI can already perform most simple tasks much better and much faster than human workers can, and multiple studies have concluded that millions of jobs are at risk of being lost.  The following comes from Fox News:

For example, in March 2023, technology firm OpenAI released a report that found at least 80% of the U.S. labor force could have at least 10% of their work-related tasks affected by the introduction of GPT, while another 19% of employees may see at least 50% of these work-related tasks impacted. While GPT's influence spans all wage levels, higher-income jobs potentially face the greatest exposure, OpenAI concludes.

Also in March 2023, researchers at investment bank Goldman Sachs, after collecting data on occupationally oriented tasks in Europe and the U.S., found that roughly two-thirds of current occupations are exposed to varying degrees of generative AI automation (such as that found in ChatGPT), and that AI could substitute for nearly one-fourth of current work performed.

In July 2023, the McKinsey Global Institute issued a report estimating that without generative AI, automation could take over tasks accounting for 21.5% of the hours worked in the U.S. economy by 2030; but with generative AI, that share increased to 29.5%.

So what would happen to all of the workers that would no longer be needed once AI starts taking over most of our jobs?

I think that is a question that all of us should be asking.

Artificial intelligence also threatens to transform our personal relationships.

Keep reading

Artificial Intelligence Goes to War

Uh… gulp… you thought it was bad when that experienced pilot ejected from one of the Air Force’s hottest “new” planes, the F-35 combat fighter, near — no, not China or somewhere in the Middle East — but Charleston, South Carolina. The plane then flew on its own for another 60 miles before crashing into an empty field. And that was without an enemy in sight.

Perhaps we should just be happy that an F-35 ever even made it into the air, given its endless problems in these years. After all, as Dan Grazier of the Center for Defense Information wrote, it’s now “the largest and most expensive weapons program in history.” Yet when it comes to something as significant as “mission availability,” according to the Congressional Budget Office, only about 26% of all F-35s, each of which now costs an estimated $80 million to produce and $44,000 an hour to fly, are available at any moment. Not exactly thrilling, all in all.

Keep reading

Washington U. Prof: AI Girlfriends Are Ruining a Generation of Men

The rise of AI girlfriends is ruining an entire generation of young men by fostering a silent epidemic of loneliness, according to Washington University Professor of Data Science Liberty Vittert.

There are now apps that offer virtual girlfriends for men who want an AI lover to talk to them, allow them to live out their sexual fantasies, and learn, through data, exactly what they like, according to an op-ed written by Washington U. professor Liberty Vittert and published by The Hill.

These apps reportedly have millions of users, who are able to choose the physical attributes and personalities of their virtual girlfriends.

Some of the artificial lovers are even based on real people. One online influencer, for example, created an AI bot of herself and gained over 1,000 users in less than a week. She believes the AI girlfriend version of herself can generate $5 million a month.

Keep reading

How the “Surveillance AI Pipeline” Literally Objectifies Human Beings

The vast majority of computer vision research leads to technology that surveils human beings, a new preprint study that analyzed more than 20,000 computer vision papers and 11,000 patents spanning three decades has found. Crucially, the study found that computer vision papers often refer to human beings as “objects,” a convention that both obfuscates how common surveillance of humans is in the field, and objectifies humans by definition.

“The studies presented in this paper ultimately reveal that the field of computer vision is not merely a neutral pursuit of knowledge; it is a foundational layer for a paradigm of surveillance,” the study’s authors wrote. The study, which has not been peer-reviewed yet, describes what the researchers call “The Surveillance AI Pipeline,” which is also the title of the paper.

The study’s lead author Pratyusha Ria Kalluri told 404 Media on a call that she and her co-authors manually annotated 100 computer vision papers and 100 patents that cited those papers. During this process, the study found that 90 percent of the papers and patents extracted data about humans, and 68 percent reported that they specifically enable extracting data about human bodies and body parts. Only 1 percent of the papers and patents stated they target only non-humans.

Keep reading

NASA to Use Artificial Intelligence to Better Track and Monitor UFOs

As inklings of extraterrestrial life continue to make headlines, the National Aeronautics and Space Administration (NASA) will begin to use advancements in artificial intelligence to better monitor the skies in the hopes that non-human eyes may help them understand unidentified flying object (UFO) sightings and other events that may indicate a non-human presence.

NASA said that artificial intelligence (AI) will be “essential” in fully understanding the data surrounding unidentified anomalous phenomena and their origins in talks that followed the release of their highly anticipated UFO report.

The report did not conclude one way or the other whether NASA believes UFOs are of extraterrestrial origin, but in a press briefing on September 14, the Administrator of NASA emphasized that the agency would continue to use all the resources at its disposal to prove or disprove that the unidentified objects showing up all over American military radar and otherwise baffling the world’s best scientists are of extraterrestrial origin. These resources now include AI programs that can comb through very large datasets for information a human might miss or take much longer to find.

“We will use AI and machine learning to search the skies for anomalies… and will continue to search the heavens for habitable reality,” NASA Administrator Bill Nelson said. “AI is just coming on the scene to be explored in all areas, so why should we limit any technological tool in analyzing, using data that we have?”

NASA administrators emphasized both in the report and the press briefing that data surrounding unidentified anomalous phenomena (UAPs) and UFOs is often very hard to analyze or quantify, partly because of the nature of the topic and partly because it’s a very large swath of data. By using new tools made possible by artificial intelligence, NASA believes it can find patterns or anomalies in data that humans have thus far been unable to find.

Keep reading

In win for AI, Newsom vetoes driverless truck ban

Gov. Gavin Newsom made his choice Friday when confronted with a bill that pitted organized labor against the tech and business community promoting artificial intelligence.

Newsom chose AI. 

He vetoed AB316, which would have required a human driver to be present in driverless heavy-duty trucks for at least five years while the state evaluates their safety.

Newsom called the proposed legislation “unnecessary” for the oversight of autonomous trucks, writing in his veto message that “existing law provides sufficient authority to create the appropriate regulatory framework.”

Newsom’s veto goes against the wishes of the International Brotherhood of Teamsters union, which has 250,000 members in the state and whose president, Sean O’Brien, traveled to California this week for rallies in favor of the legislation. Union officials said driverless trucks eventually could put 350,000 drivers out of work in California.

In his veto message, Newsom said he was directing the state’s Labor and Workforce Development Agency “to lead a stakeholder process next year to review and develop recommendations to mitigate the potential employment impact of testing and deployment of autonomous heavy-duty vehicles.”

The veto drew a swift rebuke Friday from Lorena Gonzalez, leader of the 2.1 million-member California Labor Federation, who called out Newsom for siding with his business world allies over his friends in labor. 

Keep reading

PENTAGON’S BUDGET IS SO BLOATED THAT IT NEEDS AN AI PROGRAM TO NAVIGATE IT

AS TECH LUMINARIES like Elon Musk issue solemn warnings about artificial intelligence’s threat of “civilizational destruction,” the U.S. military is using it for a decidedly more mundane purpose: understanding its sprawling $816.7 billion budget and figuring out its own policies.

Thanks to its bloat and political wrangling, the annual Department of Defense budget legislation includes hundreds of revisions and limitations telling the Pentagon what it can and cannot do. To make sense of all those provisions, the Pentagon created an AI program, codenamed GAMECHANGER. 

“In my comptroller role, I am, of course, the most excited about applying GAMECHANGER to gain better visibility and understanding across our various budget exhibits,” said Gregory Little, the deputy comptroller of the Pentagon, shortly after the program’s creation last year. 

“The fact that they have to go to such extraordinary measures to understand what their own policies are is an indictment of how they operate,” said William Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft and expert on the defense budget. “It’s kind of similar to the problem with the budget as a whole: They don’t make tough decisions, they just layer on more policies, more weapons systems, more spending. Between the Pentagon and Congress, they’re not really getting rid of old stuff, they’re just adding more.”

Keep reading

Senators Call For Government Power To Hand Out Licenses To AI Companies, Curb “Deceptive” Election-Related AI and “Deepfakes”

This week, a Senate Judiciary hearing under the umbrella of the Privacy, Technology and Law Subcommittee became the stage for bipartisan senators to unveil plans targeting the allegedly looming threats of manipulative artificial intelligence, especially in the realm of elections. A framework proposed by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) foresees a new government agency tasked with issuing licenses to entities working with AI systems.

Simultaneously, Senator Amy Klobuchar (D-Minn.) unveiled preliminary details of upcoming legislation, crafted in tandem with Hawley, along with Senators Chris Coons (D-Del.), and Susan Collins (R-Maine). This new proposal targets the prospects of AI technologies pervading the electoral process.

Apprehension regarding deceptive generative AI undermining democratic elections took center stage during the Senate hearing, with Klobuchar expressing a sense of urgency given the rapidly approaching electoral calendar.

Specifically, the newly minted legislation, coined the Protect Elections from Deceptive AI Act, is envisioned to clamp down on AI-assisted impersonation of federal political aspirants in campaign ads.

Rendered as an amendment to the Federal Election Campaign Act of 1971, this legislation provides a legal recourse in federal court for targeted candidates to counter harmful AI-generated deceptive content.

Keep reading

Ex-Google executive fears AI will be used to create ‘more lethal pandemics’

A former Google executive who helped pioneer the company’s foray into artificial intelligence fears the technology will be used to create “more lethal pandemics.”

Mustafa Suleyman, co-founder and former head of applied AI at Google’s DeepMind, said the use of artificial intelligence will enable humans to access information with potentially deadly consequences.

“The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible,” Suleyman said on The Diary of a CEO podcast on Monday.

“They can spread faster or [be] more lethal… They cause more harm or potentially kill, like a pandemic,” he added, calling for tighter regulation of AI software.

Suleyman said his biggest fear is that within the next five years a “kid in Russia” could genetically engineer a pathogen and unleash it so as to trigger a pandemic that’s “more lethal” than anything the world has seen thus far.

“That’s where we need containment. We have to limit access to the tools and the know-how to carry out that kind of experimentation,” he said.

Keep reading

Amazon Is Filled with Bogus AI-Generated Mushroom Foraging Books that Could Cause Poisoning Deaths

A surge in AI-generated mushroom foraging books on Amazon has raised alarms among experts, who warn that such guides, filled with misinformation about poisonous mushrooms, could pose life-threatening risks to consumers.

404 Media reports that The New York Mycological Society has raised an alarm over the increasing number of AI-generated mushroom foraging books appearing on Amazon. According to the society, these books could pose serious risks to public health. “These AI-generated foraging books could actually kill people if they eat the wrong mushroom because a guidebook written by an AI prompt said it was safe,” the NYMS stated on social media.

Sigrid Jakob, president of the New York Mycological Society, elaborated on the risks involved in using AI-generated foraging guides. “There are hundreds of poisonous fungi in North America and several that are deadly,” Jakob said. “They can look similar to popular edible species. A poor description in a book can mislead someone to eat a poisonous mushroom.”

Text detection tools have indicated that many of these books are predominantly written by AI, with some showing more than 85 percent AI-generated content. Despite this, these books are often marketed as if they were written by humans, making it challenging for consumers to identify their true origin.

In response to the issue, Amazon has removed some of the flagged AI-generated books from its platform. “All publishers in the store must adhere to our content guidelines, regardless of how the content was created,” said Amazon spokesperson Ashley Vanicek. “We’re committed to providing a safe shopping and reading experience for our customers and we take matters like this seriously.”

Keep reading