AI Won’t Replace You, But It Will Spy on You

Since the dawn of the Industrial Revolution, workers have had to contend with the inimical effects of technology on their jobs. From the power loom to the personal computer, each wave of automation has not only increased productivity, but also empowered the owners and managers who dictate how these technologies reshape the workplace. Today, workers worldwide are haunted by the specter of artificial intelligence.

Artificial intelligence has been a mainstay in our popular imagination for decades. Prognostications of an AI-driven future range from apocalyptic robot takeovers to thriving post-work societies where people live off the wealth produced by machines. In spite of these daydreams, robots with full human cognition remain firmly within the domain of science fiction.

When people speak of AI today, what they’re most often referring to are machines capable of making predictions through the identification of patterns in large datasets. Despite that relatively rote function, many in the space believe that inevitably AI will become autonomous or rival human intelligence. This raises concerns that robots will one day represent an existential threat to humanity or at the very least take over all of our jobs. The reality is that AI is more likely to place workers under greater surveillance than to trigger mass unemployment.

An overwhelming majority of workers are confident that AI will have a direct impact on their jobs, according to a recent survey by ADP, but they do not agree on how. Some expect it to help them in the workplace, while 42 percent fear that aspects of their jobs will soon be automated.

These concerns are not without merit. Grandiose statements of oncoming job losses made by tech executives in public forums fuel worker anxiety. Feelings of job insecurity are compounded by reports that a majority of US firms are planning to incorporate AI in the workplace within the next year. In fact, Goldman Sachs predicts that generative AI could “substitute up to one-fourth of current work.”

Keep reading

Anti-Deepfake Porn Bill Unanimously Passes the Senate

The anti-Deepfake porn bill, otherwise known as the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, passed the U.S. Senate on Thursday with a unanimous 100 votes.

Introduced by Sen. Dick Durbin (D-IL) and Sen. Lindsey Graham (R-SC), and sponsored in the U.S. House by Rep. Alexandria Ocasio-Cortez (D-NY), the bill would essentially allow victims of Deepfake porn to “sue anyone who knowingly creates, receives, or distributes non-consensual, sexually explicit content made with artificial intelligence,” per Gizmodo. Victims will have a 10-year statute of limitations.

“The bill is one of the first to address the damages caused by AI, which is currently a self-regulated industry,” the outlet noted. “It doesn’t allow for criminal prosecution, but hopefully, it’s a first step towards more federal oversight.”

Alexandria Ocasio-Cortez said she pushed the bill after falling victim to Deepfake porn online, which has become a significant problem in recent years.

Keep reading

AI Mass Surveillance at Paris Olympics Will Continue Even After Games End

The 2024 Paris Olympics is drawing the eyes of the world as thousands of athletes and support personnel and hundreds of thousands of visitors from around the globe converge in France.

It’s not just the eyes of the world that will be watching. Artificial intelligence (AI) systems will be watching, too.

Government and private companies will be using advanced AI tools and other surveillance tech to conduct pervasive and persistent surveillance before, during and after the Games.

The Olympic world stage and international crowds pose increased security risks so significant that in recent years authorities and critics have described the Olympics as the “world’s largest security operations outside of war.”

The French government, hand in hand with the private tech sector, has harnessed that legitimate need for increased security as grounds to deploy technologically advanced surveillance and data-gathering tools.

Its surveillance plans to meet those risks, including the controversial use of experimental AI video surveillance, are so extensive that the country had to change its laws to make the planned surveillance legal.

The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister’s office has negotiated a classified provisional decree permitting the government to significantly ramp up traditional, surreptitious surveillance and information-gathering tools for the duration of the Games.

These include wiretapping; collecting geolocation, communications and computer data; and capturing greater amounts of visual and audio data.

I am a law professor and attorney, and I research, teach and write about privacy, artificial intelligence and surveillance. I also provide legal and policy guidance on these subjects to legislators and others.

Increased security risks can and do require increased surveillance. This year, France has faced concerns about its Olympic security capabilities and credible threats around public sporting events.

Keep reading

Google Plans New Content-Scanning Censorship Tech

Earlier in the year, Google filed an application to patent new methods, systems, and media for what the giant calls “identifying videos containing objectionable content” that are uploaded to a social site or video service.

YouTube, for example, though the filing doesn’t explicitly name the platform.

The patent application, which has just been published this month, is somewhat different from other automated “methods and systems” Google and other giants, notably Microsoft, already have to power their censorship apparatus; with this one, the focus is more on how AI can be added to the mix.

More and more countries are introducing censorship laws that make the speed at which content is removed or accounts are blocked a major requirement for social media companies. Google could have this in mind: the patent’s stated purpose is to improve the speed of detecting objectionable content “for potential removal.”

No surprise here, but what should be the key question, namely what counts as “objectionable content,” is answered less with a definition than with a list that can be further expanded and variously interpreted. The list includes violence, pornography, objectionable language, animal abuse, and, as the cherry on top, “and/or any other type of objectionable content.”

The filing details how Google’s new system works, and we equally unsurprisingly learn that AI here means machine learning (ML) and neural networks. This technology is supposed to mimic the human brain but comes down to a series of equations, differentiated from ordinary algorithms by “learning” about what an image (or a video in this case) is, pixel by pixel.
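The filing doesn’t disclose implementation details, but the pixel-by-pixel “learning” it describes is standard supervised machine learning. A minimal sketch of the idea, using a toy single-neuron classifier over pixel values (all data and names here are illustrative, not taken from the patent):

```python
import math

def train(frames, labels, epochs=200, lr=0.1):
    """Fit one weight per pixel from labeled examples (label 1 = flagged)."""
    n = len(frames[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(frames, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))  # sigmoid: probability of "flagged"
            err = p - y                 # how wrong the prediction was
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, frame):
    """Probability that a new frame is objectionable, per the learned weights."""
    z = sum(wi * xi for wi, xi in zip(w, frame)) + b
    return 1 / (1 + math.exp(-z))

# Four toy 4-pixel "frames": bright ones labeled objectionable (1), dark ones not (0).
frames = [[0.9, 0.8, 0.9, 0.7], [0.1, 0.2, 0.1, 0.0],
          [0.8, 0.9, 0.7, 0.9], [0.0, 0.1, 0.2, 0.1]]
labels = [1, 0, 1, 0]
w, b = train(frames, labels)
print(score(w, b, [0.9, 0.9, 0.8, 0.8]))  # well above 0.5: flagged
print(score(w, b, [0.1, 0.0, 0.1, 0.2]))  # well below 0.5: not flagged
```

Real systems replace this single neuron with deep convolutional networks trained on millions of labeled videos, but the core loop (score, compare with the label, nudge the weights) is the same.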

Keep reading

Danger: Artificial Intelligence Data Centers Are Overwhelming The Global Electrical Grid

The artificial intelligence boom has had such a profound effect on big tech companies that their energy consumption, and with it their carbon emissions, have surged.

The spectacular success of large language models such as ChatGPT has helped fuel this growth in energy demand. At 2.9 watt-hours per ChatGPT request, AI queries require about 10 times the electricity of traditional Google queries, according to the Electric Power Research Institute, a nonprofit research firm. Emerging AI capabilities such as audio and video generation are likely to add to this energy demand.
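To put the EPRI figure in perspective, a back-of-envelope calculation helps (the daily request volume below is an assumed number for illustration, not a reported one):

```python
WH_PER_AI_QUERY = 2.9                  # EPRI: watt-hours per ChatGPT request
WH_PER_SEARCH = WH_PER_AI_QUERY / 10   # AI queries use ~10x a traditional search
REQUESTS_PER_DAY = 100_000_000         # assumed volume, for illustration only

ai_mwh = WH_PER_AI_QUERY * REQUESTS_PER_DAY / 1e6      # Wh -> MWh
search_mwh = WH_PER_SEARCH * REQUESTS_PER_DAY / 1e6

print(f"AI queries:     {ai_mwh:,.0f} MWh/day")
print(f"Search queries: {search_mwh:,.0f} MWh/day")
```

At that assumed volume, the AI workload draws roughly 290 MWh per day, ten times the search equivalent; the gap only widens as audio and video generation are added.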

The energy needs of AI are shifting the calculus of energy companies. They’re now exploring previously untenable options, such as restarting a nuclear reactor at the Three Mile Island power plant, site of the infamous 1979 partial meltdown. The reactor slated for restart is the plant’s undamaged unit, which was shut down in 2019.

Data centers have had continuous growth for decades, but the magnitude of growth in the still-young era of large language models has been exceptional. AI requires a lot more computational and data storage resources than the pre-AI rate of data center growth could provide. Data storage has become an increasingly concerning problem for electrical grids over the years. With the fast rise of artificial intelligence and its demands, experts warn that many grids are already near capacity.

Keep reading

‘Amazing’ new technology set to transform the search for alien life

It has produced one of the most consistent sets of negative results in the history of science. For more than 60 years, researchers have tried to find a single convincing piece of evidence to support the idea that we share the universe with other intelligent beings. Despite these decades of effort, they have failed to make contact of any kind.

But the hunt for alien civilisations may be entering a new era, researchers believe. Scientists with Breakthrough Listen, the world’s largest scientific research programme dedicated to finding alien civilisations, say a host of technological developments are about to transform the search for intelligent life in the cosmos.

These innovations will be outlined at the group’s annual conference, which is to be held in the UK for the first time, in Oxford, this week. Several hundred scientists, from astronomers to zoologists, are expected to attend.

Astronomer Steve Croft, a project scientist with Breakthrough Listen, said: “There are amazing technologies that are under development, such as the construction of huge new telescopes in Chile, Africa and Australia, as well as developments in AI. They are going to transform how we look for alien civilisations.”

Keep reading

Ammo Vending Machines Arrive At Grocery Stores In Red States 

Nothing says ‘Merica like supermarkets with automated vending machines stocked with ammunition. A select number of supermarkets across Alabama and Oklahoma have these new machines. This means you can leave the store with milk, eggs, and boxes of 9mm and .223 rounds. 

American Rounds installed AI-powered ammunition vending machines in several Alabama and Oklahoma supermarkets. These vending machines are said to feature built-in AI technology, card-scanning capability, and facial recognition software to verify that buyers are 21 or older and match the identity on the license.
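American Rounds hasn’t published how its machines are implemented, so the following is purely a hypothetical sketch of the verification flow as described: scan the ID, check the age, and require a facial match against the ID photo. The threshold and function names are assumptions.

```python
from datetime import date

MIN_AGE = 21
MATCH_THRESHOLD = 0.9  # assumed similarity cutoff for the face match

def age(birthdate, today):
    """Whole years between birthdate and today."""
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1  # birthday hasn't happened yet this year
    return years

def approve_sale(id_birthdate, face_similarity, today=None):
    """face_similarity: 0..1 score from a face-matching model (stubbed here)."""
    today = today or date.today()
    return age(id_birthdate, today) >= MIN_AGE and face_similarity >= MATCH_THRESHOLD

print(approve_sale(date(2000, 1, 15), 0.97, today=date(2024, 7, 1)))  # True
print(approve_sale(date(2005, 1, 15), 0.97, today=date(2024, 7, 1)))  # False: under 21
```

A real deployment would also have to handle state-specific rules and failure modes such as a low-confidence face match, presumably by refusing the sale.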

“Our automated ammo dispensers are accessible 24/7, ensuring that you can buy ammunition on your own schedule, free from the constraints of store hours and long lines,” American Rounds notes on its website. 

American Rounds lists six supermarkets with the new retail automated ammo dispensers: two Fresh Value stores in Alabama and four Super C Mart stores in Oklahoma.

In an interview with Newsweek, Grant Magers, CEO of American Rounds, said that the company’s fleet of AI-powered ammunition vending machines has recently expanded to eight machines across four states.

“We have over 200 store requests for AARM [Automated Ammo Retail Machine] units covering approximately nine states currently and that number is growing daily,” Magers said. 

Keep reading

Antony Blinken Reveals Government’s AI Plan To Censor Free Speech

U.S. Secretary of State Antony Blinken admitted last week that the State Department is preparing to use artificial intelligence to “combat disinformation,” amidst a massive government-wide AI rollout that will involve the cooperation of Big Tech and other private-sector partners.

At a speaking engagement streamed last week with the State Department’s chief data and AI officer, Matthew Graviss, Blinken gushed about the “extraordinary potential” and “extraordinary benefit” AI has on our society, and “how AI could be used to accelerate the Sustainable Development Goals which are, for the most part, stalled.”

He was referring to the United Nations Agenda 2030 Sustainable Development goals, which represent a globalist blueprint for a one-world totalitarian system. These goals include the Gaia-worshipping climate agenda, along with new restrictions on free speech, the freedom of movement, wealth transfers from rich to poor countries, and the digitization of humanity. Now Blinken is saying these goals could be jumpstarted by employing advanced artificial intelligence technologies.

Listen to Blinken, in the video below, openly describe how the government will use AI to clamp down on the free speech of citizens. (Fast-forward to the 3-minute mark and watch through the 7:07 mark.)

Keep reading

Google News Ranks AI-Generated Spam as Top Results Above Actual Stories

Despite recent algorithm changes aimed at improving search quality, Google News continues to face challenges with AI-generated spam content appearing in its top results, frequently outranking original reporting.

Wired reports that as artificial intelligence becomes more commonplace across the internet, Google News finds itself grappling with a persistent issue: the proliferation of AI-generated spam content in its search results. This problem persists even after the search giant implemented significant algorithm updates earlier this year to combat unhelpful and unoriginal content.

The issue came to light when a journalist, while searching for information on Adobe’s AI policies, stumbled upon a plagiarized article at the top of Google News results. The article, published on a website called Syrus #Blog, was a near-exact copy of a piece originally published by Wired, with only minor alterations in phrasing. Further investigation revealed that this was not an isolated incident, as similar plagiarized content was found in multiple languages, affecting articles from various reputable news sources.

Elizabeth Tucker, a director of product management at Google, had announced in April that the company had completed a major rollout of changes to its search engine ranking system. These updates were intended to reduce low-quality, unoriginal content in search results by 45 percent. However, the persistence of AI-generated spam in Google News suggests that the problem is far from solved.

Lily Ray, senior director of search engine optimization at Amsive, describes this as a “rampant problem” on Google. She notes that many clients have reported instances of their articles being rewritten and republished using AI tools. This practice not only undermines the work of original content creators but also poses a significant challenge to Google’s efforts to provide high-quality, reliable information to its users.

The creators of Syrus #Blog, when contacted, claimed that their content creation process involves AI tools that “analyze and synthesize information from various sources while always respecting intellectual property.” They pointed to a single hyperlink at the bottom of their articles as sufficient attribution. However, SEO experts and content creators argue that this practice falls far short of ethical standards and violates Google’s spam policies.

Keep reading

AI Garbage Pollutes Internet: 42% of Overall Web Traffic Now Generated by Bots

A new report from Akamai Technologies has unveiled alarming statistics about the prevalence and impact of web scraper bots on businesses, particularly in the e-commerce sector. An astounding 42 percent of overall web traffic is now generated by bots, according to the cloud computing giant.

The report reveals that as the internet becomes increasingly dominated by automated traffic, businesses are facing a growing threat from malicious bots. Akamai Technologies, a leading cloud company, has released its latest State of the Internet (SOTI) report, titled “Scraping Away Your Bottom Line: How Web Scrapers Impact e-commerce,” shedding light on the pervasive issue of web scraping bots and their detrimental effects on online businesses.

The report’s findings are striking: bots now account for a staggering 42 percent of overall web traffic, with 65 percent of these bots classified as malicious. This surge in bot activity is particularly concerning for the e-commerce sector, which relies heavily on revenue-generating web applications and is therefore most vulnerable to high-risk bot traffic.
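Combining the report’s two headline numbers gives a sense of scale: if 42 percent of traffic is bots and 65 percent of those bots are malicious, then roughly 27 percent of all web traffic is malicious bot activity.

```python
bot_share = 0.42                 # share of all web traffic that is bots
malicious_share_of_bots = 0.65   # share of that bot traffic that is malicious

malicious_share_of_all = bot_share * malicious_share_of_bots
print(f"{malicious_share_of_all:.1%} of all web traffic is malicious bots")
```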

Patrick Sullivan, CTO of Security Strategy at Akamai, emphasized the seriousness of the issue, stating, “Bots continue to present massive challenges resulting in multiple pain points for app and API owners.” He highlighted the various ways in which scraper bots can harm businesses, including data theft and brand impersonation.

The report identifies several key ways in which scraper bots are being weaponized against businesses. These include competitive intelligence gathering, inventory hoarding, and the creation of imposter sites. Such activities not only impact a company’s bottom line but also degrade the customer experience.

One of the most concerning developments highlighted in the report is the rise of AI botnets. These advanced bots have the capability to discover and scrape unstructured data and content, even when it’s presented in inconsistent formats or locations. Moreover, they can leverage business intelligence to enhance their decision-making processes, making them increasingly difficult to detect and mitigate.

Keep reading