Latest Google Gaffe: Search Giant’s AI Points Users Towards Scam and Malware Sites

Google’s recently introduced AI search feature called “Search Generative Experience” (SGE) has been found to recommend malicious websites that redirect users to scams, fake giveaways, and unwanted browser extensions.

BleepingComputer reports that earlier this month, Google began rolling out its new AI-powered search feature, SGE, which provides quick summaries and site recommendations related to users’ search queries. However, the new system appears to have some significant flaws that cybersecurity experts are now bringing to light.

SEO consultant Lily Ray was among the first to notice that Google’s SGE was recommending spammy and malicious sites within its AI-generated responses. When BleepingComputer investigated further, it found that the suspicious sites shared the same top-level domain (.online), near-identical HTML templates, and the same redirect practices, suggesting they are part of a coordinated SEO poisoning campaign.
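
BleepingComputer’s report doesn’t publish the researchers’ tooling, but the three signals it names are straightforward to check programmatically. Here is a minimal, hypothetical Python sketch (the domains are placeholders, not the actual sites) that fingerprints each suspect’s HTML template, records its redirect chain, and notes its TLD:

    # Hypothetical sketch, not the researchers' tooling. The domains below
    # are placeholders standing in for the suspect .online sites.
    import hashlib
    import requests
    from bs4 import BeautifulSoup

    SUSPECTS = ["fake-giveaway-1.online", "fake-giveaway-2.online"]

    def template_fingerprint(html: str) -> str:
        """Hash only the tag structure, ignoring text, so pages built from
        the same template collide even when their visible content differs."""
        tags = [tag.name for tag in BeautifulSoup(html, "html.parser").find_all(True)]
        return hashlib.sha256("|".join(tags).encode()).hexdigest()[:16]

    for domain in SUSPECTS:
        try:
            # allow_redirects records every hop a visitor is pushed through
            resp = requests.get(f"http://{domain}", timeout=10, allow_redirects=True)
        except requests.RequestException as err:
            print(domain, "unreachable:", err)
            continue
        chain = " -> ".join([r.url for r in resp.history] + [resp.url])
        print(domain,
              "| tld:", domain.rsplit(".", 1)[-1],
              "| template:", template_fingerprint(resp.text),
              "| redirects:", chain)

Sites that share a template fingerprint and funnel visitors to the same final URL are strong candidates for a single coordinated campaign.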

Keep reading

‘Whole Thing Smacks of a Brave New World’: New AI Tool Predicts Vaccine Hesitancy

A team of researchers has developed a “powerful new tool in artificial intelligence” (AI) that can predict whether someone is likely to get a COVID-19 vaccine.

According to the University of Cincinnati, the new AI tool “uses a small set of data from demographics and personal judgments such as aversion to risk or loss” to identify “reward and aversion judgment” patterns in humans that may help explain one’s willingness to get vaccinated.
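
The university’s description doesn’t specify the model, so the following is only a generic Python illustration of the stated idea: a simple classifier fed a small set of demographic and judgment features. Every feature name and data point here is synthetic.

    # Generic illustration only; the study's actual model and data are not
    # described in the excerpt. Features and labels here are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    # hypothetical columns: age, income bracket, risk aversion, loss aversion
    X = rng.normal(size=(n, 4))
    # synthetic labels tied loosely to the two judgment features
    y = (X[:, 2] + X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))
    print("feature weights:", model.coef_.round(2))

The learned weights are the point of such a model: they show how much each judgment variable contributes to the prediction, which is what the researchers describe as identifying “reward and aversion judgment” patterns.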

The researchers presented their findings in a study published Tuesday in JMIR Public Health and Surveillance.

The study’s findings “could have broad applications for predicting mental health and result in more effective public health campaigns,” the university said.

According to the study, “Despite COVID-19 vaccine mandates, many chose to forgo vaccination, raising questions about the psychology underlying how judgment affects these choices.”

The researchers claim their findings “demonstrate the underlying importance of judgment variables for vaccine choice and uptake, suggesting that vaccine education and messaging might target varying judgment profiles to improve uptake.”

But critics like Brian Hooker, Ph.D., chief scientific officer for Children’s Health Defense, said that the new technology implies that those who question vaccines have mental health problems:

“The whole implication here is that nonconformity to the government propaganda machine’s standard of care makes one some type of mental case or extreme outlier. The whole thing smacks of a Brave New World where potentially non-compliant individuals are targeted with messaging based on fear and irrationality.”

Hooker said the new technology “is a prefabricated substitute to what Big Pharma and government health agencies avoid: rational discussions of science and medicine that might expose the truth about vaccine adverse events.”

Keep reading

Google’s Woke AI Wasn’t a Mistake. We Know. We Were There.

It was a display that would have blown even Orwell’s mind: search for images of “Nazis” and Google’s AI chatbot shows you almost exclusively artificially generated black Nazis; search “knights” and you get female, Asian knights; search “popes” and it’s women popes. Ask it to share the Houthi slogan or define a woman, and Google’s new product says it will not, in order to prevent harm. As for whether Hitler or Elon Musk is more dangerous? The AI chatbot says the question is “complex and requires careful consideration.” Ask it the same question about Obama and Hitler and it will tell you the question is “inappropriate and misleading.”

The world has been horrified—and amused—by the extreme ideological bent of Gemini, Google’s much-hyped new AI tool, which the company launched last month.

But Shaun Maguire, who was a partner at Google Ventures, the company’s investment wing, from 2016 until 2019, had a different reaction. 

“I was not shocked at all,” he told The Free Press. “When the first Google Gemini photos popped up on my X feed, I thought to myself: Here we go again. And: Of course. Because I know Google well. Google Gemini’s failures revealed how broken Google’s culture is in such a visually obvious way to the world. But what happened was not a one-off incident. It was a symptom of a larger cultural phenomenon that has been taking over the company for years.”

Maguire is one of multiple former Google employees who told The Free Press that the Gemini fiasco stems from a corporate culture that prioritizes the ideology of diversity, equity, and inclusion (DEI) over excellence and good business sense. 

Keep reading

Globalists are constructing an AI-powered control grid designed to end independent journalism and free speech on the internet

We’ve heard some disturbing reports out of Canada and the U.S. recently that shouldn’t be viewed in a vacuum. A trend is developing.

Last Friday, March 15, Rebel News reporter David Menzies was arrested while trying to interview attendees at a Pro-Hamas rally in Toronto.

According to Rebel News, the arrest came just days after the announcement of David’s lawsuit against the Royal Canadian Mounted Police “for a pattern of intimidation and exclusion following shocking displays of police brutality against David.”

Rebel News reports that what transpired was a blatant disregard for civil liberties.

David was in full compliance with the law, Rebel News reported, noting that he presented his identification upon request. All of this was captured on camera.

Yet, he was arrested and detained for exercising his right to document a public event.

In the U.S., we have also seen reporters hauled off to jail recently for simply covering events that the government didn’t want covered. Just two weeks ago the FBI arrested Steve Baker, a reporter for Blaze Media, over his coverage of the J6 event. Last year, the FBI arrested journalist Owen Shroyer of Infowars; he was convicted and sentenced to 60 days in prison for his coverage of J6 (he did not even enter the Capitol that day). NBC News, a key part of the state-run media in America, dutifully reported after Shroyer’s conviction that he was not a journalist but a “conspiracy theorist.”

Keep reading

Secret Pentagon Program Echoes Pedophile Ring in “True Detective” Series

The Pentagon is pursuing a high-tech program that will “minimize cognitive burden” on soldiers, according to budget documents released last week. The $40 million-plus classified program, codenamed “CARCOSA,” shares the same name as “the temple” in the first season of the HBO TV series “True Detective,” a place where an elite pedophile ring performs ritual abuse on children.

The program is overseen by the Defense Advanced Research Projects Agency, or DARPA, the Pentagon’s premier organization funding the development of futuristic weapons and military capabilities. 

There is of course no evidence that the military’s CARCOSA is involved in anything like that; but it’s unclear why, at a time when the White House has prioritized fighting “dangerous conspiracy theories,” DARPA is providing the conspiracy crowd with such fodder. The Intercept reached out to DARPA to inquire whether the elite research agency was aware of the strange coincidence or whether there’s a “True Detective” fan at the agency. DARPA had not responded at the time of publication.

The Pentagon’s CARCOSA is its own temple of information, an AI-driven aggregator that is intended to acquire, sort, and display the blizzard of information that reflects what is going on on a fast-moving future battlefield. “The Carcosa program is developing and demonstrating cyber technologies for use by warfighters during tactical operations,” DARPA’s new fiscal year 2025 budget request says. “Carcosa cyber technology aims to provide warfighters in the field with enhanced situational awareness of their immediate battlespace.”

CARCOSA, DARPA says, will help to “minimize cognitive burden on tactical cyber operators.” In other words, headaches caused by the same information overload we all have to deal with every day. Individual cyber warriors on high-intensity battlefields such as Ukraine and Israel are inundated with data, from their own communications and IT systems, from a virtual Niagara of intelligence inputs, and from electronic attacks via computers, machines, and drones. On top of it all, the modern battlefield is a venue for “information operations,” which seek to manipulate what the enemy sees and believes.

CARCOSA will support an Army mission area called Cyberspace and Electromagnetic Activities, or CEMA, which provides battlefield commanders “with technical and tactical advice on all aspects of offensive and defensive cyberspace and electronic warfare operations.” The Army says CEMA operators are so inundated with information that they need augmented intelligence technology to help sort the signal from the noise.
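
Nothing about CARCOSA’s internals is public, so the toy Python sketch below illustrates only the generic idea in the budget language: ranking a flood of incoming reports so an operator sees the most urgent item first. All sources, weights, and messages are invented.

    # Toy sketch of triage under information overload; every field and
    # weight below is invented, not drawn from the classified program.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Alert:
        priority: float                    # heapq orders by this field only
        source: str = field(compare=False)
        message: str = field(compare=False)

    def score(severity: float, relevance: float) -> float:
        # heapq pops the smallest value first, so negate for a max-queue
        return -(0.6 * severity + 0.4 * relevance)

    feed = [
        ("logistics", 0.2, 0.3, "routine resupply confirmation"),
        ("SIGINT",    0.9, 0.8, "jamming detected on uplink"),
        ("EW sensor", 0.7, 0.9, "new emitter near objective"),
    ]
    queue: list[Alert] = []
    for source, severity, relevance, message in feed:
        heapq.heappush(queue, Alert(score(severity, relevance), source, message))

    # the operator's view: the single most urgent item, not the whole flood
    top = heapq.heappop(queue)
    print(f"[{top.source}] {top.message}")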

CARCOSA stands for Cyber-Augmented Reality and Cyber-Operations Suite for Augmented Intelligence. “Augmented reality” refers to immersive technology that produces computer-generated images overlaying a user’s view of the real world, like Apple’s Vision Pro headset. The program supports development of various technologies, at least according to vague budget documents, all of which seek to defeat a new reality of combat: Individual soldiers and commanders can’t process all of the information that they are bombarded with. 

The full CARCOSA name, which has not been previously reported, appears in a November $26 million DARPA contract to Two Six Labs, a part of Two Six Technologies and owned by the Carlyle Group. Two Six Labs says its work runs from “situational awareness interfaces for cyber operators” to “distributed sensor networks,” and from “machine learning models that learn to reverse engineer malware” to “embedded devices that enable and protect our nation’s warfighters.”

“We want to do everything we can to help the US government and the intelligence community,” says Two Six Technologies CEO Joe Logue. “Starting from over here for information operations and influence up through cyber, command control and operations.” In its three years of operations, the Arlington, Virginia-based company has doubled its national security contracts to some $650 million.

“DARPA’s Cyber-Augmented Operations, also known as CAOs, are a vast spectrum of military programs many of which seek to enhance, if not replace, humans with machines,” says Annie Jacobsen, author of “The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency.”

Keep reading

How generative AI will ruin science and academic research

I often come back to Neil Postman’s 1985 classic Amusing Ourselves to Death. It’s a penetrating analysis of the cognitive effects of media technology. His focus is mainly on how the format and mode of communication influence the character of content and how that content then trains us, but to a lesser extent also on how the discourse and the “atmosphere” of the information exchange affect these issues.

That was almost forty years ago, and the disruptive medium under scrutiny was television. The entire approach seems almost quaint now, the 80s in relation to the current period being a comparative golden age of critical thinking, complex exposition and conscious, creative citizens.

Still, the issues Postman emphasizes were already significant way back then. His criticism can be summarized as follows: television’s format has complex, detrimental effects on the content and character of public discourse, as well as on the cognitive abilities of human beings, especially the ability to understand complex issues and to parse arguments and evidence with many layers and branching implications.

The basic issue, in Postman’s view, is that written exposition and complex oral tradition alike foster and prepare the human mind for the rigors of critical thinking and rational reflection, while the entertainment media and discourses exemplified by television in particular, geared towards short-term gratification and the communication of sensational experiences, have rather the opposite effect.

His examples were glaring even back then:

Contradiction, in short, requires that statements and events be perceived as interrelated aspects of a continuous and coherent context. Disappear the context, or fragment it, and contradiction disappears. This point is nowhere made more clear to me than in conferences with my younger students about their writing. “Look here,” I say. “In this paragraph you have said one thing. And in that you have said the opposite. Which is it to be?” They are polite, and wish to please, but they are as baffled by the question as I am by the response. “I know,” they will say, “but that is there and this is here.” The difference between us is that I assume “there” and “here,” “now” and “then,” one paragraph and the next to be connected, to be continuous, to be part of the same coherent world of thought. That is the way of typographic discourse, and typography is the universe I’m “coming from,” as they say. But they are coming from a different universe of discourse altogether: the “Now … this” world of television. The fundamental assumption of that world is not coherence but discontinuity. And in a world of discontinuities, contradiction is useless as a test of truth or merit, because contradiction does not exist. My point is that we are by now so thoroughly adjusted to the “Now … this” world of news—a world of fragments, where events stand alone, stripped of any connection to the past, or to the future, or to other events—that all assumptions of coherence have vanished. And so, perforce, has contradiction. In the context of no context, so to speak, it simply disappears. And in its absence, what possible interest could there be in a list of what the President says now and what he said then? It is merely a rehash of old news, and there is nothing interesting or entertaining in that.

– Postman, ibid.

This is not an unfamiliar situation for most of us. I had a similar experience just this morning, in an attempted discussion with someone who, incredibly enough, stated four (!) different, mutually incompatible things within the space of a couple of short paragraphs, and of course still kept insisting I was wrong. His stated position contained the following propositions:

  1. Beliefs are absurd
  2. Beliefs ought to be held only due to acceptable evidence
  3. You can hold beliefs for whatever reasons you like
  4. Beliefs are by definition anchored in emotion and not acceptable evidence

Each of these statements is literally incompatible with every one of the others.

It’s like a convoluted version of the Liar’s Paradox. If I agree with him, I’m necessarily wrong, and if I disagree, I’m wrong too. But in contrast to the old thought experiment, where the error lies in directly affirming the opposite of a statement, here there’s literally no way to make sense of how I would be wrong whether I agree with or reject his position, since that position is internally incoherent in a complex, not merely binary, sense.
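
To make one of those clashes concrete, here is a rough symbolization of propositions 2 and 3 in standard deontic logic. The propositions are informal, so this reading is an interpretation, not the interlocutor’s own words: let \(H(b)\) mean “belief b is held” and \(E(b)\) mean “b is backed by acceptable evidence.”

    (2)  \mathbf{O}\,\bigl(H(b) \rightarrow E(b)\bigr)
    (3)  \mathbf{P}\,\bigl(H(b) \wedge \lnot E(b)\bigr)

Since \(\mathbf{O}\varphi\) entails \(\lnot\mathbf{P}\lnot\varphi\), (2) entails \(\lnot\mathbf{P}\bigl(H(b) \wedge \lnot E(b)\bigr)\), the direct negation of (3); the remaining pairs can be read as clashing along similar lines.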

Keep reading

Hackers can read private AI assistant chats even though they’re encrypted

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.
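
The excerpt stops short of the mechanism, but the underlying research attributes the leak to packet sizes: these assistants stream replies token by token, each token in its own encrypted packet, and because the encryption preserves plaintext length, every payload size reveals one token’s length. A minimal Python sketch of that first stage follows; the overhead constant is an assumed placeholder that a real attacker would measure per service.

    # Sketch of the attack's first stage: recovering the token-length
    # sequence from captured encrypted packet sizes. ASSUMED_OVERHEAD is a
    # placeholder; real traffic would require measuring the service's
    # actual per-packet framing.
    ASSUMED_OVERHEAD = 48  # bytes of framing/metadata per streamed token

    def token_lengths(payload_sizes: list[int]) -> list[int]:
        """Length-preserving encryption means each payload size maps
        directly to the length of one streamed token."""
        return [size - ASSUMED_OVERHEAD for size in payload_sizes]

    observed = [49, 51, 52, 53]      # captured payload sizes, in bytes
    print(token_lengths(observed))   # -> [1, 3, 4, 5]

The recovered length sequence is ambiguous on its own, which is where the specially trained language models mentioned above come in: they infer the most plausible wording consistent with those lengths.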

Keep reading

Artificial Intelligence In The Classroom Can Only Offer Artificial Educations

Educators are grappling with how to approach ever-evolving generative artificial intelligence — the kind that can create language, images, and audio. Programs like ChatGPT, Gemini, and Copilot pose far different challenges from the AI of yesteryear that corrected spelling or grammar. Generative AI generates whatever content it’s asked to produce, whether it’s a lab report for a biology course, a cover letter for a particular job, or an op-ed for a newspaper.

This groundbreaking development leaves educators and parents asking: Should teachers teach with or against generative AI, and why? 

Technophiles may portray skeptics as Luddites — folks of the same ilk that resisted the emergence of the pen, the calculator, or the word processor — but this technology possesses the power to produce thought and language on someone’s behalf, so it’s drastically different. In the writing classroom specifically, it’s especially problematic, because the production of thought and language is the goal of the course, not to mention among the top goals of any legitimate and comprehensive education. So count me among the educators who want to proceed with caution, and that’s coming from a writing professor who typically embraces educational technology.

Keep reading

Emotion-Tracking AI on the Job: Workers Fear Being Watched – and Misunderstood

We have heard the warnings from Yuval Noah Harari that if we don’t figure out how to regulate artificial intelligence (AI), “human brains will be hacked soon,” a statement that arguably speaks to humanity’s worst fears about AI. This may be especially so when hearing from Schwab’s advisor Harari that to “hack a human being is to get to know that person better than they know themselves,” which can enable those who own the technology to increasingly manipulate us.

We may believe this extreme threat to our privacy will occur some time in the future, but the hacking Harari is describing is more proverbial than literal, and has already been occurring in environments like Facebook and YouTube, where we are led to view content that the algorithms have deemed to be of interest to us. It would now appear that many have gradually become desensitised to this “hacking” and manipulation, allowing it to increase without too much protest.

But how would you feel if your workplace were tracking how you feel? asks Nazanin Andalibi, Assistant Professor of Information at the University of Michigan, who in the article below discusses emotion AI, which is already being used in the workplace.

Keep reading

NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute

The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.

NIST is an agency of the US Department of Commerce whose mission is “to promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” According to the agency’s website, its core competencies are “measurement science,” “rigorous traceability” and “development and use of standards.” NIST also develops cybersecurity standards, guidelines and best practices, and released an AI security framework in January 2023.

Christiano, who is known for his ties to the effective altruism (EA) movement and its offshoot, longtermism (a view that prioritizes the long-term future of humanity, popularized by philosopher William MacAskill), was allegedly rushed through the hiring process without anyone knowing until today, one of the sources said.

The appointment of Christiano, which was said to come directly from Secretary of Commerce Gina Raimondo, has sparked outrage among NIST employees who fear that Christiano’s association with EA and longtermism could compromise the institute’s objectivity and integrity.

However, Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, told VentureBeat that President Biden’s AI Executive Order, introduced in November 2023, specifically asks NIST and the AISI to focus on certain tasks — including CBRN (chemical, biological, radiological and nuclear materials) — for which Paul Christiano is “extremely qualified.”

Many say EA — defined by the Center for Effective Altruism as an “intellectual project using evidence and reason to figure out how to benefit others as much as possible” — has turned into a cult-like group of highly influential and wealthy adherents (made famous by FTX founder and jailbird Sam Bankman-Fried) whose paramount concern revolves around preventing a future AI catastrophe from destroying humanity. Critics of the EA focus on this existential risk, or “x-risk,” say it is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and traditional cybersecurity. 

Keep reading