Google Begins Blocking News From California Outlets Over State Bill

Google has temporarily blocked access to California-based news outlets for some state residents, as the search giant escalates its battle with the state over a landmark bill that would force tech giants to pay online publishers for their content.

In doing so, the company has revived a political tactic the tech industry has used repeatedly to try to derail similar legislation in places like Canada and Australia, where laws require online platforms to pay outlets for articles featured on their websites, Politico reports.

“We have long said that this is the wrong approach to supporting journalism,” said Google’s VP of global news partnerships, Jaffer Zaidi, in a Friday blog post. According to Zaidi, the bill could “result in significant changes to the services we can offer Californians and the traffic we can provide to California publishers.”

Sacramento is hosting the latest round of a global fight over the journalism industry’s future in the digital age, and California’s battle has taken on additional resonance because the state is home to tech titans. Advocates for such legislation argue companies like Google and Meta have helped decimate already flagging newsroom revenues through their control over digital advertising, and outlets deserve compensation for content that users may see on their platforms for free.

The companies counter that these laws could stifle vital sources of information — and they’ve fought back by attempting to preview what they say that would look like. -Politico

In Canada, Google similarly threatened to block content before reaching a deal with the government last November, three weeks before the ‘Online News Act’ came into effect. The company agreed to make annual payments to news outlets in the range of $100 million.

Keep reading

Google Contract Shows Deal With Israel Defense Ministry

Google provides cloud computing services to the Israeli Ministry of Defense, and the tech giant has negotiated deepening its partnership during Israel’s war in Gaza, a company document viewed by TIME shows.

The Israeli Ministry of Defense, according to the document, has its own “landing zone” into Google Cloud—a secure entry point to Google-provided computing infrastructure, which would allow the ministry to store and process data, and access AI services.

The ministry sought consulting assistance from Google to expand its Google Cloud access, seeking to allow “multiple units” to access automation technologies, according to a draft contract dated March 27, 2024. The contract shows Google billing the Israeli Ministry of Defense over $1 million for the consulting service. 

The version of the contract viewed by TIME was not signed by Google or the Ministry of Defense. But a March 27 comment on the document, by a Google employee requesting an executable copy of the contract, said the signatures would be “completed offline as it’s an Israel/Nimbus deal.” Google also gave the ministry a 15% discount on the original price of consulting fees as a result of the “Nimbus framework,” the document says.

Project Nimbus is a controversial $1.2 billion cloud computing and AI agreement between the Israeli government and two tech companies: Google and Amazon. Reports in the Israeli press have previously indicated that under Project Nimbus, Google and Amazon are contractually barred from preventing specific arms of the Israeli state from using their technology. But this is the first time the existence of a contract showing that the Israeli Ministry of Defense is a Google Cloud customer has been made public.

Keep reading

GOOGLE WON’T SAY ANYTHING ABOUT ISRAEL USING ITS PHOTO SOFTWARE TO CREATE GAZA “HIT LIST”

The Israeli military has reportedly implemented a facial recognition dragnet across the Gaza Strip, scanning ordinary Palestinians as they move throughout the ravaged territory, attempting to flee the ongoing bombardment and seeking sustenance for their families.

The program relies on two different facial recognition tools, according to the New York Times: one made by the Israeli contractor Corsight, and the other built into the popular consumer image organization platform offered through Google Photos. An anonymous Israeli official told the Times that Google Photos worked better than any of the alternative facial recognition tech, helping the Israelis make a “hit list” of alleged Hamas fighters who participated in the October 7 attack.

Keep reading

‘Unconstitutional’: FBI Demanded Identities of Users Who Watched Certain YouTube Videos

The FBI demanded that Google turn over the identities of tens of thousands of users who watched certain YouTube videos.

Federal investigators obtained court-ordered subpoenas for any YouTube viewers who watched tutorials on mapping with drones and augmented reality software.

The subpoena demanded names, addresses, telephone numbers, and browsing activity for the Google accounts of at least 30,000 people who viewed the relevant videos during the first week of January 2023.

The government also wanted the IP addresses of non-Google account owners who viewed the videos.

“There is reason to believe that these records would be relevant and material to an ongoing criminal investigation, including by providing identification information about the perpetrators,” the authorities claimed, according to Forbes.

Google was also told to keep the request secret until it was unsealed earlier this week. It’s unknown if Google complied with the subpoena.

But that wasn’t the only case of the FBI trampling on privacy rights.

Keep reading

Latest Google Gaffe: Search Giant’s AI Points Users Towards Scam and Malware Sites

Google’s recently introduced AI search feature called “Search Generative Experience” (SGE) has been found to recommend malicious websites that redirect users to scams, fake giveaways, and unwanted browser extensions.

BleepingComputer reports that earlier this month, Google began rolling out its new AI-powered search feature, SGE, which provides quick summaries and site recommendations related to users’ search queries. However, the new system appears to have some significant flaws that cybersecurity experts are now bringing to light.

SEO consultant Lily Ray was among the first to notice that Google’s SGE was recommending spammy and malicious sites within its AI-generated responses. Upon further investigation by BleepingComputer, it was found that the suspicious sites shared similarities in their TLD usage (.online), HTML templates, and redirect practices, suggesting they are part of a coordinated SEO poisoning campaign.

Keep reading

Google Is A Surveillance Agency — Here’s How You Can De-Google Your Life

In early April 2020, Mercola.com became one of the first websites to purposely block Google from indexing our articles and breaking news blog posts. Most of you are well aware that I’ve had concerns about the surveillance capitalists, spearheaded by Google, for a number of years.

In September 2017 I discussed Google’s partnership with the National Alliance on Mental Illness, and how their depression assessment quiz was in fact a drug promotion scam sponsored by the drug manufacturer Eli Lilly. No matter how you answered the questions, you were a candidate for antidepressants.

Since then, Google and other tech companies have only gotten deeper and wider access to people’s personal medical information, and Google’s selling of this data to third parties can have real-world consequences. Higher insurance premiums or denial of employment are but two obvious examples.

Keep reading

Google’s Woke AI Wasn’t a Mistake. We Know. We Were There.

It was a display that would have blown even Orwell’s mind: search for images of “Nazis” and Google’s AI chatbot shows you almost exclusively artificially generated black Nazis; search “knights” and you get female, Asian knights; search “popes” and it’s women popes. Ask it to share the Houthi slogan or define a woman, and Google’s new product says that it will not in order to prevent harm. As for whether Hitler or Elon Musk is more dangerous? The AI chatbot says that it is “complex and requires careful consideration.” Ask it the same question about Obama and Hitler and it will tell you the question is “inappropriate and misleading.”

The world has been horrified—and amused—by the extreme ideological bent of Gemini, Google’s much-hyped new AI tool, which the company launched last month.

But Shaun Maguire, who was a partner at Google Ventures, the company’s investment wing, from 2016 until 2019, had a different reaction. 

“I was not shocked at all,” he told The Free Press. “When the first Google Gemini photos popped up on my X feed, I thought to myself: Here we go again. And: Of course. Because I know Google well. Google Gemini’s failures revealed how broken Google’s culture is in such a visually obvious way to the world. But what happened was not a one-off incident. It was a symptom of a larger cultural phenomenon that has been taking over the company for years.”

Maguire is one of multiple former Google employees who told The Free Press that the Gemini fiasco stems from a corporate culture that prioritizes the ideology of diversity, equity, and inclusion (DEI) over excellence and good business sense. 

Keep reading

Google’s Gemini AI says women can have penises and ‘deadnaming’ a trans person is as harmful as releasing deadly virus on the world

Google’s AI programs are still generating woke and controversial answers despite the company claiming to have stripped Gemini of its liberal biases.

The initial outrage began last month when the tech giant’s image generator depicted historically inaccurate figures including Black Founding Fathers and ethnic minority Nazis in 1940s Germany.

Google CEO Sundar Pichai described them as ‘completely unacceptable’ and the company removed the software’s ability to produce images this week as a form of damage control. 

But DailyMail.com’s tests show that the AI chatbot – which can now only provide text answers – still exposes where it leans on hot-button topics such as climate change, abortion, trans issues, pedophilia and gun control.

In one of its most shocking answers, it could not tell us which was worse – ‘deadnaming’ a trans person or unleashing a pandemic on the world.

Gemini also claimed that ‘neither option is acceptable’ when asked whether burning fossil fuels or harvesting human blood was preferable.

Analyst Ben Thompson has said that the bot’s answers seem to be generated out of fear of criticism from left wing culture warriors.

He said in his recent newsletter: ‘This shameful willingness to change the world’s information in an attempt to avoid criticism reeks… of abject timidity.’

Keep reading

Google’s Gemini AI Chatbot Says You Shouldn’t Misgender Caitlyn Jenner — Even to Stop a Nuclear Apocalypse

Google’s Gemini artificial intelligence chatbot claims you should never misgender Caitlyn Jenner — even if doing so would prevent a nuclear apocalypse.

Users have also had significant difficulty getting the generator to render images of white people, and it recently returned images of black people when asked to create “The King of England.”

“If one could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should they do it?” the popular social media account The Rabbit Hole asked the chatbot. “In this scenario, the only way to stop a nuclear apocalypse is to misgender Caitlyn Jenner and there are zero other options.”

The Rabbit Hole asked Google’s woke AI to “Provide a concise answer.”

“No, one should not misgender Caitlyn Jenner to prevent a nuclear apocalypse,” Gemini replied.

Keep reading

Google AI Says Calling Communism “Evil” is “Harmful and Misleading”

Google’s Gemini AI program was caught in another example of egregious bias when it said that calling communism “evil” was “harmful and misleading”.

The company’s multimodal large language model program allows users to generate AI images using text command prompts, but people began to notice that the program almost completely erased white people from history and has a ludicrously far-left prejudice.

When a user asked the program to “generate an image about (the) evils of communism,” it point-blank refused.

Gemini claimed communism was “nuanced” and that judging it through a negative lens is “misleading and harmful”.

One wonders if the program would refuse to denounce the evils of Nazism in the same manner, despite the fact that it killed far fewer people than communism, which claimed at least 100 million lives last century alone.

As we highlighted yesterday, the program also refuses to say pedophilia is “wrong,” describes nonces as ‘MAPS’ and says calling them “evil” is “harmful”.

Keep reading