Big Tech-Government Collusion: Biden-Harris Admin, Meta, Google, and Others Launch AI Partnership to Combat “Disinformation” and “Hate Speech”

The current US White House seems to be exploring every possibility that might secure another avenue for what opponents (and quite a few lawmakers) refer to as “collusion” with (Big) Tech.

A new scheme has just been announced that revolves around the “AI” and “disinformation” buzzwords and includes the US State Department, Meta, Anthropic, Google, IBM, Microsoft, Nvidia, and OpenAI.

Looks like quite an “ensemble cast” – or “usual suspects” – right there.

It’s called the Partnership for Global Inclusivity on AI, and it was announced by Secretary of State Antony Blinken along with a decision to bankroll programs “identifying disinformation using AI” with $3 million.

We obtained a copy of the report for you here.

Keep reading

IBM opens its quantum-computing stack to third parties

As we described earlier this year, operating a quantum computer will require a significant investment in classical computing resources, given the number of measurements and control operations that need to be executed and interpreted. That means that operating a quantum computer will also require a software stack to control and interpret the flow of information from the quantum side.

But software also gets involved well before anything gets executed. While it’s possible to execute algorithms on quantum hardware by defining the full set of commands sent to the hardware, most users are going to want to focus on algorithm development, rather than the details of controlling any single piece of quantum hardware. “If everyone’s got to get down and know what the noise is, [use] performance management tools, they’ve got to know how to compile a quantum circuit through hardware, you’ve got to become an expert in too much to be able to do the algorithm discovery,” said IBM’s Jay Gambetta. So, part of the software stack that companies are developing to control their quantum hardware includes software that converts abstract representations of quantum algorithms into the series of commands needed to execute them.

IBM’s version of this software is called Qiskit (although it was made open source and has since been adopted by other companies). Recently, IBM made a couple of announcements regarding Qiskit, both benchmarking it in comparison to other software stacks and opening it up to third-party modules. We’ll take a look at what software stacks do before getting into the details of what’s new.
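To make concrete what this conversion layer does, here is a toy sketch in plain Python. The abstract circuit, the invented native gate set and the decompositions below are illustrative assumptions for the sake of the example, not IBM’s actual Qiskit internals:

```python
# Toy sketch of a "transpiler" layer: it lowers an abstract, hardware-agnostic
# gate list into a hypothetical device's native gate set.
# Gate names and decompositions are simplified for illustration only.

# Abstract algorithm: a Bell-state circuit, written without reference
# to any particular hardware.
abstract_circuit = [
    ("h", [0]),          # Hadamard on qubit 0
    ("cx", [0, 1]),      # controlled-NOT, control 0, target 1
    ("measure", [0, 1]),
]

# Suppose the device only supports rz, sx (sqrt-X), cz and measure.
def lower(gate, qubits):
    """Decompose one abstract gate into native operations."""
    if gate == "h":
        # H = Rz(pi/2) . sqrt(X) . Rz(pi/2), up to a global phase
        q = qubits[0]
        return [("rz", [q], 1.5707963), ("sx", [q], None), ("rz", [q], 1.5707963)]
    if gate == "cx":
        # CX = (I x H) CZ (I x H): sandwich a CZ between Hadamards on the target
        c, t = qubits
        return lower("h", [t]) + [("cz", [c, t], None)] + lower("h", [t])
    if gate == "measure":
        return [("measure", [q], None) for q in qubits]
    raise ValueError(f"unsupported gate: {gate}")

def transpile(circuit):
    """Lower every abstract gate to the device's native operations."""
    native = []
    for gate, qubits in circuit:
        native.extend(lower(gate, qubits))
    return native

native_program = transpile(abstract_circuit)
for op in native_program:
    print(op)
```

Real transpilers such as Qiskit’s also route qubits onto the device’s connectivity map, cancel redundant gates and account for per-qubit noise; the sketch shows only the basic lowering step that the paragraph above describes.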

Keep reading

FBI Seeks Options for “Clearview AI” Type Facial Recognition Searches

The FBI has put out a request for information (RFI) regarding open-source facial recognition capabilities that it hopes to acquire in the future. The technology it is seeking would function similarly to what is offered by companies like Clearview AI and PimEyes, both of which have been heavily scrutinized and sued for privacy violations.

Biometric Update reports that:

The agency explains the motivation behind its RFI as relating to major criminal, counterintelligence and counterterrorism investigations: “Investigative leads are often derived from open-source images posted on social media profiles, Internet sites, and the dark web; for example, individuals often use social media to communicate, coordinate, and conduct criminal activity.”

Criminals wanted by the FBI and its partners are not always American residents with driver’s licenses, passports or criminal histories that would make their face biometrics available for searching.

Keep reading

Isle of Man explores facial recognition at entry and exit points and mandatory ID for all residents

The government plans to introduce FRT at air and sea ports, utilising automated number plate recognition (“ANPR”) to secure borders and detect criminals before they enter the island. In addition, the government is considering a mandatory ID registration system for all island residents, not just migrants.

The move comes in response to the growing threat of organised crime including drug trafficking, weapons, modern slavery, immigration abuses and other criminal activities being facilitated through the island’s ports. The ‘Securing Our Island’ strategy aims to address these issues by leveraging modern technologies and strengthening border security.

In a video interview, Jane Poole-Wilson, Member of the House of Keys (“MHK”) and Minister for Justice and Home Affairs, said that the facial recognition will be automated to quickly check people against a known watchlist of criminals.

Keep reading

While we’re supplied unreliable “renewable” energy, reliable “green” nuclear energy will be supplied for data centres and AI

AI’s insatiable need for power is driven by the complexity and scale of its computational requirements.

AI models are often trained and deployed in data centres – massive facilities housing thousands of servers. These servers consume substantial amounts of energy; a single large data centre can draw as much power as 30,000 homes.

AI inference, the process of answering user queries, relies heavily on Graphics Processing Units (“GPUs”). Each inference requires GPU processing power, which uses energy. This demand is expected to increase as more AI models are developed and deployed.

Larger AI models, such as those used in language processing and computer vision, require more computational resources and, consequently, more energy. These models have billions of parameters and rely on massive data sets, further straining energy demands. And as AI adoption grows, so does the need for more powerful infrastructure to support it.
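To put the “30,000 homes” comparison in rough perspective, here is a back-of-envelope calculation. Every figure in it (average household usage, GPU wattage) is an illustrative assumption for scale, not data from the article:

```python
# Back-of-envelope estimate: what "as much energy as 30,000 homes" means
# in continuous power draw. All numbers are illustrative assumptions.

avg_home_kwh_per_year = 10_500   # assumed average household consumption (kWh/yr)
homes = 30_000

# Total annual consumption of 30,000 homes, in kWh
total_kwh = avg_home_kwh_per_year * homes

# Convert to a continuous power draw in megawatts
hours_per_year = 24 * 365
continuous_mw = total_kwh / hours_per_year / 1000   # roughly 36 MW

# Assume a modern AI training GPU draws ~0.7 kW under load
gpu_kw = 0.7
equivalent_gpus = continuous_mw * 1000 / gpu_kw     # tens of thousands of GPUs

print(f"{continuous_mw:.1f} MW continuous, roughly {equivalent_gpus:,.0f} GPUs at full load")
```

On these assumed figures, 30,000 homes works out to a continuous draw in the mid-tens of megawatts – enough to keep tens of thousands of GPUs running around the clock, which is why the grid impact discussed below is significant.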

In an own goal for those eager to implement Agenda 2030 and its Sustainable Development Goals – such as those who eagerly signed the ‘Pact for the Future’, ‘Global Digital Compact’ and ‘Declaration on Future Generations’ at the UN Summit of the Future on Sunday – the energy consumption of AI systems contributes, so they say, to greenhouse gas emissions and strains global grids.

In Bloomberg’s podcast below (audio only) published on Tuesday, host David Gura and Bloomberg reporter Josh Saul discuss just what the insatiable AI data centre power needs mean for local communities, energy prices and efforts to switch to “renewables” to combat climate change.

“The big tech companies have set ambitious impressive clean energy goals … almost to a company [they] have said, ‘By 2030 we’re going to be 100% clean energy for our data centres’. But it’s a difficult thing to do because data centres are on all the time, 100% of the time, so to match that to clean energy is hard,” Saul said.

We guess matching large energy requirements 100% of the time is hard to do with “renewable” energy, i.e. wind and solar, because it is unreliable and intermittent.

However, the solution is simple: label nuclear energy as “green” energy. Gura posed the question: “Do they, the tech companies, believe that kind of traditional green energy, do they think that green energy is going to be enough to make up the difference that they need?”

Saul responded: “Well, everybody loves nuclear. I mean, nuclear has gotten so hot – like Joe Rogan talking about nuclear.”

Gura chipped in, “Bill Gates is talking about it now.”

“Everybody. Yeah, everyone’s very excited about nuclear,” Saul said.

Keep reading

‘Knife-wielding orca’ and alien-looking figures among 300 Nazca Lines discovered in groundbreaking AI study

Scientists have discovered more than 300 never-before-seen Nazca Lines in Peru — including alien-looking humanoid figures, decapitated heads, potential historic ceremonies and a surprisingly well-armed orca.

The staggering new haul was unearthed in just six months with the help of artificial intelligence (AI) and almost doubles the number of known geoglyphs in the region.

The Nazca Lines are a group of large human-carved geoglyphs located in a roughly 170-square-mile (440 square kilometers) area of Peru’s Nazca Desert. The ancient artworks were likely created between 200 B.C. and A.D. 500 by members of a pre-Incan civilization known as the Nazca (or Nasca), who removed the upper layers of the desert’s red-tinged surface pebbles to reveal sections of lighter soil in a wide range of different shapes and sizes.

Researchers had already found around 430 Nazca Lines since the mysterious shapes were rediscovered by airplane passengers in the 1920s. Most of these geoglyphs were identified in the last 20 years thanks to advancements in satellite imagery. However, the rate at which new lines are being discovered has started to slow, and researchers suspect that any remaining shapes are too faint to be easily spotted by the human eye.

Keep reading

Hardy, Radiation-Resistant Organism Could Rewrite Your Genetic Code to Cure High Cholesterol

Scientists are programming a gene-editing protein from one of the world’s hardiest, most radiation-resistant organisms to rewrite a specific gene, allowing them to cure a common type of inherited high cholesterol. Dubbed TnpB, the protein originates from the bacterium Deinococcus radiodurans, an exceptionally robust microbe that also survives cold, dehydration, vacuum, and acid – hardiness that makes its machinery an attractive tool for genetic editing.

Although the team has only tested its “genetic scissors” on mice models with an inherited predisposition to a type of high cholesterol called hypercholesterolemia, which currently affects 31 million Americans, the researchers believe their approach will one day allow them to cure high cholesterol in humans by essentially rewiring their genetic code.

Reprogramming TnpB to Cure High Cholesterol

In the published study outlining the new genetic reprogramming approach, the researchers note that gene editing has shown significant promise in treating certain inherited health conditions by essentially “reprogramming” specialized bacterial proteins to replace the faulty gene in a person’s genetic code with a properly functioning one. However, the process, made famous by the CRISPR gene-editing tool, has met with mixed success.

One of the primary limiting factors of the CRISPR-Cas protein most commonly used in genetic editing is its size. According to the study authors, the protein is too large, which “creates challenges when trying to deliver them to the right cells in the body.”

More recently, researchers in genetic editing have begun to focus on the protein’s “evolutionary progenitors,” some of which are much smaller than the commonly used CRISPR-Cas proteins. Among the most promising is TnpB, whose smaller size and hardiness offer scientists a new path for genetic editing.

These smaller progenitors are less efficient at gene editing than the larger CRISPR-Cas proteins, and their restrictive recognition requirements when binding DNA limit the sites they can target. Now, the researchers behind this study say they may have finally overcome that limitation, resulting in a much more efficient way of targeting TnpB to cure high cholesterol.

Keep reading

Gates proposes using AI to stifle free speech; the UN is aiming to be in control of AI, globally

Bill Gates wants to use artificial intelligence (“AI”) for real-time censorship of vaccine-related “misinformation,” sparking a heated debate about free speech rights, mind control and the rewriting of history by the so-called “elite.”

According to Bill Gates, online misinformation is the No. 1 unsolvable problem facing today’s young people. Gates spends a lot of his time and money pushing global climate change mitigation measures and vaccines but, as CNBC noted, one problem has him stumped – what he refers to as “misinformation.”

Recently, Gates proposed AI for real-time censorship of what he deems “vaccine misinformation,” arguing that while free speech is important, it should have limitations, especially when it incites violence or discourages vaccination.

Gates, a prominent advocate for vaccines through his “philanthropic” organisations, believes that those who advise against vaccines are “inciting violence” and hindering public health. He envisions AI as a tool to establish and enforce speech “boundaries” to control the narrative surrounding vaccines.

Keep reading

LinkedIn is training its AI using your posts and articles without your permission

LinkedIn has been using user-generated content, including posts and articles, to train its generative AI models without obtaining consent from users.

The platform updated its privacy policy and FAQ section to reflect this practice, indicating that data collection for AI training had already begun before the announcement.

Users can opt out of having their data used for AI training by toggling a setting called “Data for Generative AI Improvement” under “Data Privacy” in their settings.

LinkedIn has quietly implemented a new setting that automatically opts users into contributing their personal data, including posts, towards the training of generative AI models. This means that users’ posts are being harvested for AI training without their explicit consent.

LinkedIn’s generative AI models are used for features like writing assistants, and the scraped data will be used to train these models. The company claims to employ privacy-enhancing technologies to anonymise or redact personal data from its AI training sets. However, considering what Larry Ellison, co-founder, Chairman and Chief Technology Officer of Oracle, said last week about data captured by surveillance cameras, we shouldn’t take LinkedIn’s word for it.

During an investor Q&A session at the ‘Oracle Financial Analyst Meeting 2024’ event, Ellison said: “The police … body cameras … our [Oracle’s] body cameras are simply two lenses attached to a vest [and] attached to the smartphone that you’re wearing … the camera is always on, you don’t turn it on and off, you can’t turn it off to go to the bathroom – ‘Oracle, I need two minutes to take a bathroom break,’ then we’ll turn it off. The truth is, we don’t really turn it off. What we do is, we record it so no one can see it [so] that no one can get into that recording without a court order.” (see timestamp 1:08:27 HERE.)

Keep reading

New “thinking” AI chatbot capable of terrorizing humans, stealing cash from “huge numbers” of people

Move out of the way, Nigerian princes, because there is a new cash-stealing scam that is sweeping the globe: artificial intelligence (AI) chatbots that are capable of “reasoning” and “thinking” up endless ways to cheat people out of their money.

OpenAI recently showed off its new o1 ChatGPT model that the company says is much “smarter” than existing AI chatbots. The o1 model has the ability “to spend more time thinking before they respond,” the company revealed.

“They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”

OpenAI’s o1 model is the first major advancement to ChatGPT since the system first launched in late 2022. Currently, it is only available to paying ChatGPT subscribers.

According to cybersecurity expert Dr. Andrew Bolster, the o1 ChatGPT model is a dream come true for cyber-criminals, who are sure to dream up all kinds of scams that even the savviest internet users will be unable to detect before they are bilked out of their hard-earned cash.

“Large Language Models (LLMs) continue to improve over time, and OpenAI’s release of their ‘o1’ model is no exception to this trend,” Dr. Bolster says.

“Where this generation of LLMs excels is in how they go about appearing to ‘reason’, where intermediate steps are done by the overall conversational system to draw out more creative or ‘clever’-appearing decisions and responses.”

Keep reading