NATO takes the plunge into the world of venture capital

The NATO Innovation Fund, the “world’s first multi-sovereign venture capital fund,” made its first investments earlier this summer in deep tech companies including British aerospace manufacturing company Space Forge and AI companies ARX Robotics and Fractile.

Modeled on the U.S. intelligence community’s venture capital arm In-Q-Tel (IQT), the fund intends to focus on spurring innovation in areas including biotechnology, AI, space tech, and advanced communications.

As NATO Innovation Fund Board Chairs Klaus Hommels and Fiona Murray described the project’s purview in Fortune in July: “By investing in and adopting emerging dual-use technologies, NATO can leverage the private sector’s innovation power and its transatlantic talent pool, while countering our strategic competitors’ influence and ambitions.”

Keep reading

Big Tech-Government Collusion: Biden-Harris Admin, Meta, Google, and Others Launch AI Partnership to Combat “Disinformation” and “Hate Speech”

The current US White House seems to be exploring every possibility that might secure another avenue for what opponents (and quite a few lawmakers) refer to as “collusion” with Big Tech.

A new scheme has just been announced that revolves around the “AI” and “disinformation” buzzwords and includes the US State Department, Meta, Anthropic, Google, IBM, Microsoft, Nvidia, and OpenAI.

Looks like quite an “ensemble cast” – or “usual suspects” – right there.

It’s called the Partnership for Global Inclusivity on AI, and it was announced by Secretary of State Antony Blinken along with a decision to bankroll programs “identifying disinformation using AI” with $3 million.

We obtained a copy of the report for you here.

Keep reading

FBI Seeks Options for “Clearview AI” Type Facial Recognition Searches

The FBI has put out a request for information (RFI) regarding open-source facial recognition capabilities it hopes to acquire in the future. The technology it is seeking would function similarly to what is offered by companies like Clearview AI and PimEyes, both of which have been heavily scrutinized and sued for privacy violations.

Biometric Update reports that:

The agency explains the motivation behind its RFI as relating to major criminal, counterintelligence and counterterrorism investigations: “Investigative leads are often derived from open-source images posted on social media profiles, Internet sites, and the dark web; for example, individuals often use social media to communicate, coordinate, and conduct criminal activity.”

Criminals wanted by the FBI and its partners are not always American residents with driver’s licenses, passports or criminal histories that would make their face biometrics available for searching.

Keep reading

While we’re supplied unreliable “renewable” energy, reliable “green” nuclear energy will be supplied for data centres and AI

AI’s insatiable need for power is driven by the complexity and scale of its computational requirements.

AI models are often trained and deployed in data centres: massive facilities housing thousands of servers. A single large data centre can consume as much energy as roughly 30,000 homes.

AI inference, the process of answering user queries, relies heavily on Graphics Processing Units (“GPUs”). Each inference requires GPU processing power, which uses energy. This demand is expected to increase as more AI models are developed and deployed.

Larger AI models, such as those used in language processing and computer vision, require more computational resources and, consequently, more energy. These models have billions of parameters and rely on massive data sets, further increasing energy demand. And as AI adoption grows, so does the need for more powerful infrastructure to support it.
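As a rough illustration of how a “30,000 homes” comparison like the one above can arise, here is a back-of-envelope sketch in Python. Every figure in it (GPU power draw, fleet size, overhead factor, household consumption) is an illustrative assumption, not a sourced number.

```python
# Back-of-envelope estimate of a large AI data centre's annual energy use.
# Every figure below is an illustrative assumption, not a sourced number.

GPU_POWER_KW = 0.7             # assumed draw of one high-end accelerator, in kW
GPUS_PER_DATA_CENTRE = 30_000  # assumed accelerator fleet of a large facility
PUE = 1.3                      # power usage effectiveness: cooling/overhead multiplier
HOURS_PER_YEAR = 24 * 365
HOME_KWH_PER_YEAR = 8_000      # assumed annual consumption of one household

# Total facility energy per year, then expressed as household-equivalents.
annual_kwh = GPU_POWER_KW * GPUS_PER_DATA_CENTRE * PUE * HOURS_PER_YEAR
homes_equivalent = annual_kwh / HOME_KWH_PER_YEAR

print(f"~{annual_kwh / 1e6:.0f} GWh per year, roughly {homes_equivalent:,.0f} homes")
```

With these assumed inputs the sketch lands near 240 GWh a year, in the same ballpark as the 30,000-home figure; real numbers vary widely with hardware, utilisation and location.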

In an own goal for those eager to implement Agenda 2030 and its Sustainable Development Goals – such as those who eagerly signed the ‘Pact for the Future’, ‘Global Digital Compact’ and ‘Pact for Future Generations’ at the UN Summit of the Future on Sunday – the energy consumption of AI systems contributes to greenhouse gas emissions, so they say, and strains global grids. 

In Bloomberg’s podcast below (audio only) published on Tuesday, host David Gura and Bloomberg reporter Josh Saul discuss just what the insatiable AI data centre power needs mean for local communities, energy prices and efforts to switch to “renewables” to combat climate change.

“The big tech companies have set ambitious impressive clean energy goals … almost to a company [they] have said, ‘By 2030 we’re going to be 100% clean energy for our data centres’. But it’s a difficult thing to do because data centres are on all the time, 100% of the time, so to match that to clean energy is hard,” Saul said.

We guess matching large energy requirements 100% of the time is hard to do with “renewable” energy, i.e. wind and solar, because they are unreliable and intermittent.

However, the solution is simple: label nuclear energy as “green” energy. Gura posed the question: “Do they, the tech companies, believe that kind of traditional green energy, do they think that green energy is going to be enough to make up the difference that they need?”

Saul responded, “Well everybody loves nuclear I mean nuclear has gotten so hot, like Joe Rogan talking about nuclear.”

Gura chipped in, “Bill Gates is talking about it now.”

“Everybody. Yeah, everyone’s very excited about nuclear,” Saul said.

Keep reading

‘Knife-wielding orca’ and alien-looking figures among 300 Nazca Lines discovered in groundbreaking AI study

Scientists have discovered more than 300 never-before-seen Nazca Lines in Peru — including alien-looking humanoid figures, decapitated heads, potential historic ceremonies and a surprisingly well-armed orca.

The staggering new haul was unearthed in just six months with the help of artificial intelligence (AI) and almost doubles the number of known geoglyphs in the region.

The Nazca Lines are a group of large human-carved geoglyphs located in a roughly 170-square-mile (440 square kilometers) area of Peru’s Nazca Desert. The ancient artworks were likely created between 200 B.C. and A.D. 500 by members of the pre-Incan civilization known as the Nazca (or Nasca), who removed the upper layers of the desert’s red-tinged surface pebbles to reveal sections of lighter soil in a wide range of different shapes and sizes.

Researchers had already found around 430 Nazca Lines since the mysterious shapes were rediscovered by airplane passengers in the 1920s. Most of these geoglyphs were identified in the last 20 years thanks to advancements in satellite imagery. However, the rate at which new lines are being discovered has started to slow, and researchers suspect that any remaining shapes are too faint to be easily spotted by the human eye.

Keep reading

Gates proposes using AI to stifle free speech; the UN is aiming to be in control of AI, globally

Bill Gates wants to use artificial intelligence (“AI”) for real-time censorship of vaccine-related “misinformation,” sparking a heated debate about free speech rights, mind control and the rewriting of history by the so-called “elite.”

According to Bill Gates, online misinformation is the No. 1 unsolvable problem facing today’s young people. Gates spends a lot of his time and money pushing global climate change mitigation measures and vaccines but, as CNBC noted, one problem has him stumped – what he refers to as “misinformation.”

Recently, Gates proposed AI for real-time censorship of what he deems “vaccine misinformation,” arguing that while free speech is important, it should have limitations, especially when it incites violence or discourages vaccination.

Gates, a prominent advocate for vaccines through his “philanthropic” organisations, believes that those who advise against vaccines are “inciting violence” and hindering public health. He envisions AI as a tool to establish and enforce speech “boundaries” to control the narrative surrounding vaccines.

Keep reading

LinkedIn is training its AI using your posts and articles without your permission

LinkedIn has been using user-generated content, including posts and articles, to train its generative AI models without obtaining consent from users.

The platform updated its privacy policy and FAQ section to reflect this practice, indicating that data collection for AI training had already begun before the announcement.

Users can opt out of having their data used for AI training by toggling a setting called “Data for Generative AI Improvement” under “Data Privacy” in their settings.

LinkedIn has quietly implemented a new setting that automatically opts users into contributing their personal data, including posts, towards the training of generative AI models. This means that users’ posts are being harvested for AI training without their explicit consent.

LinkedIn’s generative AI models are used for features like writing assistants, and the scraped data will be used to train these models. The company claims to employ privacy-enhancing technologies to anonymise or redact personal data from its AI training sets. However, considering what Larry Ellison, Oracle co-founder, Chairman of the Board and Chief Technology Officer, said last week about data captured by surveillance cameras, we shouldn’t take LinkedIn’s word for it.

During an investor Q&A session at the ‘Oracle Financial Analyst Meeting 2024’ event, Ellison said: “The police … body cameras … our [Oracle’s] body cameras are simply two lenses attached to a vest [and] attached to the smartphone that you’re wearing … the camera is always on, you don’t turn it on and off, you can’t turn it off to go to the bathroom – ‘Oracle, I need two minutes to take a bathroom break,’ then we’ll turn it off. The truth is, we don’t really turn it off. What we do is, we record it so no one can see it [so] that no one can get into that recording without a court order.” (See timestamp 1:08:27 HERE.)

Keep reading

New “thinking” AI chatbot capable of terrorizing humans, stealing cash from “huge numbers” of people

Move out of the way, Nigerian princes, because there is a new cash-stealing scam that is sweeping the globe: artificial intelligence (AI) chatbots that are capable of “reasoning” and “thinking” up endless ways to cheat people out of their money.

OpenAI recently showed off its new o1 ChatGPT model that the company says is much “smarter” than existing AI chatbots. The o1 model has the ability “to spend more time thinking before they respond,” the company revealed.

“They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”

OpenAI’s o1 model is the first major advancement to ChatGPT since the system launched in late 2022. Currently, it is only available to paying ChatGPT subscribers.

According to cybersecurity expert Dr. Andrew Bolster, the o1 ChatGPT AI model is a dream come true for cyber-criminals, who are sure to dream up all kinds of scams that even the savviest internet users will be unable to detect before being bilked out of their hard-earned cash.

“Large Language Models (LLMs) continue to improve over time, and OpenAI’s release of their ‘o1’ model is no exception to this trend,” Dr. Bolster says.

“Where this generation of LLMs excels is in how they go about appearing to ‘reason,’ where intermediate steps are done by the overall conversational system to draw out more creative or ‘clever’ appearing decisions and responses.”

Keep reading

Constellation Energy to restart Three Mile Island nuclear plant, sell the power to Microsoft for AI

Constellation Energy plans to restart the Three Mile Island nuclear plant and will sell the power to Microsoft, demonstrating the immense energy needs of the tech sector as it builds out data centers to support artificial intelligence.

Constellation expects the Unit 1 reactor at Three Mile Island near Middletown, Pennsylvania, to come back online in 2028, subject to approval by the Nuclear Regulatory Commission, the company announced Friday. Constellation also plans to apply to extend the plant’s operations to at least 2054.

Constellation stock jumped about 15% in morning trading. Its shares have more than doubled year to date.

Microsoft will purchase electricity from the plant in a 20-year agreement to match the energy its data centers consume with carbon-free power. Constellation described the agreement with Microsoft as the largest power purchase agreement that the nuclear plant operator has ever signed.

“The decision here is the most powerful symbol of the rebirth of nuclear power as a clean and reliable energy resource,” Constellation CEO Joe Dominguez told investors on a call Friday morning.

Unit 1 ceased operations in 2019 as nuclear power struggled to compete economically with cheap natural gas and renewables. It is separate from the reactor that partially melted down in 1979 in the worst nuclear accident in U.S. history.

Constellation will rename the plant the Crane Clean Energy Center, in honor of Chris Crane, who was CEO of Constellation’s former parent company and died in April.

Constellation will invest $1.6 billion in restarting the plant through 2028, including on nuclear fuel, Chief Financial Officer Dan Eggers told investors during the call.

Keep reading

Gavin Newsom Just Signed an AI-Deepfake Ban That Could Impact Free Speech. This Is the First Lawsuit Against It.

We were waiting for California Governor Gavin Newsom to sign on the dotted line of a controversial new bill that limits speech around AI “deepfake” content, which many believe could impact memes. Now that Newsom has signed the bill, the first of several expected lawsuits against the censorial bills has been filed.

Christopher Kohls, an online content creator known for his political satire, has filed a lawsuit against the state, challenging recent legislation aimed at regulating AI-generated content. The lawsuit seeks to strike down two California laws which Kohls claims infringe upon First Amendment rights.

Read the lawsuit here.

The controversy began when Kohls posted a satirical video parodying Vice President Kamala Harris, which quickly caught the attention of California Governor Gavin Newsom. Following the video’s viral spread, largely due to a share by Elon Musk, Governor Newsom criticized the content, leading to the swift passage of AB 2655, the “Defending Democracy from Deepfake Deception Act of 2024,” and AB 2839, related to “Elections: deceptive media in advertisements.”

Keep reading