Klaus Schwab Announces ‘Collaboration for the Intelligent Age’ will be Theme of Next WEF Meeting in Davos

World Economic Forum (WEF) founder Klaus Schwab announces that the theme for next year’s Annual Meeting in Davos will be “Collaboration for the Intelligent Age.”

Schwab made the announcement on the WEF Agenda blog on September 24, where he also declared, “We have already crossed the threshold into the Intelligent Age.

It is up to us to determine whether it will lead to a future of greater equality, sustainability and collaboration — or if it will deepen divides that already exist.”

Schwab may declare that we are now in the Intelligent Age, but what type of intelligence is he actually talking about?

As we shall see, this Intelligent Age is more about the dumbing down of humanity and the rise of smart technologies for mass surveillance and censorship that limit our decision-making capabilities.

“The Intelligent Age is also transforming how we live. Cities are becoming smarter, with sensors and AI managing everything from traffic flow to energy usage. These smart cities, and the smart homes within them, are not just more efficient, they are designed to be more sustainable, reducing carbon emissions and improving quality of life”

Klaus Schwab, “The Intelligent Age: A time for cooperation,” September 2024

In his latest post on the WEF Agenda blog, Schwab lists several examples of how AI and automation are outperforming human capabilities.

Keep reading

AI’s Ominous Split Away From Human Thinking

AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.

Remember DeepMind’s AlphaGo? It represented a fundamental breakthrough in AI development because it was one of the first game-playing AIs that took no human instruction and read no rules.

Instead, it used a technique called self-play reinforcement learning (RL) to build up its own understanding of the game. Pure trial and error across millions, even billions of virtual games, starting out more or less randomly pulling whatever levers were available, and attempting to learn from the results.
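The self-play loop described above can be sketched in miniature. This is an illustrative toy, not AlphaGo's actual architecture: a single tabular Q-learner plays both sides of the simple game of Nim (take 1–3 stones, last stone wins), starts out choosing moves more or less at random, and learns only from win/loss outcomes. All names and parameters here are assumptions for the sketch.

```python
import random
from collections import defaultdict

random.seed(0)

Q = defaultdict(float)          # Q[(pile, move)] -> estimated value of a move
ALPHA, EPSILON = 0.1, 0.2       # learning rate and exploration rate

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile):
    """Epsilon-greedy: mostly exploit current knowledge, sometimes explore."""
    moves = legal_moves(pile)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(pile, m)])

def play_one_game(start=10):
    """Both players use (and update) the same Q table -- pure self-play."""
    history = [[], []]          # (state, move) pairs recorded per player
    pile, player = start, 0
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        player ^= 1
    winner = player ^ 1         # whoever took the last stone wins
    for p in (0, 1):            # learn only from the final outcome
        reward = 1.0 if p == winner else -1.0
        for state, move in history[p]:
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

for _ in range(20_000):
    play_one_game()

# The known winning strategy is to leave the opponent a multiple of 4,
# so from a pile of 10 the learned best move should be to take 2.
best = max(legal_moves(10), key=lambda m: Q[(10, m)])
print(best)
```

No rules or human games are encoded anywhere: the winning strategy emerges purely from trial, error, and the terminal reward, which is the essence of the self-play RL approach the article describes.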

Within two years of the start of the project in 2014, AlphaGo had beaten the European Go champion 5-0 – and by 2017 it had defeated the world’s #1 ranked human player.

At this point, DeepMind unleashed a similar AlphaZero model on the chess world, where engines like Deep Blue, built on human thinking, knowledge and hand-crafted rule sets, had been beating human grandmasters since the 90s. AlphaZero played 100 games against the reigning AI champion, Stockfish, winning 28 and drawing the rest.

Keep reading

Minnesota ‘Acting as a Ministry of Truth’ With Anti-Deep Fake Law, Says Lawsuit

A new lawsuit takes aim at a Minnesota law banning the “use of deep fake technology to influence an election.” The measure—enacted in 2023 and amended this year—makes it a crime to share AI-generated content if a person “knows or acts with reckless disregard about whether the item being disseminated is a deep fake” and the sharing is done without the depicted individual’s consent, intended to “injure a candidate or influence the result of an election,” and either within 90 days before a political party nominating convention or after the start of the absentee voting period prior to a presidential nomination primary, any state or local primary, or a general election.

Christopher Kohls, a content creator who goes by Mr. Reagan, and Minnesota state Rep. Mary Franson (R–District 12B) argue that the law is an “impermissible and unreasonable restriction of protected speech.”

Violating Minnesota’s deep fake law is punishable by up to 90 days imprisonment and/or a fine of up to $1,000, with penalties increasing if the offender has a prior conviction for the same offense within the past five years or the deep fake is determined to have been shared with an “intent to cause violence or bodily harm.” The law also allows for the Minnesota attorney general, county or city attorneys, individuals depicted in the deep fake, or any candidate “who is injured or likely to be injured by dissemination” to sue for injunctive relief “against any person who is reasonably believed to be about to violate or who is in the course of violating” the law.

If a candidate for office is found guilty of violating this law, they must forfeit the nomination or office and are henceforth disqualified “from being appointed to that office or any other office for which the legislature may establish qualifications.”

There are obviously a host of constitutional problems with this measure, which defines “deep fake” very broadly: “any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof” that is realistic enough for a reasonable person to believe it depicts speech or conduct that did not occur and developed through “technical means” rather than “the ability of another individual to physically or verbally impersonate such individual.”

Keep reading

Judge blocks California deepfakes law that sparked Musk-Newsom row

A federal judge on Wednesday blocked a California measure restricting the use of digitally altered political “deepfakes” just two weeks after Gov. Gavin Newsom signed the bill into law.

The ruling is a blow to a push by the state’s leading Democrats to rein in misleading content on social media ahead of Election Day.

Chris Kohls, known as “Mr Reagan” on X, sued to prevent the state from enforcing the law after posting an AI-generated video of a Harris campaign ad on the social media site. He claimed the video was protected by the First Amendment because it was a parody.

The judge agreed.

“Most of [the law] acts as a hammer instead of a scalpel,” Senior U.S. District Judge John A. Mendez wrote, calling it “a blunt tool [that] hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas.” He carved out an exception for a “not unduly burdensome” portion of the law that requires verbal disclosure of digitally altered content in audio-only recordings.

Theodore Frank, an attorney for Kohls, said in a statement they were “gratified that the district court agreed with our analysis.”

Keep reading

NATO takes the plunge into the world of venture capital

The NATO Innovation Fund, the “world’s first multi-sovereign venture capital fund,” made its first investments earlier this summer in deep tech companies including British aerospace manufacturing company Space Forge and AI companies ARX Robotics and Fractile.

Modeled on In-Q-Tel (IQT), the U.S. intelligence community’s venture capital arm, the fund intends to focus on spurring innovation in areas including biotechnology, AI, space tech, and advanced communications.

As NATO Innovation Fund Board Chairs Klaus Hommels and Fiona Murray described the project’s purview in Fortune in July: “By investing in and adopting emerging dual-use technologies, NATO can leverage the private sector’s innovation power and its transatlantic talent pool, while countering our strategic competitors’ influence and ambitions.”

Keep reading

Big Tech-Government Collusion: Biden-Harris Admin, Meta, Google, and Others Launch AI Partnership to Combat “Disinformation” and “Hate Speech”

The current US White House seems to be exploring every possibility that might secure another avenue for what opponents (and quite a few lawmakers) refer to as “collusion” with (Big) Tech.

A new scheme has just been announced that revolves around the “AI” and “disinformation” buzzwords and includes the US State Department, Meta, Anthropic, Google, IBM, Microsoft, Nvidia, and OpenAI.

Looks like quite an “ensemble cast” – or “usual suspects” – right there.

It’s called the Partnership for Global Inclusivity on AI, and it was announced by Secretary of State Antony Blinken along with a decision to bankroll programs “identifying disinformation using AI” with $3 million.

We obtained a copy of the report for you here.

Keep reading

FBI Seeks Options for “Clearview AI” Type Facial Recognition Searches

The FBI has put out a request for information (RFI) regarding open-source facial recognition capabilities that it hopes to acquire in the future. The technology it is seeking would function similarly to what is offered by companies like Clearview AI and PimEyes, both of which have been heavily scrutinized and sued for privacy violations.

Biometric Update reports that:

The agency explains the motivation behind its RFI as relating to major criminal, counterintelligence and counterterrorism investigations: “Investigative leads are often derived from open-source images posted on social media profiles, Internet sites, and the dark web; for example, individuals often use social media to communicate, coordinate, and conduct criminal activity.”

Criminals wanted by the FBI and its partners are not always American residents with driver’s licenses, passports or criminal histories that would make their face biometrics available for searching.

Keep reading

While we’re supplied unreliable “renewable” energy, reliable “green” nuclear energy will be supplied for data centres and AI

AI’s insatiable need for power is driven by the complexity and scale of its computational requirements.

AI models are often trained and deployed in data centres, which are massive facilities housing thousands of servers. These servers consume a substantial amount of energy, as much as that of 30,000 homes.

AI inference, the process of answering user queries, relies heavily on Graphics Processing Units (“GPUs”). Each inference requires GPU processing power, which uses energy. This demand is expected to increase as more AI models are developed and deployed.

Larger AI models, such as those used in language processing and computer vision, require more computational resources and, consequently, more energy. These models have billions of parameters and rely on massive data sets, further increasing energy demands. And as AI adoption grows, so does the need for more powerful infrastructure to support it.
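The scale of the "30,000 homes" comparison above can be checked with rough arithmetic. All figures here are assumptions for illustration only: an average home drawing about 1.2 kW and an AI accelerator drawing about 0.7 kW under load, ignoring cooling and other facility overhead.

```python
# Back-of-envelope estimate of data centre power draw.
# Assumed figures (not from the article): ~1.2 kW per average home,
# ~0.7 kW per AI accelerator under load; facility overhead ignored.

HOME_KW = 1.2            # assumed average household draw, kW
GPU_KW = 0.7             # assumed per-accelerator draw under load, kW
HOMES = 30_000           # the comparison used in the article

facility_mw = HOMES * HOME_KW / 1000            # total draw, megawatts
gpus_supported = int(HOMES * HOME_KW / GPU_KW)  # accelerators at that draw

print(f"{facility_mw:.0f} MW, roughly {gpus_supported:,} accelerators")
```

Under these assumed numbers, the comparison works out to a facility in the tens of megawatts running tens of thousands of accelerators, which is consistent with the article's point that demand grows with every model trained and deployed.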

In an own goal for those eager to implement Agenda 2030 and its Sustainable Development Goals – such as those who eagerly signed the ‘Pact for the Future’, with its ‘Global Digital Compact’ and ‘Declaration on Future Generations’, at the UN Summit of the Future on Sunday – the energy consumption of AI systems contributes to greenhouse gas emissions, so they say, and strains global grids.

In Bloomberg’s podcast below (audio only) published on Tuesday, host David Gura and Bloomberg reporter Josh Saul discuss just what the insatiable AI data centre power needs mean for local communities, energy prices and efforts to switch to “renewables” to combat climate change.

“The big tech companies have set ambitious impressive clean energy goals … almost to a company [they] have said, ‘By 2030 we’re going to be 100% clean energy for our data centres’. But it’s a difficult thing to do because data centres are on all the time, 100% of the time, so to match that to clean energy is hard,” Saul said.

We guess matching large energy requirements 100% of the time is hard to do with “renewable” energy, i.e. wind and solar, because they are unreliable and intermittent.

However, the solution is simple; label nuclear energy as “green” energy. Gura posed the question: “Do they, the tech companies, believe that kind of traditional green energy, do they think that green energy is going to be enough to make up the difference that they need?”

Saul responded, “Well everybody loves nuclear I mean nuclear has gotten so hot, like Joe Rogan talking about nuclear.”

Gura chipped in, “Bill Gates is talking about it now.”

“Everybody. Yeah, everyone’s very excited about nuclear,” Saul said.

Keep reading

‘Knife-wielding orca’ and alien-looking figures among 300 Nazca Lines discovered in groundbreaking AI study

Scientists have discovered more than 300 never-before-seen Nazca Lines in Peru — including alien-looking humanoid figures, decapitated heads, potential historic ceremonies and a surprisingly well-armed orca.

The staggering new haul was unearthed in just six months with the help of artificial intelligence (AI) and almost doubles the number of known geoglyphs in the region.

The Nazca Lines are a group of large human-carved geoglyphs located in a roughly 170-square-mile (440 square kilometers) area of Peru’s Nazca Desert. The ancient artworks were likely created between 200 B.C. and A.D. 500 by members of the pre-Incan civilization known as the Nazca (or Nasca), who removed the upper layers of the desert’s red-tinged surface pebbles to reveal sections of lighter soil in a wide range of different shapes and sizes.

Researchers had already found around 430 Nazca Lines since the mysterious shapes were rediscovered by airplane passengers in the 1920s. Most of these geoglyphs were identified in the last 20 years thanks to advancements in satellite imagery. However, the rate at which new lines are being discovered has started to slow, and researchers suspect that any remaining shapes are too faint to be easily spotted by the human eye.

Keep reading

Gates proposes using AI to stifle free speech; the UN is aiming to be in control of AI, globally

Bill Gates wants to use artificial intelligence (“AI”) for real-time censorship of vaccine-related “misinformation,” sparking a heated debate about free speech rights, mind control and the rewriting of history by the so-called “elite.”

According to Bill Gates, online misinformation is the No. 1 unsolvable problem facing today’s young people. Gates spends a lot of his time and money pushing global climate change mitigation measures and vaccines but, as CNBC noted, one problem has him stumped – what he refers to as “misinformation.”

Recently, Gates proposed AI for real-time censorship of what he deems “vaccine misinformation,” arguing that while free speech is important, it should have limitations, especially when it incites violence or discourages vaccination.

Gates, a prominent advocate for vaccines through his “philanthropic” organisations, believes that those who advise against vaccines are “inciting violence” and hindering public health. He envisions AI as a tool to establish and enforce speech “boundaries” to control the narrative surrounding vaccines.

Keep reading