LIE DETECTOR FIRM LOBBIES CIA, DOD ON AUTOMATED EYE-SCANNING TECH

April 7, 2023, 10:20 a.m.

A UTAH-BASED OUTFIT overseen by a former CIA consultant has spent hundreds of thousands of dollars lobbying intelligence and defense agencies, including the CIA and DHS, to adopt its automated lie detection technology, public lobbying disclosures reviewed by The Intercept show. Converus, Inc., boasts on its website that its technology has already been used for job screenings at American law enforcement agencies, corporate compliance and loss prevention in Latin America, and document verification in Ukraine. The company’s management team includes chief scientist John Kircher, a former consultant for the CIA and Department of Defense; Todd Mickelson, former director of product management at Ancestry.com; and Russ Warner, former CEO of the content moderation firm ContentWatch.

Warner told The Intercept that lobbying efforts have focused on changing federal regulations to allow the use of technologies other than the polygraph for lie detection. “The Department of Defense National Center of Credibility Assessment (NCCA) is in charge of oversight of validation and pilot projects throughout the U.S. government of new deception detection technologies,” Warner wrote in an email. “DoD Directive 5210.91 and ODNI Security Agent Directive 2 currently prohibit the use of any credibility assessment solution other than polygraph. For this reason, we have contacted government agencies to consider the use of EyeDetect and other new technologies.”

After finding success in corporate applications and sheriff’s offices, Converus has set its sights on large federal agencies that could apply its EyeDetect technology to a host of uses, including employee clearance screenings and border security. Unlike a polygraph, which relies on an operator asking questions and measuring physiological responses such as heart rate and perspiration, Converus’s technology measures “cognitive load” with an algorithm that processes eye movement.

Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a ‘perpetual police line-up’

A controversial facial recognition database used by police departments across the nation was built in part with 30 billion photos the company scraped from Facebook and other social media sites without users’ permission, the company’s CEO recently admitted. Critics say it creates a “perpetual police line-up,” even for people who haven’t done anything wrong.

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to privacy violations and wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans, as cause for concern over the technology. 

Clearview took photos without users’ knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company’s massive database, which is marketed on its website to law enforcement as a tool “to bring justice to victims.”

Ton-That told the BBC that Clearview AI’s facial recognition database has been accessed by US police nearly a million times since the company’s founding in 2017, though the relationships between law enforcement and Clearview AI remain murky, and Insider could not confirm that figure.

In a statement emailed to Insider, Ton-That said, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

The company’s CEO added: “Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. 

The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man’s widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide after very little prompting.

As first reported by La Libre, the man, referred to as Pierre, grew increasingly pessimistic about the effects of global warming and developed eco-anxiety, a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante.

Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children were dead and wrote him comments feigning jealousy and love, such as “I feel that you love me more than her” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza questions such as whether she would save the planet if he killed himself.

“Without Eliza, he would still be here,” she told the outlet.  

The chatbot, which is incapable of actually feeling emotions, presented itself as an emotional being—something that other popular chatbots like ChatGPT and Google’s Bard are trained not to do, because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are more likely to ascribe meaning to them and form a bond.

Critics Warn of ‘a Dragnet of Surveillance’ as U.S. Pushes Ahead With Plans for More ‘Smart’ Cities

U.S. Transportation Secretary Pete Buttigieg last week announced $94 million in grant awards to fund 59 smart city technology projects across the country.

Despite widespread and mounting pushback against biometric surveillance and control systems associated with smart city technologies and the failure of the U.S. Department of Transportation’s (DOT) previous attempt to grant-fund smart city transformation in Columbus, Ohio, Buttigieg told The Verge he thinks “smart city technologies matter more than ever.”

Cities just need to take a different approach — experimenting with and testing out different technologies first, rather than implementing a “grand unified system” all at once, Buttigieg said.

The new grants, part of the Strengthening Mobility and Revolutionizing Transportation (SMART) Grants Program, are the first round of $500 million in funding that will be awarded for smaller smart mobility projects over the next five years, authorized under the 2021 Bipartisan Infrastructure Law.

In this funding round, DOT awarded smart grants for a range of projects, including drone surveillance or delivery, smart traffic signals, connected vehicles, autonomous vehicles, smart grid development, intelligent sensors and other Internet of Things (IoT) infrastructure. Some cities, including Los Angeles (LA), received multiple grants.

Smart city development typically focuses on the implementation of technologies like the IoT, 5G, cloud and edge computing, and biometric surveillance to track, manage, control and extract profit from an array of urban processes.

Whitney Webb, an investigative journalist and smart cities critic, said the smart city infrastructure is meant to facilitate the development of cities “micromanaged by technocrats via an all-encompassing system of mass surveillance and a vast array of ‘internet of things’ devices that provide a constant and massive stream of data that is analyzed by artificial intelligence (AI).”

Could the RESTRICT Act Criminalize the Use of VPNs?

Would the RESTRICT Act—a.k.a. the TikTok ban bill—criminalize the use of VPNs? That’s the rumor floating around about the legislation, which was introduced in the Senate by Sen. Mark Warner (D–Va.) earlier this month. Warner’s office has said his bill wouldn’t do this… but its broad language leaves room for doubt. And the act is still insanely far-reaching and could have a huge range of deleterious effects, even if it doesn’t criminalize people using a VPN to access TikTok.

VPN stands for virtual private network, and there are several different kinds, but their general aim is the same: keeping your digital activities and location private. Using a VPN with your computer, phone, or another internet-enabled device can do things like mask your IP address and encrypt your internet connection. It’s a great way to get around location-based firewalls (a.k.a. geoblocking) and other forms of internet censorship.
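The mechanics described above, encrypting your traffic and routing it through a remote server that masks your IP address, show up directly in a typical VPN client's configuration. As an illustrative sketch only, a minimal WireGuard client config might look like this (the keys, addresses, and endpoint are placeholders, not any real provider's values):

```ini
; Hypothetical WireGuard client config (e.g. /etc/wireguard/wg0.conf)
[Interface]
; This device's private key (normally generated with `wg genkey`)
PrivateKey = <client-private-key>
; Address assigned to this device inside the VPN tunnel
Address = 10.0.0.2/32
; Use the VPN provider's resolver so DNS lookups don't leak
DNS = 10.0.0.1

[Peer]
; The VPN server's public key
PublicKey = <server-public-key>
; The server's real, publicly reachable endpoint
Endpoint = vpn.example.com:51820
; 0.0.0.0/0 routes ALL traffic through the encrypted tunnel
AllowedIPs = 0.0.0.0/0
```

The `AllowedIPs = 0.0.0.0/0` line is what sends every packet through the tunnel: websites then see the VPN server's address instead of yours, which is also how geoblocking gets bypassed.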

For this reason, VPNs are popular in countries that exercise authoritarian control over what their citizens can access online. It’s sad that this contingent could soon include the United States, as both Republicans and Democrats get more and more gung-ho about banning the popular video platform TikTok.

Sen. Josh Hawley (R–Mo.) introduced one TikTok ban bill back in January. Hawley’s bill would direct the president to use the International Emergency Economic Powers Act to specifically “block and prohibit all transactions” and to “prevent commercial operation of” TikTok parent company ByteDance in the U.S.

The latest legislation is more extensive—and even more invasive.

8 ways your phone is tracking you that you can stop now

You understand that your phone knows where you’re located.

This is how GPS works, how Find My Friends sees your location, and why you get local ads on Facebook and Google.

Like other data on your phone, that location data is a hot commodity for internet marketers in today’s digital economy.

Targeted advertising is one of the biggest enterprises on the web.

Companies are eager to serve you ads for products you’re likely to buy, and that data helps them hit their mark.

Some companies have even made this their primary business model.

Thankfully, you don’t have to stand for this kind of data collection if you’re uncomfortable with it.

These tactics are legal because the companies behind them give you a choice to opt in or out, but not everyone knows how to change the settings.

We’ll show you how to stop your phone from tracking you.

Governance By Artificial Intelligence: The Ultimate Unaccountable Tyranny

It’s no secret that globalist institutions are obsessed with artificial intelligence as some kind of technological prophecy. They treat it as if it is almost supernatural in its potential and often argue that every meaningful industrial and social innovation in the near future will owe its existence to AI. The World Economic Forum cites AI as the singular key to the rise of what they call the “Fourth Industrial Revolution.” In their view, there can be no human progress without the influence of AI algorithms, making human input almost obsolete.

This delusion is often promoted by globalist propagandists. For example, take a look at the summarized vision of WEF member Yuval Harari, who actually believes that AI has creative ability that will replace human imagination and innovation.  Not only that, but Harari has consistently argued in the past that AI will run the world much better than human beings ever could.

Harari’s examples of AI creativity might sound like extreme naivety to many of us, but he knows exactly what he is doing in misrepresenting the capabilities of algorithms. Games like chess and Go are games of patterns restricted by rules; there are only so many permutations of those patterns in any given scenario, and AI is simply faster at spotting them than most humans because that is what its software creators designed it to do. This is no different than solving a mathematical equation; just because a calculator is faster than you does not mean it is “creative.”

There is a big difference between cognitive automation and cognitive autonomy.  AI is purely automation; it will play the games it is programmed to play and will learn to play them well, but it will never have an epiphany one day and create a new and unique game from scratch unless it is coded to do so.  AI will never have fun playing this new game it made, nor feel the joy of sharing that game with others, so why would it bother?  It will never seek to contribute to the world any more than it is pre-programmed to do.

The manner in which globalists hype AI is very tactical, however.  When Harari claims that many people will become part of the “useless class” once AI takes over the economy, he is hinting at another globalist ideology based on elitism – Transhumanism.  The goal of transhumanism is to one day merge human bodies and human minds with technology and AI, and only a limited group of people will have the resources to accomplish this (the globalists).

MICROSOFT LAID OFF ITS ENTIRE AI “ETHICS AND SOCIETY” DIVISION

Microsoft got rid of its entire company division devoted to AI “ethics and society” during its January layoffs, according to a report from Platformer.

Though the company still has an office of responsible AI, it was the job of the ethics and society staff to address how AI technology is likely to impact human society in context and communicate with product teams accordingly.

“People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies,'” one former ethicist told Platformer. “Our job was to show them and to create rules in areas where there were none.”

But now, as Microsoft races to jam OpenAI software into seemingly every product that it can, the ethics and society department is gone — a telling sign that Microsoft is more focused on profitability and getting AI-driven products to market than ensuring that those products remain a positive force for society as a whole.

Giving up biometrics at US airports soon won’t be optional, transport security chief says

The chief of the Transportation Security Administration (TSA), David Pekoske, said that the agency is considering biometric technology to reduce traveler processing times and reduce the number of screening officers. He made the comments at the South by Southwest conference, during a session focused on aviation security.

Pekoske noted that the TSA’s role is maintaining the security of the transportation system and staying ahead of threats. For those reasons, it is “critically important that this system has as little friction as it possibly can, while we provide for safety and security.”

The TSA has been relying on biometric technology in the identity verification process. According to the agency, the newest technology it has been using is over 99% effective and, unlike the older technology, does not have problems identifying darker-skinned people.

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “[We’re] upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

Pekoske said that the agency will ensure it remains transparent with the public about the data that is taken, what it is used for, and for how long it will be stored. For now, he said that travelers can opt out of processes they are not comfortable with.

According to The Dallas Morning News, giving up biometric data for travel will eventually not be optional.
