Canon Installs Smile Recognition Technology In Chinese Offices; Employees Can Only Enter Rooms If They Smile

Canon has reportedly installed “smile recognition” technology in the offices of its Chinese subsidiary, with employees only permitted to enter rooms or book meetings if they are smiling.

The AI-backed technology was first reported by The Financial Times as part of a story on how Chinese corporations are tracking employees with the help of cutting-edge technology.

As The Verge noted, “Firms are monitoring which programs employees use on their computers to gauge their productivity; using CCTV cameras to measure how long they take on their lunch break; and even tracking their movements outside the office using mobile apps.”

“Workers are not being replaced by algorithms and artificial intelligence. Instead, the management is being sort of augmented by these technologies,” King’s College London academic Nick Srnicek told The Financial Times. “Technologies are increasing the pace for people who work with machines instead of the other way around, just like what happened during the industrial revolution in the 18th century.”

Canon first announced its “Smiley Face” intelligent ecosystem in October 2020.
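
Canon has not published technical details of its system, but smile detection itself is a commodity computer-vision task. As a rough, minimal sketch of the general technique (not Canon’s implementation), the following Python snippet uses OpenCV’s bundled Haar cascades; the webcam index and detection thresholds are assumptions:

# Minimal smile-detection sketch using OpenCV's bundled Haar cascades.
# Illustrative only: Canon's actual system is proprietary.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def is_smiling(frame):
    """Return True if a smile is detected inside any detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]  # look for the smile inside the face only
        if len(smile_cascade.detectMultiScale(face, 1.7, 20)) > 0:
            return True
    return False

cap = cv2.VideoCapture(0)  # default webcam; the index is an assumption
ok, frame = cap.read()
if ok:
    print("door opens" if is_smiling(frame) else "access denied")
cap.release()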

Keep reading

Microsoft President Warns 2024 Will Look Like ‘1984’ if We Don’t Stop AI Police State

As TFTP reported in August of last year, police officers in Michigan were equipped with “Smart Helmets” that allowed them to remotely scan passengers for symptoms of COVID-19. While many accepted this because of the massive fear campaign the mainstream media pushed on society, detecting fever is, thanks to leaps and bounds in artificial intelligence, only a small portion of what the helmets can do; the rest of their functions are reserved solely for the police state.

The Smart Helmet is not, by any means, limited to body-temperature scans, which any laser-guided thermometer can perform. Its AI-driven facial recognition software can inform the officer of outstanding warrants, identify whether an individual is on a terror watch list or a no-fly list, and read license plates to check for outstanding warrants, stolen vehicle reports, criminal histories, and more. Even if you are completely innocent, you are subject to these scans.

The helmets were rolled out under the guise of protecting society from COVID-19, but even after temperature screenings were proven futile in the fight against coronavirus, the technology remains.

Earlier this year, TFTP reported on how Joe Biden picked up where Donald Trump left off in regard to the border wall. While he doesn’t plan on constructing a physical wall, Biden’s plan is far more sinister and will deploy AI technology to create a “smart wall” akin to something out of a dystopian science fiction movie.

The smart wall will not be as obvious and physically offensive as an actual wall, but aerial drones, infrared cameras, motion sensors, radar, facial recognition, and artificial intelligence are far more ominous than steel and bricks. According to The Nation:

These implements have the veneer of scientific impartiality and rarely produce contentious imagery, which makes them both palatable to a broadly apathetic public and insidiously dangerous.

Unlike a border wall, an advanced virtual “border” doesn’t just exist along the demarcation dividing countries. It extends hundreds of miles inland along the “Constitution-free zone” of enhanced Border Patrol authority. It’s in private property and along domestic roadways. It’s at airports, where the government is ready to roll out a facial recognition system with no age limit that includes travelers on domestic flights that never cross a border.

A frontline Customs and Border Protection officer, who asked not to be identified as they were not authorized to speak publicly, told The Nation that they had concerns about the growth of this technology, especially with the agency “expanding its capabilities and training its armed personnel to act as a federal police.” These capabilities were showcased this summer when CBP agents joined other often-unidentified federal forces in cities with Black Lives Matter protests. The deployments included the use of ground and aerial surveillance tech, including drones, as first reported by The Nation.

This sort of mission creep illustrates the folly in complacency over the use of advanced surveillance tech on the grounds that it is for “border enforcement.” It is always easier to add to the list of acceptable data uses than it is to limit them, largely owing to our security paranoia where any risk is unacceptable.

One of the most menacing aspects of this smart wall is that it will extend the police and surveillance state tactics used at airports across the entire country. Imagine you are checking in for a flight, excited to go on vacation, but when you attempt to get your ticket, you are told you cannot fly. Suddenly, you are surrounded by security and hauled off for questioning. You have committed no crime, and you have no recourse to ask why you cannot fly. This happens every day in this country as Homeland Security enforces the unconstitutional No Fly List.

While AI technology has been around for a while, coupled with the encroaching police state tactics being implemented around the planet in the name of COVID-19 safety, the idea of AI tyranny is starting to worry a lot of folks.

Keep reading

Killer drone ‘hunted down a human target’ without being told to

After a United Nations commission to block killer robots was shut down in 2018, a new report from the international body says the Terminator-like drones are now here.

“An autonomous weaponized drone hunted down a human target last year” and attacked them without being specifically ordered to, according to a report from the UN Security Council’s Panel of Experts on Libya, published in March 2021 and covered by New Scientist magazine and the Star.

The March 2020 attack took place in Libya and was perpetrated by a Kargu-2 quadcopter drone produced by Turkish military tech company STM “during a conflict between Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army,” the Star reports, adding: “The Kargu-2 is fitted with an explosive charge and the drone can be directed at a target in a kamikaze attack, detonating on impact.”

The drones were operating in a “highly effective” autonomous mode that required no human controller. The report notes:

“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability” – suggesting the drones attacked on their own.

Keep reading

‘Weapons of the future’: Russia has launched mass production of autonomous high-tech WAR ROBOTS, Defense Minister Shoigu announces

The Russian military will soon be equipped with autonomous war robots capable of acting independently on the battlefield, Defense Minister Sergey Shoigu has said, adding that Moscow has launched mass production of such machines.

“These are not just some experimental prototypes but robots that can really be shown in sci-fi movies since they can fight on their own,” the minister told the Russian Zvezda broadcaster during the ‘New Knowledge’ forum, on Friday. Held in several Russian cities from May 20 to May 22, the forum is a series of educational events featuring top specialists in a variety of fields.

“A major effort” has been made to develop “the weapons of the future,” Shoigu said, referring to war robots equipped with artificial intelligence (AI). The bots, which are said to be capable of independently assessing a combat situation, are part of the new state-of-the-art arsenal that the Russian military is currently focused on.

Keep reading

Pentagon Marches Towards AI Taking The Kill Shot

Dozens of autonomous war machines capable of deadly force conducted a field training exercise south of Seattle last August. The exercise involved no human operators, only robots powered by artificial intelligence seeking out mock enemy combatants.

The exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon, armed the robots with radio transmitters designed to simulate a weapon firing. The drill expanded the Pentagon’s understanding of how automated military systems can work together on the modern battlefield to eliminate enemy combatants.

“The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed,” according to WIRED.

It’s undeniable artificial intelligence will be the face of warfare for years to come. Military planners are moving ahead with incorporating autonomous weapons systems on the modern battlefield.

General John Murray of the US Army Futures Command told an audience at the US Military Academy in April that swarms of robots will likely force the military to decide if a human needs to intervene before a robot engages the enemy.

Keep reading

Google Veterans Team Up With Gov’t to Fill the Sky with AI Drones That Predict Your Behavior

Imagine, in the near future, a swarm of tiny drones patrolling the skies across the country. These drones are not flown by any pilot; they are entirely autonomous, carrying out directives coded into them during manufacturing — surveil, record, follow, and even predict your next move. Sounds like something out of a dystopian sci-fi flick, right? Well, there is no need to imagine this scenario or to watch it in a movie.

It is already here.

Adam Bry and Abraham Bachrach, the CEO and CTO, respectively, of a company called Skydio, have helped usher in this new reality. The duo started together at MIT before moving on to Google, where they worked on Project Wing.

After moving on from building self-flying aircraft at Google, the duo founded Skydio and have been giving their autonomous drones to police departments ever since — for free.

“We‘re solving a lot of the core problems that are needed to make drones trustworthy and able to fly themselves,” Bry told Forbes in an interview this week. “Autonomy—that core capability of giving a drone the skills of an expert pilot built in, in the software and the hardware—that’s really what we’re all about as a company.”

According to Forbes, Skydio “claims to be shipping the most advanced AI-powered drone ever built: a quadcopter that costs as little as $1,000, which can latch on to targets and follow them, dodging all sorts of obstacles and capturing everything on high-quality video. Skydio claims that its software can even predict a target’s next move, be that target a pedestrian or a car.”

Keep reading

Chatbots That Resurrect the Dead: Legal Experts Weigh in on “Disturbing” Technology

It was recently revealed that in 2017 Microsoft patented a chatbot which, if built, would digitally resurrect the dead. Using AI and machine learning, the proposed chatbot would bring our digital persona back to life for our family and friends to talk to. When pressed on the technology, Microsoft representatives admitted that the chatbot was “disturbing”, and that there were currently no plans to put it into production.

Still, it appears that the technical tools and personal data are in place to make digital reincarnations possible. AI chatbots have already passed the “Turing Test”, which means they’ve fooled other humans into thinking they’re human, too. Meanwhile, most people in the modern world now leave behind enough data to teach AI programmes about our conversational idiosyncrasies. Convincing digital doubles may be just around the corner.
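
As a toy illustration of that last point, even a few dozen archived messages are enough for a trivial program to start echoing someone’s phrasing. The sketch below is a word-level Markov chain in plain Python; the sample messages are invented, and a real digital double would use a far more powerful language model:

# Toy word-level Markov chain that parrots a person's writing style.
# A crude stand-in for the large language models a real chatbot would use.
import random
from collections import defaultdict

def train(messages):
    """Map each word to the list of words that followed it in the corpus."""
    chain = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, start, max_words=20):
    """Walk the chain from a start word, sampling successors at random."""
    word, output = start, [start]
    while word in chain and len(output) < max_words:
        word = random.choice(chain[word])
        output.append(word)
    return " ".join(output)

# Invented sample corpus standing in for a person's archived messages.
archive = [
    "honestly I think the weather here is dreadful",
    "honestly I miss our Sunday walks",
    "I think you should call your mother",
]
print(generate(train(archive), "honestly"))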

But there are currently no laws governing digital reincarnation. Your right to data privacy after your death is far from set in stone, and there is currently no way for you to opt out of being digitally resurrected. This legal ambiguity leaves room for private companies to make chatbots out of your data after you’re dead.

Our research has looked at the surprisingly complex legal question of what happens to your data after you die. At present, and in the absence of specific legislation, it’s unclear who might have the ultimate power to reboot your digital persona after your physical body has been put to rest.

Keep reading

Documentary Exposes How Facial Recognition Tech Doesn’t See Dark Faces Accurately

What’s NOT to love about Artificial Intelligence (AI)?

  • Millions of jobs being lost (see 1, 2)
  • Censorship, unwarranted surveillance, and other unethical and dangerous applications (see 1, 2, 3, 4, 5)
  • Inaccuracies that can lead to life-altering consequences

One documentary reveals more unscrupulous details:

CODED BIAS explores the fallout of MIT Media Lab researcher Joy Buolamwini’s discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.

Modern society sits at the intersection of two crucial questions: What does it mean when artificial intelligence increasingly governs our liberties? And what are the consequences for the people AI is biased against? When MIT Media Lab researcher Joy Buolamwini discovers that most facial-recognition software does not accurately identify darker-skinned faces and the faces of women, she delves into an investigation of widespread bias in algorithms. As it turns out, artificial intelligence is not neutral, and women are leading the charge to ensure our civil rights are protected.
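
The methodology behind such an audit is conceptually simple: report accuracy per demographic group rather than one aggregate number, so that a gap cannot hide in the average. Here is a minimal sketch with invented data (not the actual benchmark used in the research):

# Minimal bias audit: disaggregate accuracy by demographic group.
# The records below are invented; the real study used curated benchmarks.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += predicted == actual
    return {group: hits[group] / totals[group] for group in totals}

# Both groups have 100 faces; only the per-group view exposes the gap.
records = (
    [("lighter-skinned", "match", "match")] * 95
    + [("lighter-skinned", "no match", "match")] * 5
    + [("darker-skinned", "match", "match")] * 70
    + [("darker-skinned", "no match", "match")] * 30
)
print(accuracy_by_group(records))
# {'lighter-skinned': 0.95, 'darker-skinned': 0.7}

With these invented numbers, the pooled accuracy of 82.5 percent looks respectable while one group fares far worse than the other.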

Keep reading

Pieces Of Color: When YouTube’s oversensitive filters think CHESS VIDEOS are racist, will language have to adapt to Big Tech?

With all its talk of black-on-white war, YouTube’s “hate speech”-filtering AI can’t tell the difference between chess players and violent racists. Perhaps leaving robots in charge of the English language isn’t such a good idea.

Croatian chess player Antonio Radic, known to his million subscribers as ‘Agadmator,’ runs the world’s most popular chess channel on YouTube. Last summer he found his account suspended due to its “harmful and dangerous” content. Radic, who was in the middle of a show with Grandmaster Hikaru Nakamura at the time, was puzzled. He received no explanation for the ban, which was reversed on appeal, but speculated that YouTube’s censorship algorithm may have heard him say something like “black goes to B6 instead of C6, white will always be better.”

“If that’s the case, I’m sure all [of] my 1,800 videos will be taken down as it’s black against white to the death in every video,” he told the Sun at the time.
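
The failure mode Radic describes is easy to reproduce with a toy context-free keyword filter. The word pairs below are invented for illustration; YouTube’s actual classifier is not public:

# Toy context-free keyword filter illustrating the suspected failure mode.
# The flagged word pairs are invented for illustration.
FLAGGED_PAIRS = {("black", "white"), ("white", "supremacy"), ("black", "attack")}

def looks_hateful(text):
    """Flag text whenever a 'suspicious' word pair co-occurs, ignoring context."""
    words = set(text.lower().replace(",", " ").split())
    return any(a in words and b in words for a, b in FLAGGED_PAIRS)

commentary = "black goes to B6 instead of C6, white will always be better"
print(looks_hateful(commentary))  # True: harmless chess talk gets flagged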

Keep reading