Microsoft has been granted a patent that would allow the company to make a chatbot using the personal information of deceased people.
The patent describes creating a bot based on a person's “images, voice data, social media posts, electronic messages” and other personal information.
“The specific person [who the chat bot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity etc”, it goes on to say.
“The specific person may also correspond to oneself (e.g., the user creating/training the chat bot),” the patent adds – implying that living users could train a digital replacement in the event of their death.
WHEN HACKERS EXPLOITED a bug in Parler to download all of the right-wing social media platform’s contents last week, they were surprised to find that many of the pictures and videos contained geolocation metadata revealing exactly how many of the site’s users had taken part in the invasion of the US Capitol building just days before. But the videos uploaded to Parler also contain an equally sensitive bounty of data sitting in plain sight: thousands of images of unmasked faces, many of them belonging to people who participated in the Capitol riot. Now one website has done the work of cataloging and publishing every one of those faces in a single, easy-to-browse lineup.
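The geolocation data described above lives in standard EXIF tags embedded in the media files themselves. As a rough illustration of how easily such coordinates can be read out, here is a minimal sketch using the Pillow imaging library (an assumption on my part; the article does not say what tools the hackers used). The tag IDs come from the EXIF specification.

```python
# Sketch: recovering GPS coordinates from a photo's EXIF metadata.
# Assumes the Pillow library is installed; tag numbers are defined
# by the EXIF standard (GPS sub-IFD pointer is tag 0x8825).

GPS_IFD = 0x8825  # pointer from the main EXIF IFD to the GPS sub-IFD

def dms_to_decimal(dms, ref):
    """Convert (degrees, minutes, seconds) plus a hemisphere reference
    ('N'/'S'/'E'/'W') into signed decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

def extract_location(path):
    """Return (latitude, longitude) from a JPEG's GPS tags, or None
    if the file carries no GPS metadata."""
    from PIL import Image  # deferred import; requires Pillow
    gps = Image.open(path).getexif().get_ifd(GPS_IFD)
    if not gps:
        return None
    lat = dms_to_decimal(gps[2], gps[1])  # GPSLatitude, GPSLatitudeRef
    lon = dms_to_decimal(gps[4], gps[3])  # GPSLongitude, GPSLongitudeRef
    return lat, lon
```

The point is not the code itself but how little of it there is: any photo uploaded with its metadata intact gives up the shooter's position in a few lines.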
Late last week, a website called Faces of the Riot appeared online, showing nothing but a vast grid of more than 6,000 images of faces, each one tagged only with a string of characters associated with the Parler video in which it appeared. The site’s creator tells WIRED that he used simple open source machine learning and facial recognition software to detect, extract, and deduplicate every face from the 827 videos that were posted to Parler from inside and outside the Capitol building on January 6, the day when radicalized Trump supporters stormed the building in a riot that resulted in five people’s deaths. The creator of Faces of the Riot says his goal is to allow anyone to easily sort through the faces pulled from those videos to identify someone they may know or recognize who took part in the mob, or even to reference the collected faces against FBI wanted posters and send a tip to law enforcement if they spot someone.
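The detect-extract-deduplicate pipeline the site's creator describes can be sketched in outline. The article names only "simple open source machine learning and facial recognition software"; the specific libraries below (`opencv-python` for frame sampling, `face_recognition` for detection and embedding) are my assumption of a typical choice, not a confirmed detail. Deduplication here drops any face embedding too close to one already kept, using the conventional 0.6 cutoff for dlib-style 128-dimensional embeddings.

```python
# Sketch of a detect / extract / deduplicate face pipeline over video,
# roughly matching what Faces of the Riot's creator describes.
# Library choices are assumptions; only `deduplicate` is pure Python/NumPy.
import numpy as np

def deduplicate(encodings, threshold=0.6):
    """Keep one representative embedding per apparent person: an
    encoding is kept only if its Euclidean distance to every
    previously kept encoding exceeds the threshold."""
    kept = []
    for enc in encodings:
        if all(np.linalg.norm(enc - k) > threshold for k in kept):
            kept.append(enc)
    return kept

def faces_from_video(path, step=30):
    """Sample every `step`-th frame, detect faces, and return
    deduplicated embeddings. Requires opencv-python and the
    face_recognition package (assumed, not confirmed by the article)."""
    import cv2
    import face_recognition
    encodings, frame_no = [], 0
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            boxes = face_recognition.face_locations(rgb)
            encodings += face_recognition.face_encodings(rgb, boxes)
        frame_no += 1
    cap.release()
    return deduplicate(encodings)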
An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm — transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.
The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize “artificial intelligence and network analysis to map discussion of the president’s claims on social media,” and then attempt to “intervene” by “identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president.”
Social media guru Curtis Hougland is heading up Defeat Disinfo, and he said he received the funding from DARPA when his work was “part of an effort to combat extremism overseas.”
Privacy and security experts have warned for many years about the privacy and cybersecurity risks associated with ALL “Smart” and wireless technology – cell phones, medical devices and implants, personal and “Smart” home devices and wearables, utility “Smart” meters (electric, gas, and water), and everything that uses Internet of Things (IoT) technology. Last month, researchers from Carnegie Mellon University proposed product warning labels that would make it easier for consumers to understand these risks.
Of course, manufacturers may not be in a hurry to use them, because their “Smart” products allow them to collect data on consumers to analyze and sell to third parties. This is referred to as “Surveillance Capitalism.” “Smart” light bulbs can be used for this as well.
It is official: unless airline passengers are willing to wear motorcycle helmets or Daft Punk-style masks, the Feds can use facial and iris recognition to identify nearly everyone.
According to an S&T press release, a pilot program run by DHS proves the agency can use facial and iris recognition to identify airline passengers.
The in-person rally, held at the Maryland Test Facility (MdTF), included 10 days of human testing, during which six face and/or iris acquisition systems and 13 matching algorithms were tested with the help of 582 diverse test volunteers representing 60 countries.
What is DHS’s so-called motivation to ID everyone?
Because artificial intelligence wasn’t already frightening enough, researchers decided to teach computers how to identify a person’s political ideology based upon their facial appearance and expressions.
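Studies of this kind typically reduce each face photo to a numerical embedding and then fit a simple linear classifier on top. The sketch below illustrates that general recipe with a plain-NumPy logistic regression trained on synthetic, made-up "embeddings" – the data, dimensions, and classifier details here are illustrative assumptions, not Kosinski's actual pipeline.

```python
# Illustrative sketch: a linear classifier over face-embedding-style
# vectors, the generic shape of studies that predict traits from faces.
# All data here is synthetic; nothing is derived from the actual study.
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Batch gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad = p - y                        # gradient of loss w.r.t. logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Two synthetic, linearly separable clusters standing in for the
# embeddings of two groups (entirely fabricated for illustration).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (50, 8)), rng.normal(1, 0.3, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_logreg(X, y)
accuracy = (predict(X, w, b) == y).mean()
```

On cleanly separated synthetic clusters like these, the classifier is trivially accurate, which is precisely why such results demand scrutiny: a high number on curated data says little about what the model has actually learned.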
The study was led by Stanford researcher Michal Kosinski, who already caused a stir in 2017 by programming machines that could determine whether you are gay or straight based on your appearance.
Father-and-son scientists George and Hendrik Poinar have helped set the stage in recent decades for a dramatic advance: Resurrecting extinct species.
George led early explorations of the notion before essential technology had been invented. Later, Hendrik and George together pioneered new methods of extracting and sequencing ancient DNA.
To turn those genetic blueprints into living organisms, however, is quite a challenging proposition. One approach is to take a similar existing species and modify individual genes to match its extinct relative. But this of course wouldn’t be an exact match of the original animal.
Another approach could be to use a donor egg from a modern animal but with its DNA replaced by that of the extinct animal.
Here are six of the candidates for de-extinction, and the modern counterparts that could shepherd them into being.
EDITOR’S NOTE: WTF??? Have these people never seen science fiction movies? Read a book? Even a comic book?
The arrival of government-operated autonomous police robots does not look like predictions in science fiction movies. An army of robots with gun arms is not kicking down your door to arrest you. Instead, a robot snitch that looks like a rolling trash can is programmed to decide whether a person looks suspicious—and then call the human police on them. Police robots may not be able to hurt people like armed predator drones used in combat—yet—but as history shows, calling the police on someone can prove equally deadly.
Long before the 1987 movie Robocop, even before Karel Čapek invented the word robot in 1920, police had been trying to find ways to be everywhere at once. Widespread security cameras are one solution—but even a blanket of CCTV cameras couldn’t follow a suspect into every nook of public space. Thus, the vision of a police robot continued as a dream, until now. Whether they look like Boston Dynamics’ robodogs or Knightscope’s rolling pickles, robots are coming to a street, shopping mall, or grocery store near you.