Maybe You Missed It, but the Internet ‘Died’ Five Years Ago

If you search the phrase “i hate texting” on Twitter and scroll down, you will start to notice a pattern. An account with the handle @pixyIuvr and a glowing heart as a profile picture tweets, “i hate texting i just want to hold ur hand,” receiving 16,000 likes. An account with the handle @f41rygf and a pink orb as a profile picture tweets, “i hate texting just come live with me,” receiving nearly 33,000 likes. An account with the handle @itspureluv and a pink orb as a profile picture tweets, “i hate texting i just wanna kiss u,” receiving more than 48,000 likes.

There are slight changes to the verb choice and girlish username and color scheme, but the idea is the same each time: I’m a person with a crush in the age of smartphones, and isn’t that relatable? Yes, it sure is! But some people on Twitter have wondered whether these are really, truly, just people with crushes in the age of smartphones saying something relatable. They’ve pointed at them as possible evidence validating a wild idea called “dead-internet theory.”

Let me explain. Dead-internet theory suggests that the internet has been almost entirely taken over by artificial intelligence. Like lots of other online conspiracy theories, the audience for this one is growing because of discussion led by a mix of true believers, sarcastic trolls, and idly curious lovers of chitchat. One might, for example, point to @_capr1corn, a Twitter account with what looks like a blue orb with a pink spot in the middle as a profile picture. In the spring, the account tweeted “i hate texting come over and cuddle me,” then “i hate texting i just wanna hug you,” then “i hate texting just come live with me,” and then “i hate texting i just wanna kiss u,” which got 1,300 likes but didn’t perform nearly as well as the identical tweet from @itspureluv. But unlike lots of other online conspiracy theories, this one has a morsel of truth to it. Person or bot: Does it really matter?
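Part of what fuels the suspicion is how little machinery the pattern would require. As a purely illustrative sketch (it proves nothing about these particular accounts), every tweet quoted above fits a single fill-in-the-blank template:

```python
# One template plus interchangeable endings reproduces the whole genre.
TEMPLATE = "i hate texting {}"
ENDINGS = [
    "i just want to hold ur hand",
    "just come live with me",
    "i just wanna kiss u",
    "come over and cuddle me",
]

tweets = [TEMPLATE.format(ending) for ending in ENDINGS]
for tweet in tweets:
    print(tweet)
```

A real bot would only need to schedule posts like these from a rotating set of accounts; the formula itself is the cheap part.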

Keep reading

The Future Of Work At Home: Mandatory AI Camera Surveillance

Colombia-based call center workers who provide outsourced customer service to some of the nation’s largest companies are being pressured to sign a contract that lets their employer install cameras in their homes to monitor work performance, an NBC News investigation has found.

Six workers based in Colombia for Teleperformance, one of the world’s largest call center companies, which counts Apple, Amazon and Uber among its clients, said that they are concerned about the new contract, first issued in March. The contract allows monitoring by AI-powered cameras in workers’ homes, voice analytics and storage of data collected from the worker’s family members, including minors. Teleperformance employs more than 380,000 workers globally, including 39,000 workers in Colombia.

“The contract allows constant monitoring of what we are doing, but also our family,” said a Bogotá-based worker on the Apple account who was not authorized to speak to the news media. “I think it’s really bad. We don’t work in an office. I work in my bedroom. I don’t want to have a camera in my bedroom.”

The worker said that she signed the contract, a copy of which NBC News has reviewed, because she feared losing her job. She said that she was told by her supervisor that she would be moved off the Apple account if she refused to sign the document. She said the additional surveillance technology has not yet been installed.

The concerns of the workers, who all spoke on the condition of anonymity because they were not authorized to speak to the media, highlight a pandemic-related trend that has alarmed privacy and labor experts: As many workers have shifted to performing their duties at home, some companies are pushing for increasing levels of digital monitoring of their staff in an effort to recreate the oversight of the office at home.

Keep reading

The Pentagon Is Experimenting With Using Artificial Intelligence To “See Days In Advance”

U.S. Northern Command (NORTHCOM) recently conducted a series of tests known as the Global Information Dominance Experiments, or GIDE, which combined global sensor networks, artificial intelligence (AI) systems, and cloud computing resources in an attempt to “achieve information dominance” and “decision-making superiority.” According to NORTHCOM leadership, the AI and machine learning tools tested in the experiments could someday offer the Pentagon a robust “ability to see days in advance,” meaning it could predict the future with some reliability based on evaluating patterns, anomalies, and trends in massive data sets. While the concept sounds like something out of Minority Report, the commander of NORTHCOM says this capability is already enabled by tools readily available to the Pentagon. 
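NORTHCOM has not said how GIDE’s models actually work, but the underlying idea of flagging anomalies and trends in large data streams can be sketched in a few lines. The function below is purely illustrative (the name, window size, and threshold are my own assumptions, not anything from the experiments): it flags readings that deviate sharply from a trailing average.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=5, threshold=2.0):
    """Flag indices whose value deviates sharply from the trailing window."""
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady sensor feed with one sudden spike at index 8.
readings = [10, 11, 10, 12, 11, 10, 11, 12, 40, 11]
print(flag_anomalies(readings))  # → [8]
```

Scaled up to global sensor networks, the same logic of detecting the deviation early and acting on it is roughly what “seeing days in advance” amounts to.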

General Glen VanHerck, Commander of NORTHCOM and North American Aerospace Defense Command (NORAD), told reporters at the Pentagon this week that this was the third test of GIDE, conducted in conjunction with all 11 combatant commands “collaborating in the same information space using the same exact capabilities.” The experiment largely centered around contested logistics and information advantage, two cornerstones of the new warfighting paradigm recently proposed by the Vice Chairman of the Joint Chiefs of Staff. A full transcript of VanHerck’s press briefing is available online.

Keep reading

Canon Installs Smile Recognition Technology In Chinese Offices; Employees Can Only Enter Rooms If They Smile

Canon has reportedly installed “smile recognition” technology in the offices of its Chinese subsidiary, with employees only permitted to enter rooms or book meetings if they are smiling.

The AI-backed technology was first reported by The Financial Times in a story about how Chinese corporations are tracking employees with the help of cutting-edge technology.

As The Verge noted, “Firms are monitoring which programs employees use on their computers to gauge their productivity; using CCTV cameras to measure how long they take on their lunch break; and even tracking their movements outside the office using mobile apps.”

“Workers are not being replaced by algorithms and artificial intelligence. Instead, the management is being sort of augmented by these technologies,” King’s College London academic Nick Srnicek told The Financial Times. “Technologies are increasing the pace for people who work with machines instead of the other way around, just like what happened during the industrial revolution in the 18th century.”

Canon first announced its “Smiley Face” intelligent ecosystem in October 2020.

Keep reading

Microsoft President Warns 2024 Will Look Like ‘1984’ if We Don’t Stop AI Police State

As TFTP reported in August of last year, police officers in Michigan were equipped with “Smart Helmets” which allowed them to remotely scan passengers for symptoms of COVID-19. While this was widely accepted by many because of the massive fear campaign pushed on society by the mainstream media, thanks to leaps and bounds in artificial intelligence, detecting fever was only a small portion of what the helmets can do, and the rest of their functions are solely reserved for the police state.

The Smart Helmet is not limited to body-temperature scans, which any laser-guided thermometer can perform. It also comes with AI-driven facial recognition software that can provide the police officer with information on outstanding warrants, flag whether an individual appears on a terror watch list or a no-fly list, and read license plates to check for outstanding warrants, stolen-vehicle reports, criminal histories, and more. Even if you are completely innocent, you are subject to these scans.
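The article does not describe the helmet’s internals, but watchlist-style face matching generally reduces to comparing numeric “embeddings” of faces against stored ones. The sketch below is hypothetical end to end: the names, four-dimensional vectors, and threshold are invented for illustration (real systems use embeddings with hundreds of dimensions).

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norms

def match_watchlist(face_embedding, watchlist, threshold=0.95):
    """Return watchlist names whose stored embedding closely matches the face."""
    return [name for name, embedding in watchlist.items()
            if cosine_similarity(face_embedding, embedding) >= threshold]

watchlist = {
    "subject_a": [0.9, 0.1, 0.3, 0.2],
    "subject_b": [0.1, 0.8, 0.2, 0.9],
}
captured = [0.88, 0.12, 0.31, 0.19]
print(match_watchlist(captured, watchlist))  # → ['subject_a']
```

The privacy concern follows directly from the mechanics: once a camera produces an embedding, checking it against any database, warrants, watch lists, or otherwise, is a near-instant lookup.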

The helmets were rolled out under the guise of protecting society from COVID-19 but even after temperature screenings were proven futile in the fight against coronavirus, the technology remains.

Earlier this year, TFTP reported on how Joe Biden picked up where Donald Trump left off in regard to the border wall. While he doesn’t plan on constructing a physical wall, Biden’s plan is far more sinister and will deploy AI technology to create a “smart wall” akin to something out of a dystopian science fiction movie.

The smart wall will not be as obvious and physically offensive as an actual wall, but aerial drones, infrared cameras, motion sensors, radar, facial recognition, and artificial intelligence are far more ominous than steel and bricks. According to The Nation:

These implements have the veneer of scientific impartiality and rarely produce contentious imagery, which makes them both palatable to a broadly apathetic public and insidiously dangerous.

Unlike a border wall, an advanced virtual “border” doesn’t just exist along the demarcation dividing countries. It extends hundreds of miles inland along the “Constitution-free zone” of enhanced Border Patrol authority. It’s in private property and along domestic roadways. It’s at airports, where the government is ready to roll out a facial recognition system with no age limit that includes travelers on domestic flights that never cross a border.

A frontline Customs and Border Protection officer, who asked not to be identified as they were not authorized to speak publicly, told The Nation that they had concerns about the growth of this technology, especially with the agency “expanding its capabilities and training its armed personnel to act as a federal police.” These capabilities were showcased this summer when CBP agents joined other often-unidentified federal forces in cities with Black Lives Matter protests. The deployments included the use of ground and aerial surveillance tech, including drones, as first reported by The Nation.

This sort of mission creep illustrates the folly in complacency over the use of advanced surveillance tech on the grounds that it is for “border enforcement.” It is always easier to add to the list of acceptable data uses than it is to limit them, largely owing to our security paranoia where any risk is unacceptable.

One of the most menacing aspects of this smart wall is that it will extend the police and surveillance state tactics used at airports across the entire country. Imagine you are checking in for a flight at an airport, excited to go on vacation, but when you attempt to get your ticket, you are told you cannot fly. Suddenly, you are surrounded by security and hauled off for questioning. You have committed no crime, and you have no recourse to ask why you cannot fly. This happens every day in this country as Homeland Security enforces the unconstitutional No Fly List.

While AI technology has been around for a while, when coupled with the encroaching police state tactics being implemented around the planet in the name of COVID-19 safety, the idea of AI tyranny is starting to get lots of folks worried.

Keep reading

Killer drone ‘hunted down a human target’ without being told to

After a United Nations commission to block killer robots was shut down in 2018, a new report from the international body says the Terminator-like drones are now here.

Last year, “an autonomous weaponized drone hunted down a human target” and attacked them without being specifically ordered to, according to a report from the UN Security Council’s Panel of Experts on Libya, published in March 2021 and covered by New Scientist magazine and the Star.

The March 2020 attack was in Libya and perpetrated by a Kargu-2 quadcopter drone produced by Turkish military tech company STM “during a conflict between Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army,” the Star reports, adding: “The Kargu-2 is fitted with an explosive charge and the drone can be directed at a target in a kamikaze attack, detonating on impact.”

The drones were operating in a “highly effective” autonomous mode that required no human controller, and the report notes:

“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability” – suggesting the drones attacked on their own.

Keep reading

‘Weapons of the future’: Russia has launched mass production of autonomous high-tech WAR ROBOTS, Defense Minister Shoigu announces

The Russian military will soon be equipped with autonomous war robots capable of acting independently on the battlefield, Defense Minister Sergey Shoigu has said, adding that Moscow has launched mass production of such machines.

“These are not just some experimental prototypes but robots that can really be shown in sci-fi movies since they can fight on their own,” the minister told the Russian Zvezda broadcaster during the ‘New Knowledge’ forum, on Friday. Held in several Russian cities from May 20 to May 22, the forum is a series of educational events featuring top specialists in a variety of fields.

“A major effort” has been made to develop “the weapons of the future,” Shoigu said, referring to war robots equipped with artificial intelligence (AI). The bots, which are said to be capable of independently assessing a combat situation, are part of the new state-of-the-art arsenal that the Russian military is currently focused on.

Keep reading

Pentagon Marches Towards AI Taking The Kill Shot

Dozens of autonomous war machines capable of deadly force conducted a field training exercise south of Seattle last August. The exercise involved no human operators, only robots powered by artificial intelligence seeking out mock enemy combatants.

The exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon, armed the robots with radio transmitters designed to simulate a weapon firing. The drill expanded the Pentagon’s understanding of how automation in military systems on the modern battlefield can work together to eliminate enemy combatants.

“The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed,” according to WIRED.

It’s undeniable that artificial intelligence will be the face of warfare for years to come. Military planners are moving ahead with incorporating autonomous weapons systems on the modern battlefield.

General John Murray of the US Army Futures Command told an audience at the US Military Academy in April that swarms of robots will likely force the military to decide if a human needs to intervene before a robot engages the enemy.

Keep reading

Google Veterans Team Up With Gov’t to Fill the Sky with AI Drones That Predict Your Behavior

Imagine in the near future, a swarm of tiny drones patrolling the skies across the country. These drones are not flown by any pilot and are entirely autonomous, carrying out directives coded into them during manufacturing: surveil, record, follow, and even predict your next move. Sounds like something out of a dystopian Sci-Fi flick, right? Well, there is no need to imagine this scenario or to watch it in a movie.

It is already here.

Adam Bry and Abraham Bachrach, the CEO and CTO, respectively, of a company called Skydio, have helped usher in this new reality. The duo started together at MIT before moving on to Google, where they worked on Project Wing.

After moving on from building self-flying aircraft at Google, the duo founded Skydio and have been giving their autonomous drones to police departments ever since, free of charge.

“We’re solving a lot of the core problems that are needed to make drones trustworthy and able to fly themselves,” Bry told Forbes in an interview this week. “Autonomy—that core capability of giving a drone the skills of an expert pilot built in, in the software and the hardware—that’s really what we’re all about as a company.”

According to Forbes, Skydio “claims to be shipping the most advanced AI-powered drone ever built: a quadcopter that costs as little as $1,000, which can latch on to targets and follow them, dodging all sorts of obstacles and capturing everything on high-quality video. Skydio claims that its software can even predict a target’s next move, be that target a pedestrian or a car.”
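Skydio has not disclosed how its prediction works; the simplest version of the idea, extrapolating a target’s recent motion forward, looks like the sketch below. Everything in it (the function, the constant-velocity assumption, the sample track) is illustrative, not Skydio’s actual method.

```python
def predict_next_position(track, steps=1):
    """Extrapolate the next (x, y) position from the last two observations,
    assuming the target keeps its current velocity."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    vx, vy = x2 - x1, y2 - y1
    return (x2 + vx * steps, y2 + vy * steps)

# A pedestrian walking steadily, as (x, y) positions per video frame.
observed = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(predict_next_position(observed))  # → (3.0, 1.5)
```

Production trackers replace this with filters and learned motion models that handle turns and occlusion, but the goal is the same: be where the target will be, not where it was.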

Keep reading

Chatbots That Resurrect the Dead: Legal Experts Weigh in on “Disturbing” Technology

It was recently revealed that in 2017 Microsoft patented a chatbot which, if built, would digitally resurrect the dead. Using AI and machine learning, the proposed chatbot would bring our digital persona back to life for our family and friends to talk to. When pressed on the technology, Microsoft representatives admitted that the chatbot was “disturbing”, and that there were currently no plans to put it into production.

Still, it appears that the technical tools and personal data are in place to make digital reincarnations possible. AI chatbots have already passed the “Turing Test”, which means they’ve fooled other humans into thinking they’re human, too. Meanwhile, most people in the modern world now leave behind enough data to teach AI programmes about our conversational idiosyncrasies. Convincing digital doubles may be just around the corner.
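How software might learn those conversational idiosyncrasies can be shown with a deliberately crude toy: a Markov chain that records which word a person tends to use after each other word. The message history below is invented, and modern chatbots use far more capable neural models; the sketch only illustrates the principle of learning from someone’s own messages.

```python
import random
from collections import defaultdict

def build_model(messages):
    """Map each word to the list of words that followed it in the messages."""
    model = defaultdict(list)
    for message in messages:
        words = message.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, max_words=8, seed=0):
    """Walk the chain from a starting word, picking observed followers at random."""
    random.seed(seed)
    words = [start]
    while len(words) < max_words and words[-1] in model:
        words.append(random.choice(model[words[-1]]))
    return " ".join(words)

# Stand-in for the chat logs a person leaves behind.
history = [
    "i hate texting i just want coffee",
    "i just want to sleep in",
]
model = build_model(history)
print(generate(model, "i"))
```

The legal question in the article follows from exactly this: the training data is the deceased’s messages, and nothing currently stops a company from building such a model without their prior consent.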

But there are currently no laws governing digital reincarnation. Your right to data privacy after your death is far from set in stone, and there is currently no way for you to opt out of being digitally resurrected. This legal ambiguity leaves room for private companies to make chatbots out of your data after you’re dead.

Our research has looked at the surprisingly complex legal question of what happens to your data after you die. At present, and in the absence of specific legislation, it’s unclear who might have the ultimate power to reboot your digital persona after your physical body has been put to rest.

Keep reading