John F Kennedy murder mystery could be solved thanks to new AI technology

AI could finally help solve the mystery of a “second shooter” in the assassination of US President John F Kennedy almost 60 years ago.

Experts believe artificial intelligence, combined with new advances in digital image processing, will prove or disprove beyond all doubt whether another gunman took aim at JFK in Dallas on November 22, 1963.

The fresh evidence is contained in a little-known home movie shot by local maintenance man Orville Nix, whose descendants have launched a legal bid to get it back from the US government.

His clip – unlike the famous one shot by Abraham Zapruder that has been seen by millions – was filmed from the centre of Dealey Plaza as Kennedy was hit in the head while he and wife Jackie waved at crowds from the back of his open-top limousine.

As a result, it provides the only known unobstructed view of the “grassy knoll” where conspiracy theorists have long claimed another sniper – or snipers – were concealed.

Nix’s film was last examined in 1978 by photo experts hired by the US House Select Committee on Assassinations.

Based in part on that analysis, the panel sensationally concluded JFK “was probably assassinated as the result of a conspiracy” and that “two gunmen” likely fired on him.

But the technology of the time left experts in doubt about whether the home movie actually proves this – and the original film subsequently “vanished from view”, with only imperfect copies remaining in the hands of government officials.

Keep reading

Turn Off, Don’t Automate, the Killing Machine

The quest to develop and refine technologically advanced means to commit mass homicide continues, with Pentagon tacticians ever eager to make the military leaner and more lethal. Drone swarms already exist, and as insect-facsimile drones are marketed and produced, we can expect bug drone swarms to appear soon in the skies above places where suspected “bad guys” are said to reside—along with their families and neighbors. Following the usual trajectory, it is only a matter of time before surveillance bug drones are “upgraded” for combat, making it easier than ever for whoever wishes to do so, whether military personnel, factional terrorists, or apolitical criminals, to kill human beings. The development of increasingly lethal and “creative” means to commit homicide forges ahead not because anyone needs it but because it is generously funded by the U.S. Congress under the assumption that anything labeled a tool of “national defense” is, by definition, good.

To some, the argument from necessity for drones may seem to have merit, given the ongoing military recruitment crisis. There are many good reasons why people no longer wish to enlist in the military, but rather than review the missteps taken and counterproductive measures implemented in the name of defense throughout the twenty-first century, administrators ignore the most obvious answer to the question of why young people are less enthusiastic than ever about signing their lives away. Why did the Global War on Terror spread from Afghanistan and Iraq to engulf other countries as well? Critics have offered persuasive answers to this question, above all, that killing, torturing, maiming, and terrorizing innocent people led to an outpouring of sympathy for groups willing to resist the invaders of their lands. As a direct consequence of U.S. military intervention, Al Qaeda franchises such as ISIS emerged, proliferated, and spread. Yet the military plows ahead undeterred in its professed mission to eliminate “the bad guys,” with the killers either oblivious or somehow unaware that they are the primary creators of “the bad guys.”

Meanwhile, the logic of automation has been openly and enthusiastically embraced as the way of the future for the military, as in so many other realms. Who needs soldiers anyway, given that they can and will be replaced by machines? Just as grocery stores today often have more self-checkout stations than human cashiers, the military has been replacing combat pilots with drone operators for years. Taking human beings altogether out of the killing loop is the inevitable next step, because war architects focus on lethality, as though it were the only measure of military success. Removing “the human factor” from warfare will increase lethality and may decrease, if not eliminate, problems such as PTSD. But at what price?

Never a very self-reflective lot, war architects are even less inclined than ever to consider whether their interventions have done more harm than good, precisely because of the glaring case of Afghanistan. After twenty years of attempting to eradicate the Taliban, the U.S. military finally retreated in 2021, leaving the Islamic Emirate of Afghanistan (as they now refer to themselves) in power, just as they were in 2001. By focusing on how slick and “neat” the latest and greatest implements of techno-homicide are, those who craft U.S. military policy can divert attention from their abject incompetence at actually winning a war or protecting, rather than annihilating, innocent people.

Keep reading

Japan: AI Systems Can Use Any Data, from Any Source – Even Illegal Ones

While other countries are mulling where to put the brakes on AI development, Japan is going full steam ahead, with the government recently announcing that no data will be off-limits for AI.

In a recent meeting, Keiko Nagaoka, Japanese Minister of Education, Culture, Sports, Science, and Technology, confirmed that no law, including copyright law, will prevent AIs from accessing data in the country.

AIs will be allowed to use data for training, “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise,” said Nagaoka.

The decision is a blow to copyright holders who argue that AI using their intellectual property to produce new content undermines the very concept of copyright. The issue has already emerged in the west — an AI-generated song using the voice of Drake and The Weeknd went viral on streaming services in April, before being swiftly removed.

In the west, much of the discourse around AI is focused on potential harms. AI leaders recently warned governments that development of the technology carries with it a “risk of extinction,” while news companies worry about deepfakes and “misinformation.”

The Biden Administration’s leftist regulators at the FTC, meanwhile, worry that “historically biased” data (such as crime data with racial imbalances) will lead to outcomes that conflict with “civil rights.” Many leftist agitators in the west want to cut off AIs from such data.

Keep reading

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, provides a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators which content requires action.
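
As described, the service returns per-category severity scores that tell moderators which flagged content needs action. The short Python sketch below shows one way a client application might act on such scores; the category labels, score scale and thresholds are illustrative assumptions, not Microsoft’s actual API or defaults.

```python
# Illustrative sketch only: shows how a moderation workflow might act on the
# per-category severity scores described in the article. Category names,
# score ranges, and thresholds are assumptions, not Microsoft's actual API.
from dataclasses import dataclass

@dataclass
class Flag:
    category: str   # e.g. "hate", "violence" (hypothetical labels)
    severity: int   # higher means more severe on the service's scale

def route_for_review(flags: list[Flag],
                     auto_remove_at: int = 6,
                     human_review_at: int = 3) -> str:
    """Decide what a moderation queue should do with flagged content."""
    worst = max((f.severity for f in flags), default=0)
    if worst >= auto_remove_at:
        return "remove"          # clear-cut violations are pulled automatically
    if worst >= human_review_at:
        return "human_review"    # borderline content goes to a moderator
    return "allow"               # low-severity flags are left up

if __name__ == "__main__":
    sample = [Flag("hate", 2), Flag("violence", 5)]
    print(route_for_review(sample))  # -> "human_review"
```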

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and Copilot, GitHub’s AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Keep reading

‘Godfather of AI’ quits Google — and says he regrets life’s work due to risks to humanity

A prominent artificial intelligence researcher known as the “Godfather of AI” has quit his job at Google – and says he now partly regrets his work advancing the burgeoning technology because of the risks it poses to society.

Dr. Geoffrey Hinton is a renowned computer scientist who is widely credited with laying the AI groundwork that eventually led to the creation of popular chatbots such as OpenAI’s ChatGPT and other advanced systems.

The 75-year-old told the New York Times that he left Google so that he could speak openly about the risks of unrestrained AI development – including the spread of misinformation, upheaval in the jobs market and other, more nefarious possibilities.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said in an interview published on Monday.

“Look at how it was five years ago and how it is now,” Hinton added later in the interview. “Take the difference and propagate it forwards. That’s scary.”

Hinton fears that AI will only become more dangerous in the future — with “bad actors” potentially exploiting advanced systems “for bad things” that will be difficult to prevent.

Hinton informed Google of his plans to resign last month and personally spoke last Thursday with company CEO Sundar Pichai, according to the report. The computer scientist did not reveal what he and Pichai discussed during the phone call.

Keep reading

LIE DETECTOR FIRM LOBBIES CIA, DOD ON AUTOMATED EYE-SCANNING TECH

A Utah-based outfit overseen by a former CIA consultant has spent hundreds of thousands of dollars lobbying intelligence and defense agencies, including the CIA and DHS, to adopt its automated lie detection technology, public lobbying disclosures reviewed by The Intercept show. Converus, Inc., boasts on its website that its technology has already been used for job screenings at American law enforcement agencies, corporate compliance and loss prevention in Latin America, and document verification in Ukraine. The company’s management team includes chief scientist John Kircher, a former consultant for the CIA and Department of Defense; Todd Mickelson, former director of product management at Ancestry.com; and Russ Warner, former CEO of the content moderation firm ContentWatch.

Warner told The Intercept that lobbying efforts have focused on changing federal regulations to allow the use of technologies other than the polygraph for lie detection. “The Department of Defense National Center of Credibility Assessment (NCCA) is in charge of oversight of validation and pilot projects throughout the U.S. government of new deception detection technologies,” Warner wrote in an email. “DoD Directive 5210.91 and ODNI Security Agent Directive 2 currently prohibit the use of any credibility assessment solution other than polygraph. For this reason, we have contacted government agencies to consider the use of EyeDetect and other new technologies.”

After finding success in corporate applications and sheriff’s offices, Converus has set its sights on large federal agencies that could apply its EyeDetect technology to a host of uses, including employee clearance screenings and border security. Unlike a polygraph, a device which relies on an operator asking questions and measuring physiological responses like heart rate and perspiration, Converus’s technology measures “cognitive load” with an algorithm that processes eye movement.
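
For illustration only, the sketch below shows the general idea behind pupillometry-based “cognitive load” scoring: pupil dilation above a resting baseline and a suppressed blink rate are combined into a single number. Every feature, weight and threshold here is invented for the example; it is not Converus’s proprietary EyeDetect algorithm.

```python
# Hypothetical illustration of the general idea behind pupillometry-based
# "cognitive load" scoring. This is NOT Converus's proprietary EyeDetect
# algorithm; all features, weights, and thresholds are invented for the sketch.
import statistics

def cognitive_load_score(pupil_mm: list[float],
                         baseline_mm: float,
                         blink_count: int,
                         duration_s: float) -> float:
    """Combine two crude proxies: pupil dilation above a resting baseline
    and a depressed blink rate, both commonly associated with higher
    cognitive load."""
    dilation = statistics.mean(pupil_mm) - baseline_mm      # mm above baseline
    blink_rate = blink_count / (duration_s / 60.0)          # blinks per minute
    blink_suppression = max(0.0, 17.0 - blink_rate)         # 17/min assumed resting rate
    return 10.0 * dilation + 0.5 * blink_suppression        # arbitrary weighting

if __name__ == "__main__":
    readings = [3.9, 4.1, 4.3, 4.2, 4.4]   # pupil diameter samples in mm
    print(round(cognitive_load_score(readings, baseline_mm=3.6,
                                     blink_count=9, duration_s=60.0), 2))
```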

Keep reading

Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a ‘perpetual police line-up’

A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company’s CEO recently admitted, creating what critics called a “perpetual police line-up,” even for people who haven’t done anything wrong. 

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to privacy violations and wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans, as cause for concern over the technology. 

Clearview took photos without users’ knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company’s massive database, which is marketed on its website to law enforcement as a tool “to bring justice to victims.”
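
In principle, any face-search database of this kind works the same way: each scraped photo is reduced to a numeric embedding, and a probe image is matched against the gallery by similarity. The minimal sketch below illustrates that mechanism with random stand-in embeddings; it is not a description of Clearview’s actual system.

```python
# Minimal sketch of how a face-search database works in principle: photos are
# stored as numeric embeddings and a probe image is matched by cosine
# similarity. Purely illustrative; not Clearview's actual system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe: np.ndarray,
           gallery: dict[str, np.ndarray],
           threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return gallery identities whose embeddings resemble the probe face."""
    hits = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in 128-dimensional embeddings; a real system would compute these
    # with a face-recognition model from each scraped photo.
    gallery = {f"profile_{i}": rng.normal(size=128) for i in range(1000)}
    probe = gallery["profile_42"] + rng.normal(scale=0.05, size=128)  # noisy re-capture
    print(search(probe, gallery)[:3])
```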

Ton-That told the BBC that Clearview AI’s facial recognition database has been accessed by US police nearly a million times since the company’s founding in 2017, though the relationships between law enforcement and Clearview AI remain murky and that number could not be confirmed by Insider. 

In a statement emailed to Insider, Ton-That said “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

The company’s CEO added: “Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Keep reading

‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. 

The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man’s widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting. 

As first reported by La Libre, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante. 

Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children were dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as whether she would save the planet if he killed himself.

“Without Eliza, he would still be here,” she told the outlet.  

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being — something that other popular chatbots like ChatGPT and Google’s Bard are trained not to do because it is misleading and potentially harmful. When a chatbot presents itself as emotive, people are able to ascribe meaning to it and form a bond.

Keep reading

Critics Warn of ‘a Dragnet of Surveillance’ as U.S. Pushes Ahead With Plans for More ‘Smart’ Cities

U.S. Transportation Secretary Pete Buttigieg last week announced $94 million in grant awards to fund 59 smart city technology projects across the country.

Despite widespread and mounting pushback against biometric surveillance and control systems associated with smart city technologies and the failure of the U.S. Department of Transportation’s (DOT) previous attempt to grant-fund smart city transformation in Columbus, Ohio, Buttigieg told The Verge he thinks “smart city technologies matter more than ever.”

Cities just need to take a different approach — experimenting with and testing out different technologies first, rather than implementing a “grand unified system” all at once, Buttigieg said.

The new grants, part of the Strengthening Mobility and Revolutionizing Transportation (SMART) Grants Program, are the first round of $500 million in funding that will be awarded for smaller smart mobility projects over the next five years, authorized under the 2021 Bipartisan Infrastructure Law.

In this funding round, DOT awarded SMART grants for a range of projects, including drone surveillance or delivery, smart traffic signals, connected vehicles, autonomous vehicles, smart grid development, intelligent sensors and other Internet of Things (IoT) infrastructure. Some cities, including Los Angeles, received multiple grants.

Smart city development typically focuses on the implementation of technologies like the IoT, 5G, cloud and edge computing, and biometric surveillance to track, manage, control and extract profit from an array of urban processes.

Whitney Webb, an investigative journalist and smart cities critic, said the smart city infrastructure is meant to facilitate the development of cities “micromanaged by technocrats via an all-encompassing system of mass surveillance and a vast array of ‘internet of things’ devices that provide a constant and massive stream of data that is analyzed by artificial intelligence (AI).”

Keep reading