Lab Administers A.I.-Designed Drug to First Patient

Hong Kong- and New York-based Insilico Medicine announced on Tuesday that a drug it designed with generative artificial intelligence (A.I.) to treat idiopathic pulmonary fibrosis (IPF) has advanced to Phase 2 clinical trials, meaning the drug has now been administered to its first Phase 2 patient.

IPF is a chronic lung disease that makes it progressively harder to breathe, starving the body of much-needed oxygen. It is currently regarded as incurable but treatable.

“Generative A.I.” refers to artificial intelligence that can accept fairly broad instructions from a human user and create a complex finished product. Such systems grow more powerful and useful as they “learn” from the ever-larger bodies of data they are trained on. DALL-E, the image-generation program that can fulfill instructions like “Show me what the Peanuts characters would look like if Picasso drew them,” is a popular example.
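
For a sense of how simple the interface to such a system can be, here is a minimal sketch of requesting an image from a generative model using OpenAI’s Python client; the model name, prompt, and environment setup are illustrative assumptions, not details from the story above.

```python
# Minimal sketch: asking a generative image model to fulfill a broad instruction.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt="Show me what the Peanuts characters would look like if Picasso drew them",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL of the generated image
```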

Creating a new medicine is a daunting task. The design stage involves a great deal of labor-intensive research that A.I. could potentially complete far more quickly.
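
As a purely illustrative example of one such labor-intensive step (and not Insilico’s actual pipeline), the sketch below uses the open-source RDKit library to triage candidate molecules proposed by a generative model, discarding invalid structures and keeping those that pass crude drug-likeness checks. The SMILES strings and thresholds are hypothetical.

```python
# Toy illustration of one automatable step in drug design: triaging candidate
# molecules proposed by a generative model. Not Insilico's actual method.
# Assumes the open-source RDKit cheminformatics library (pip install rdkit).
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Hypothetical SMILES strings, standing in for generative-model output.
candidates = ["CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CCNC(=O)c1ccc(N)cc1", "not-a-molecule"]

for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        continue  # discard chemically invalid output
    mw = Descriptors.MolWt(mol)
    qed = QED.qed(mol)  # quantitative estimate of drug-likeness, 0 to 1
    if mw < 500 and qed > 0.5:  # illustrative thresholds
        print(f"{smiles}: MW={mw:.1f}, QED={qed:.2f} -> keep for further screening")
```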

Keep reading

AI Artist Creates Satanic Panic About Hobby Lobby

People on social media are sharing pictures of what they think are Satanic-seeming displays from Hobby Lobby stores and vowing never to shop there again, much like many people refuse to drink Bud Light or shop at Target for bigoted reasons. Aside from the fact that Americans are currently eager to boycott any company that feigns tolerance toward marginalized people, there’s one big problem with these Hobby Lobby store pictures: They’re not real.

These pictures of Satanic merchandise on the shelves of Hobby Lobby were made by Jennifer Vinyard using the AI image-generating tool Midjourney. That didn’t stop people from credulously sharing the photos on Facebook and TikTok as if they were real and expressing their shock and horror that Hobby Lobby, which bills itself as a Christian company, was selling giant statues of Baphomet.

Vinyard, an Austin-area pharmacist in training, generated the pictures with Midjourney and posted them on June 5 to her personal Facebook and Reddit accounts and to an AI art group on Facebook. The public post in AI Art Universe went viral and, as of this writing, has been shared more than 6,000 times. It gained more than 100 comments before the page shut them down.

Keep reading

The world’s top H.P. Lovecraft expert weighs in on a monstrous viral meme in the A.I. world

Artificial intelligence is scary to a lot of people, even within the tech world. Just look at how industry insiders have co-opted a tentacled monster called a shoggoth as a semi-tongue-in-cheek symbol for their rapidly advancing work.

But their online memes and references to that creature — which originated in influential late author H.P. Lovecraft’s novella “At the Mountains of Madness” — aren’t quite perfect, according to the world’s leading Lovecraft scholar, S.T. Joshi.

If anyone knows Lovecraft and his wretched menagerie, which includes the ever-popular Cthulhu, it’s Joshi. He’s edited reams of Lovecraft collections, contributed scores of essays about the author and written more than a dozen books about him, including the monumental two-part biography “I Am Providence.”

So, after The New York Times recently published a piece from tech columnist Kevin Roose explaining that the shoggoth had caught on as “the most important meme in A.I.,” CNBC reached out to Joshi to get his take — and find out what he thought Lovecraft would say about the squirmy homage from the tech world.

“While I’m sure Lovecraft would be grateful (and amused) by the application of his creation to AI, the parallels are not very exact,” Joshi wrote. “Or, I should say, it appears that AI creators aren’t entirely accurate in their understanding of the shoggoth.”

Keep reading

Philip K. Dick predicted ChatGPT and its grim ramifications

Philip K. Dick had some strange ideas about the future. In his 40-plus novels and 121 short stories, the science fiction author imagined everything from “mood organs,” which allow users to dial up an emotional state, including “the desire to watch TV, no matter what’s on,” to pay-per-use doors that refuse entrance or exit without sufficient coinage. Characters in Dick’s mind-bending novel “Ubik” (published in 1969 and set in 1992) include a psionic talent scout named G.G. Ashwood, who wears “natty birch-bark pantaloons, hemp-rope belt, peekaboo see-through top and train engineer’s tall hat,” and a taxi driver wearing “fuchsia pedal pushers, pink yak fur slippers, a snakeskin sleeveless blouse, and a ribbon in his waist-length dyed white hair.”

But our weird present is looking increasingly like a Philip K. Dick future. While we may not have Deckard’s flying car from “Blade Runner” (adapted from Dick’s novel “Do Androids Dream of Electric Sheep?”), many of the author’s predictions have materialized in real life after appearing on the big screen, including robo-taxis (who can forget the lovable Johnny Cab from 1990’s “Total Recall”?) and the predictive policing Dick called “pre-crime” in his 1956 story “The Minority Report,” which hit the big screen in 2002 with the help of Steven Spielberg and Tom Cruise. And now it looks like Philip K. Dick may also have predicted ChatGPT.

Keep reading

John F Kennedy murder mystery could be solved thanks to new AI technology

AI could finally help solve the mystery of a “second shooter” in the assassination of US President John F Kennedy almost 60 years ago.

Experts believe artificial intelligence, combined with new advances in digital image processing, will settle beyond all doubt whether another gunman took aim at JFK in Dallas on November 22, 1963.
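
The article does not detail the experts’ methods, but as a hedged sketch of the kind of processing involved, the snippet below cleans up a single scanned film frame with classical denoising followed by neural super-resolution, using OpenCV’s contrib modules. The file names, model choice, and parameters are all assumptions for illustration.

```python
# Illustrative sketch only, not the investigators' actual workflow: enhancing
# one frame of degraded film with denoising plus deep-learning super-resolution.
# Assumes opencv-contrib-python and a separately downloaded EDSR model file.
import cv2

frame = cv2.imread("nix_frame.png")  # hypothetical scanned film frame

# Non-local means denoising to suppress film grain.
denoised = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)

# Neural super-resolution to recover fine detail at 4x scale.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")  # pretrained model, obtained separately
sr.setModel("edsr", 4)
enhanced = sr.upsample(denoised)

cv2.imwrite("nix_frame_enhanced.png", enhanced)
```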

The fresh evidence is contained in a little-known home movie shot by local maintenance man Orville Nix, whose descendants have launched a legal bid to get it back from the US government.

His clip – unlike the famous one shot by Abraham Zapruder that has been seen by millions – was filmed from the centre of Dealey Plaza as Kennedy was hit in the head while he and wife Jackie waved at crowds from the back of his open-top limousine.

As a result, it provides the only known unobstructed view of the “grassy knoll” where conspiracy theorists have long claimed another sniper – or snipers – were concealed.

Nix’s film was last examined in 1978 by photo experts hired by the US’s House Select Committee on Assassinations.

Based in part on that analysis, the panel sensationally concluded JFK “was probably assassinated as the result of a conspiracy” and that “two gunmen” likely fired on him.

But the technology of the time left experts in doubt about whether the home movie actually proves this – and the original film subsequently “vanished from view”, with only imperfect copies remaining in the hands of government officials.

Keep reading

Turn Off, Don’t Automate, the Killing Machine

The quest to develop and refine technologically advanced means to commit mass homicide continues, with Pentagon tacticians ever eager to make the military leaner and more lethal. Drone swarms already exist, and as insect-facsimile drones are marketed and produced, we can expect bug drone swarms to appear soon in the skies above places where suspected “bad guys” are said to reside—along with their families and neighbors. Following the usual trajectory, it is only a matter of time before surveillance bug drones are “upgraded” for combat, making it easier than ever for anyone who wishes to kill human beings to do so, whether military personnel, factional terrorists, or apolitical criminals. The development of increasingly lethal and “creative” means of homicide forges ahead not because anyone needs it but because it is generously funded by the U.S. Congress under the assumption that anything labeled a tool of “national defense” is, by definition, good.

To some, the argument from necessity for drones may seem to have merit, given the ongoing military recruitment crisis. There are many good reasons why people no longer wish to enlist, but rather than review the missteps and counterproductive measures implemented in the name of defense throughout the twenty-first century, administrators ignore the most obvious answer to the question of why young people are less enthusiastic than ever to sign their lives away. Why did the Global War on Terror spread from Afghanistan and Iraq to engulf other countries as well? Critics have offered persuasive answers, above all that killing, torturing, maiming, and terrorizing innocent people led to an outpouring of sympathy for groups willing to resist the invaders of their lands. As a direct consequence of U.S. military intervention, Al Qaeda franchises such as ISIS emerged and proliferated. Yet the military plows ahead undeterred in its professed mission to eliminate “the bad guys,” with the killers seemingly oblivious to the fact that they are the primary creators of “the bad guys.”

Meanwhile, the logic of automation has been openly and enthusiastically embraced as the way of the future for the military, as in so many other realms. Who needs soldiers anyway, given that they can and will be replaced by machines? Just as grocery stores today often have more self-checkout stations than human cashiers, the military has been replacing combat pilots with drone operators for years. Taking human beings altogether out of the killing loop is the inevitable next step, because war architects focus on lethality, as though it were the only measure of military success. Removing “the human factor” from warfare will increase lethality and may decrease, if not eliminate, problems such as PTSD. But at what price?

Never a very self-reflective lot, war architects are now less inclined than ever to consider whether their interventions have done more harm than good, given the glaring case of Afghanistan. After twenty years of attempting to eradicate the Taliban, the U.S. military finally retreated in 2021, leaving the Islamic Emirate of Afghanistan (as they now refer to themselves) in power, just as they were in 2001. By focusing on how slick and “neat” the latest and greatest implements of techno-homicide are, those who craft U.S. military policy can divert attention from their abject incompetence at actually winning a war or protecting, rather than annihilating, innocent people.

Keep reading

Japan: AI Systems Can Use Any Data, from Any Source – Even Illegal Ones

While other countries are mulling where to put the brakes on AI development, Japan is going full steam ahead, with the government recently announcing that no data will be off-limits for AI.

In a recent meeting, Keiko Nagaoka, Japanese Minister of Education, Culture, Sports, Science, and Technology, confirmed that no law, including copyright law, will prevent AIs from accessing data in the country.

AIs will be allowed to use data for training, “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise,” said Nagaoka.

The decision is a blow to copyright holders, who argue that AI using their intellectual property to produce new content undermines the very concept of copyright. The issue has already emerged in the west — an AI-generated song mimicking the voices of Drake and The Weeknd went viral on streaming services in April before being swiftly removed.

In the west, much of the discourse around AI is focused on potential harms. AI leaders recently warned governments that development of the technology carries with it a “risk of extinction,” while news companies worry about deepfakes and “misinformation.”

The Biden Administration’s leftist regulators at the FTC, meanwhile, worry that “historically biased” data (such as crime data with racial imbalances) will lead to outcomes that conflict with “civil rights.” Many leftist agitators in the west want to cut off AIs from such data.

Keep reading

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.
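
To make “severity score” concrete, here is a minimal sketch of scoring a piece of text, assuming Azure’s azure-ai-contentsafety Python SDK; the endpoint and key are placeholders, and the response attributes follow the SDK’s published interface rather than anything stated in the article.

```python
# Minimal sketch of scoring text with Azure AI Content Safety.
# Assumes the azure-ai-contentsafety package; endpoint and key are placeholders,
# and attribute names follow the SDK's published interface.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(AnalyzeTextOptions(text="some user-submitted text"))

# Each category (hate, sexual, violence, self-harm) receives a severity score;
# a moderator or policy layer decides which scores require action.
for result in response.categories_analysis:
    print(result.category, result.severity)
```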

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s Bing chatbot and GitHub Copilot, the AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Keep reading

‘Godfather of AI’ quits Google — and says he regrets life’s work due to risks to humanity

A prominent artificial intelligence researcher known as the “Godfather of AI” has quit his job at Google – and says he now partly regrets his work advancing the burgeoning technology because of the risks it poses to society.

Dr. Geoffrey Hinton is a renowned computer scientist who is widely credited with laying the AI groundwork that eventually led to the creation of popular chatbots such as OpenAI’s ChatGPT and other advanced systems.

The 75-year-old told the New York Times that he left Google so that he could speak openly about the risks of unrestrained AI development – including the spread of misinformation, upheaval in the job market and other, more nefarious possibilities.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said in an interview published on Monday.

“Look at how it was five years ago and how it is now,” Hinton added later in the interview. “Take the difference and propagate it forwards. That’s scary.”

Hinton fears that AI will only become more dangerous in the future — with “bad actors” potentially exploiting advanced systems “for bad things” that will be difficult to prevent.

Hinton informed Google of his plans to resign last month and personally spoke last Thursday with company CEO Sundar Pichai, according to the report. The computer scientist did not reveal what he and Pichai discussed during the phone call.

Keep reading

Lie Detector Firm Lobbies CIA, DOD on Automated Eye-Scanning Tech

A Utah-based outfit overseen by a former CIA consultant has spent hundreds of thousands of dollars lobbying intelligence and defense agencies, including the CIA and DHS, to adopt its automated lie detection technology, public lobbying disclosures reviewed by The Intercept show. Converus, Inc., boasts on its website that its technology has already been used for job screenings at American law enforcement agencies, corporate compliance and loss prevention in Latin America, and document verification in Ukraine. The company’s management team includes chief scientist John Kircher, a former consultant for the CIA and Department of Defense; Todd Mickelson, former director of product management at Ancestry.com; and Russ Warner, former CEO of the content moderation firm ContentWatch.

Warner told The Intercept that lobbying efforts have focused on changing federal regulations to allow the use of technologies other than the polygraph for lie detection. “The Department of Defense National Center of Credibility Assessment (NCCA) is in charge of oversight of validation and pilot projects throughout the U.S. government of new deception detection technologies,” Warner wrote in an email. “DoD Directive 5210.91 and ODNI Security Agent Directive 2 currently prohibit the use of any credibility assessment solution other than polygraph. For this reason, we have contacted government agencies to consider the use of EyeDetect and other new technologies.”

After finding success in corporate applications and sheriff’s offices, Converus has set its sights on large federal agencies that could apply its EyeDetect technology to a host of uses, including employee clearance screenings and border security. Unlike a polygraph, which relies on an operator asking questions and measuring physiological responses like heart rate and perspiration, Converus’s technology measures “cognitive load” with an algorithm that processes eye movement.
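
Converus’s algorithm is proprietary, so the following is only a toy sketch of the general idea the article describes: summarizing an eye-movement trace into crude “cognitive load” features and scoring them with a trained classifier. Every feature, threshold, and data point here is invented for illustration.

```python
# Purely illustrative toy, not Converus's proprietary EyeDetect algorithm:
# reduce an eye-tracking trace to crude "cognitive load" features and score
# them with a classifier. All features and training data are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(pupil_diameter_mm: np.ndarray, fixation_ms: np.ndarray) -> np.ndarray:
    """Crude per-session features: pupil dilation and fixation statistics."""
    return np.array([
        pupil_diameter_mm.mean(),    # sustained dilation
        pupil_diameter_mm.std(),     # dilation variability
        fixation_ms.mean(),          # average fixation duration
        (fixation_ms > 400).mean(),  # share of unusually long fixations
    ])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))     # fabricated training sessions
y = rng.integers(0, 2, size=100)  # fabricated truthful/deceptive labels
clf = LogisticRegression().fit(X, y)

session = extract_features(rng.normal(4.0, 0.3, 500), rng.normal(300, 80, 200))
print("deception probability (toy):", clf.predict_proba(session.reshape(1, -1))[0, 1])
```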

Keep reading