‘I actually had a conversation with Dad’: The people using AI to bring back dead relatives – including a plan to harvest DNA from graves to build new clone bodies

Can artificial intelligence really summon dead relatives back from beyond the grave?

A growing number of people are trying to find out, with pioneers such as inventor and futurist Ray Kurzweil using artificial intelligence to recreate lost relatives.

Kurzweil’s attempts to ‘bring back’ his father – who died when Kurzweil was 22 – using AI began more than 10 years ago and are chronicled this year in a comic book by Kurzweil’s daughter Amy.

Kurzweil created a ‘replicant’ of his father by feeding an artificial intelligence system with his father’s letters, essays and musical compositions.

He now has even more ambitious plans to bring his father back to life using nanotechnology and DNA from his father’s buried bones.

Keep reading

A.I. Agents: ChatGPT Can Write Its Own Code And Execute It

The widely used chatbot ChatGPT was designed to generate digital text, everything from poetry to term papers to computer programs. But when a team of artificial intelligence researchers at the computer chip company Nvidia got their hands on the chatbot’s underlying technology, they realized it could do a lot more.

Within weeks, they taught it to play Minecraft, one of the world’s most popular video games. Inside Minecraft’s digital universe, it learned to swim, gather plants, hunt pigs, mine gold and build houses.

“It can go into the Minecraft world and explore by itself and collect materials by itself and get better and better at all kinds of skills,” said an Nvidia senior research scientist, Linxi Fan, who is known as Jim.

The project was an early sign that the world’s leading artificial intelligence researchers are transforming chatbots into a new kind of autonomous system called an A.I. agent. These agents can do more than chat. They can use software apps, websites and other online tools, including spreadsheets, online calendars, travel sites and more.

In time, many researchers say, the A.I. agents could become far more sophisticated, and could replace office workers, automating almost any white-collar job.
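The agent pattern the Nvidia team explored can be sketched in a few lines: a model proposes the next action from what it has observed, the environment executes it, and the feedback loops back in. The sketch below is purely illustrative (the function names and the rule-based stub standing in for the LLM are inventions for this example, not Nvidia's actual code):

```python
# Minimal sketch of an agent loop: a "policy" proposes the next action from
# the current state, the environment executes it, and the feedback is logged.
# In a real A.I. agent the propose_action stub would be an LLM call.

def propose_action(state):
    """Stand-in for an LLM deciding what to do next (trivial rule here)."""
    if "wood" not in state["inventory"]:
        return "gather wood"
    return "build shelter"

def execute(action, state):
    """Stand-in for the game world carrying out an action."""
    if action == "gather wood":
        state["inventory"].append("wood")
        return "collected 1 wood"
    return "shelter built"

def run_agent(steps=2):
    state = {"inventory": []}
    log = []
    for _ in range(steps):
        action = propose_action(state)
        feedback = execute(action, state)
        log.append((action, feedback))
    return log
```

The key point is the loop itself: because each step's feedback changes the state the next decision is made from, the agent can "get better" at a task without a human steering every move.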

Keep reading

How Peter Thiel-Linked Tech is Fueling the Ukraine War

“A reluctance to grapple with the often grim reality of an ongoing geopolitical struggle for power poses its own danger. Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.

This is an arms race of a different kind, and it has begun.” – Alex Karp, Palantir CEO

These were the recent words of Palantir CEO Alex Karp, proclaiming in the New York Times that the world has entered a new era of warfare with the rapid acceleration of Artificial Intelligence (AI) technologies. Playing on the recent release of the “Oppenheimer” movie by comparing the dawn of AI with the development of the atomic bomb, Karp argued that the growing role of AI in weapons systems has become “our Oppenheimer moment.”

In his op-ed, Karp states bluntly that this era is a new kind of arms race where inaction equals defeat, positing that a “more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two” is required if the West is to maintain a long-term edge over its adversaries. 

Karp’s words are timely within the context of the ongoing conflict in Ukraine, which – from the beginning – has been a tech-fueled war, as well as a catalyst for further blurring the lines between nation states and the companies that own and operate such technologies. From Microsoft “literally mov[ing] the government and much of the country of Ukraine from on-premises servers to [its] cloud,” to Boston Dynamics’ robot dog, Spot, sweeping mines on the battlefield, as I recently reported for Unlimited Hangout, “much of Ukraine’s war effort, save for the actual dying, has been usurped by the private sector.”

But, as Karp’s words suggest, the longer the conflict goes on, the more technologically advanced the weapons, and the weapons operating systems and software behind them, will become. Indeed, the US military is testing Large Language Models’ (LLMs) capacity to perform military tasks and exercises, including completing information requests that once took days in a matter of minutes, as well as extensive crisis response planning. Ukrainian Minister of Digital Transformation Mykhailo Fedorov, who commands Ukraine’s “Army of Drones” program in a made-for-film collaboration with Star Wars actor Mark Hamill, even recently proclaimed that the proliferation of fully autonomous, lethal drones is “a logical and inevitable next step” in warfare and weapons development.

Indeed, AI and other major technologies are moving to the war’s front lines. For instance, “kamikaze” naval drones equipped with explosives dealt heavy damage to the Crimean bridge in July, and the Washington Post reports that over 200 Ukrainian companies involved in drone production are working with Ukrainian military units to “tweak and augment drones to improve their ability to kill and spy on the enemy.”

As the conflict continues, corporations and controversial defense contractors, like data firm and effective CIA front Palantir, defense contractor Anduril, and facial recognition service Clearview AI, are taking advantage of the war to develop AI-driven weapons systems and facial recognition technologies, perhaps transforming both warfare and AI forever.

Critically, these organizations all receive support from PayPal co-founder and early Facebook investor Peter Thiel, a prominent yet controversial venture capitalist intimately involved in the founding and expansion of a bevy of today’s prominent tech corporations and adjacent organizations. Despite Thiel’s professed libertarian political beliefs, the work of these groups, often co-developed or otherwise advanced by governments and the intelligence community, includes bolstering the State’s capacities for mass surveillance and for data collection and synthesis.

As such, these Thiel-backed groups’ involvement in the war serves not only to develop problematic and unpredictable weapons technologies and systems, but also, apparently, to advance and further interconnect a larger surveillance apparatus built by the collective efforts of Thiel and his elite allies across the public and private sectors. Those efforts arguably amount to the entrenchment of a growing technocratic panopticon aimed at capturing public and private life. Given Thiel’s growing domination over large swaths of the tech industry, his apparent efforts to influence, bypass or otherwise undermine modern policymaking processes, and his anti-democratic sentiments, the activities of Thiel-linked organizations in Ukraine can only signal a willingness to shape the course of current events and the affairs of sovereign nations alike. They also herald the unsettling possibility that this tech, currently being honed on Ukraine’s battlefields, will later be implemented domestically.

In other words, a high-stakes conflict, where victory comes before ethical considerations, facilitates the perfect opportunity for Silicon Valley and the larger US military industrial complex to publicly deepen their relationship and strive towards shared goals in wartime and beyond.

Keep reading

Amazon’s Alexa has been claiming the 2020 election was stolen

Amid concerns the rise of artificial intelligence will supercharge the spread of misinformation comes a wild fabrication from a more prosaic source: Amazon’s Alexa, which declared that the 2020 presidential election was stolen.

Asked about fraud in the race — in which President Biden defeated former president Donald Trump with 306 electoral college votes — the popular voice assistant said it was “stolen by a massive amount of election fraud,” citing Rumble, a video-streaming service favored by conservatives.

The 2020 races were “notorious for many incidents of irregularities and indications pointing to electoral fraud taking place in major metro centers,” according to Alexa, referencing Substack, a subscription newsletter service. Alexa contended that Trump won Pennsylvania, citing “an Alexa answers contributor.”

Multiple investigations into the 2020 election have revealed no evidence of fraud, and Trump faces federal criminal charges connected to his efforts to overturn the election. Yet Alexa disseminates misinformation about the race, even as parent company Amazon promotes the tool as a reliable election news source to more than 70 million estimated users.

Amazon declined to explain why its voice assistant draws 2020 election answers from unvetted sources.

Keep reading

Canada Plots to Increase Online Regulation, Target Search and Social Media Algorithms

Canada is taking steps toward potentially intrusive regulation of artificial intelligence as it is applied in search and social media services. The government’s intentions, now revealed, include AI applications well beyond the realm of generative AI systems like OpenAI’s ChatGPT. Industry giants such as Google and Facebook, which use AI for search results, translation, and recognition of users’ tastes, are among those lined up for regulation, with the pro-censorship government intent on having a say in how these algorithms work.

The information comes by way of Minister François-Philippe Champagne of Innovation, Science and Economic Development Canada (ISED) in a letter submitted to the Industry committee analyzing Bill C-27—the privacy reform and AI regulation bill. Precise amendments remain shielded from scrutiny, however, as the governmental body keeps the proposed changes under wraps.

We obtained a copy of the original bill for you here.

The existing framework in Bill C-27 leaves the identification of AI mechanisms that can be classified into the “high-impact” category to future regulatory proceedings.

Bill C-27’s treatment of search and social media results as “high-impact” systems is likely to raise eyebrows, as the government’s push to regulate technology has so far asserted greater control over content, and therefore speech.

Non-compliance, under this proposal, may invite penalties of up to 3% of gross global revenues.

The legislation veers into controversial territory by unexpectedly folding the regulation of content moderation and discoverability prioritization into the mix, treating these issues as parallel to accusations of algorithmic bias in recruitment or law enforcement. Consequently, Canada’s rules, though claimed to align closely with the EU’s, set the country apart, leaning more toward censorship and less toward free speech.

The news comes on the back of Canada’s more recent online regulations that have raised alarm.

Keep reading

The Great AI Invasion: Given Enough Time, Artificial Intelligence Would Take Over Every Area Of Our Lives

Artificial intelligence is changing our world at a pace that is absolutely breathtaking.  If you had asked me a decade ago whether I would live to see artificial intelligence create a world-class piece of art or a full-length feature film, I would have said no way.  But now those are simple tasks for artificial intelligence to accomplish.  So what is going to happen once AI becomes millions of times smarter and millions of times more powerful than it is today?  Given enough time, AI would take over every area of our lives.  Our world is definitely crazy right now, but fifty years from now it could resemble something out of an extremely bizarre science fiction novel if AI is allowed to continue developing at an exponential rate.

Unfortunately, only a very small minority of the population is even concerned about the potential dangers posed by AI, and that is a problem.

Needless to say, the growth of AI has enormous implications for our economy.

AI can already perform most simple tasks much better and much faster than human workers can, and multiple studies have concluded that millions of jobs are at risk of being lost.  The following comes from Fox News:

For example, in March 2023, technology firm OpenAI released a report that found at least 80% of the U.S. labor force could have at least 10% of their work-related tasks affected by the introduction of GPT, while another 19% of employees may see at least 50% of these work-related tasks impacted. While GPT influence impacts all wage levels, the higher-income jobs potentially face the greatest exposure, concludes OpenAI.

Also in March 2023, researchers at investment banker Goldman Sachs, after collecting data on occupationally-oriented tasks in Europe and the U.S., found that roughly two-thirds of current occupations are exposed to varying degrees of generative AI automation (such as found in ChatGPT), and that AI could substitute for nearly one-fourth of current work performed.

In July 2023, the McKinsey Global Institute issued a report estimating that without generative AI, automation could take over tasks accounting for 21.5% of the hours worked in the U.S. economy by 2030; but with generative AI, that share increased to 29.5%.

So what would happen to all of the workers who would no longer be needed once AI starts taking over most of our jobs?

I think that is a question that all of us should be asking.

Artificial intelligence also threatens to transform our personal relationships.

Keep reading

Artificial Intelligence Goes to War

Uh… gulp… you thought it was bad when that experienced pilot ejected from one of the Air Force’s hottest “new” planes, the F-35 combat fighter, near — no, not China or somewhere in the Middle East — but Charleston, South Carolina. The plane then flew on its own for another 60 miles before crashing into an empty field. And that was without an enemy in sight.

Perhaps we should just be happy that an F-35 ever even made it into the air, given its endless problems in these years. After all, as Dan Grazier of the Center for Defense Information wrote, it’s now “the largest and most expensive weapons program in history.” Yet when it comes to something as significant as “mission availability,” according to the Congressional Budget Office, only about 26% of all F-35s, each of which now costs an estimated $80 million to produce and $44,000 an hour to fly, are available at any moment. Not exactly thrilling, all in all.

Keep reading

Washington U. Prof: AI Girlfriends Are Ruining a Generation of Men

The rise of AI girlfriends is ruining an entire generation of young men by fostering a silent epidemic of loneliness, according to Washington University Professor of Data Science Liberty Vittert.

There are now apps that offer virtual girlfriends to men who want an AI lover to talk to them, let them live out their sexual fantasies, and learn, through data, exactly what they like, according to an op-ed by Vittert published in The Hill.

These apps reportedly have millions of users, who are able to choose the physical attributes and personalities of their virtual girlfriends.

Some of the artificial lovers are even based on real people. One online influencer, for example, created an AI bot of herself and gained over 1,000 users in less than a week. She believes the AI girlfriend version of herself can generate $5 million a month.

Keep reading

How the “Surveillance AI Pipeline” Literally Objectifies Human Beings

The vast majority of computer vision research leads to technology that surveils human beings, a new preprint study that analyzed more than 20,000 computer vision papers and 11,000 patents spanning three decades has found. Crucially, the study found that computer vision papers often refer to human beings as “objects,” a convention that both obfuscates how common surveillance of humans is in the field, and objectifies humans by definition.

“The studies presented in this paper ultimately reveal that the field of computer vision is not merely a neutral pursuit of knowledge; it is a foundational layer for a paradigm of surveillance,” the study’s authors wrote. The study, which has not been peer-reviewed yet, describes what the researchers call “The Surveillance AI Pipeline,” which is also the title of the paper.

The study’s lead author Pratyusha Ria Kalluri told 404 Media on a call that she and her co-authors manually annotated 100 computer vision papers and 100 patents that cited those papers. During this process, the study found that 90 percent of the papers and patents extracted data about humans, and 68 percent reported that they specifically enable extracting data about human bodies and body parts. Only 1 percent of the papers and patents stated they target only non-humans.

Keep reading

NASA to Use Artificial Intelligence to Better Track and Monitor UFOs

As inklings of extraterrestrial life continue to make headlines, the National Aeronautics and Space Administration (NASA) will begin to use advancements in artificial intelligence to better monitor the skies in the hopes that non-human eyes may help them understand unidentified flying object (UFO) sightings and other events that may indicate a non-human presence.

NASA said that artificial intelligence (AI) will be “essential” in fully understanding the data surrounding unidentified anomalous phenomena and their origins in talks that followed the release of their highly anticipated UFO report.

The report did not conclude one way or the other whether NASA believes UFOs are of extraterrestrial origin. But in a press briefing on September 14, the NASA Administrator emphasized that the agency would continue to use all the resources at its disposal to prove or disprove that the unidentified objects showing up all over American military radar, and otherwise baffling the world’s best scientists, are of extraterrestrial origin. These resources now include AI programs that can comb through very large datasets for information a human might miss or take much longer to find.

“We will use AI and machine learning to search the skies for anomalies… and will continue to search the heavens for habitable reality,” NASA Administrator Bill Nelson said. “AI is just coming on the scene to be explored in all areas, so why should we limit any technological tool in analyzing, using data that we have?”

NASA administrators emphasized, both in the report and in the press briefing, that data surrounding unidentified anomalous phenomena (UAPs) and UFOs is often very hard to analyze or quantify, partly because of the nature of the topic and partly because of the sheer volume of data. By using new tools made possible by artificial intelligence, NASA believes it can find patterns or anomalies in data that humans have thus far been unable to find.
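To make concrete what “finding anomalies in data that humans miss” can mean at its simplest, here is a minimal sketch of a statistical outlier test — a z-score filter. This is only an illustration of the general idea; it is not NASA’s actual pipeline, and the function name is an invention for this example:

```python
# Hedged sketch: flag data points that deviate sharply from the rest of a
# dataset, using a z-score test. Illustrative of automated anomaly detection
# in general, not any specific NASA system.
from statistics import mean, stdev

def find_anomalies(values, threshold=3.0):
    """Return the values lying more than `threshold` standard deviations
    from the dataset's mean."""
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]
```

A machine can apply a test like this (or far more sophisticated learned models) across millions of readings in seconds, which is the practical appeal of AI for sifting large sensor datasets.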

Keep reading