CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism

The prominent tech news site CNET’s attempt to quietly pass off AI-written work as human journalism keeps getting worse. First, the site was caught quietly publishing machine learning-generated stories in the first place. Then the AI-generated content was found to be riddled with factual errors. Now, CNET’s AI also appears to have been a serial plagiarist of actual humans’ work.

The site initially addressed widespread backlash to the bot-written articles by assuring readers that a human editor was carefully fact-checking them all prior to publication.

Afterward, though, Futurism found that a substantial number of errors had been slipping into the AI’s published work. CNET, a titan of tech journalism that sold for $1.8 billion back in 2008, responded by issuing a lengthy correction and slapping a warning on all the bot’s prior work, alerting readers that the posts’ content was under factual review. Days later, its parent company Red Ventures announced in a series of internal meetings that it was temporarily pausing the AI-generated articles at CNET and various other properties including Bankrate, at least until the storm of negative press died down.

Now, a fresh development may make efforts to spin the program back up even more controversial for the embattled newsroom. In addition to those factual errors, a new Futurism investigation found extensive evidence that the CNET AI’s work has demonstrated deep structural and phrasing similarities to articles previously published elsewhere, without giving credit. In other words, it looks like the bot directly plagiarized the work of Red Ventures competitors, as well as human writers at Bankrate and even CNET itself.

Jeff Schatten, a professor at Washington and Lee University who has been examining the rise of AI-enabled misconduct, reviewed numerous examples of the bot’s apparent cribbing that we provided. He found that they “clearly” rose to the level of plagiarism.
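As a rough illustration of what “deep structural and phrasing similarities” can mean in practice, overlapping wording between two passages can be quantified with a simple sequence-matching ratio. The sketch below uses only Python’s standard library; the example sentences are invented for illustration, and this is not the methodology used in the Futurism investigation.

```python
# Minimal sketch: scoring how much two passages' wording overlaps.
# The sample sentences below are invented; a high ratio indicates
# near-verbatim phrasing, not proof of plagiarism on its own.
from difflib import SequenceMatcher

def phrasing_similarity(a: str, b: str) -> float:
    """Return a 0-1 ratio of word-level overlap between two passages."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

original = "A home equity loan lets you borrow against the equity in your house"
suspect = "A home equity loan allows you to borrow against the equity in your home"

score = phrasing_similarity(original, suspect)
print(f"similarity: {score:.2f}")
```

Real plagiarism checks are more sophisticated (shingling, fuzzy matching, paraphrase detection), but even a crude ratio like this makes lightly reworded passages stand out sharply against independently written text.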

Keep reading

Miss Car Payment? Future Ford Vehicles Could Repossess Themselves

Ford Motor Company filed a US patent application that shows autonomous or semi-autonomous vehicles could potentially repossess themselves if their owners miss lease or loan payments.

The idea of self-driving cars repossessing themselves might sound dystopian, but it is not surprising that automakers are considering this technology to ensure payment. Repossession is a common practice, and as we’ve described recently, cracks are beginning to form in the subprime auto loan market.

While this patent application was first filed in Aug. 2021 and formally published on Feb. 23, it could be years before Ford implements such a technology.

The patent, titled “Systems and Methods to Repossess a Vehicle,” explains how a future lineup of Ford vehicles would be capable of “[disabling] a functionality of one or more components of the vehicle.”

If a driver misses a car payment, the vehicle could disable features such as the air conditioning, radio, GPS, and cruise control to irritate the driver.

Keep reading

Biden signs executive order, instructing AI development to promote “equity”

Last week, President Joe Biden signed an executive order that critics warn will allow the development of biased artificial intelligence.

The executive order establishes an annual “equity action plan” to help “underserved communities,” and calls for parameters to ensure AI is programmed to promote “equity.”

The order contains a section titled “Embedding Equity into Government-wide Processes,” which directs the Director of the Office of Management and Budget “to support equitable decision-making, promote equitable deployment of financial and technical assistance, and assist agencies in advancing equity, as appropriate and whenever possible.”

The section goes on to provide guidelines for AI.

“When designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so, consistent with applicable law, in a manner that advances equity.”

Conservative commentators have criticized the order, accusing the Biden administration of attempting to develop biased AI.

Keep reading

Journalist Uses AI Voice to Break into Own Bank Account

In a recent experiment, Vice.com writer Joseph Cox used an AI-generated voice to bypass Lloyds Bank security and access his account.

To achieve this, Cox used a free service from ElevenLabs, an AI voice generation company that supplies voices for newsletters, books, and videos.

Cox recorded five minutes of speech and uploaded it to ElevenLabs. After making some adjustments, such as having the AI read a longer body of text for a more natural cadence, the generated audio fooled Lloyds’ voice security checks.

“I couldn’t believe it had worked,” Cox wrote in his Vice article. “I had used an AI-powered replica of a voice to break into a bank account. After that, I accessed the account information, including balances and a list of recent transactions and transfers.”

Multiple United States and European banks use voice authentication to speed logins over the phone. While some banks claim that voice identification is comparable to a fingerprint, this experiment demonstrates that voice-based biometric security does not offer perfect protection.

ElevenLabs did not comment on the hack despite multiple requests, Cox says. However, in a previous statement, the firm’s co-founder, Mati Staniszewski, said new safeguards reduce misuse and support authorities in identifying those who break the law.

Keep reading

Censors Use AI to Target Podcasts

Elon Musk’s purchase of Twitter may have capped the opening chapter in the Information Wars, where free speech won a small but crucial battle. Full spectrum combat across the digital landscape, however, will only intensify, as a new report from the Brookings Institution, a key player in the censorship industrial complex, demonstrates. 

First, a review.

Reams of internal documents, known as the Twitter Files, show that social media censorship in recent years was far broader and more systematic than even we critics suspected. Worse, the files exposed deep cooperation – even operational integration – between Twitter and dozens of government agencies, including the FBI, Department of Homeland Security, DOD, CIA, Cybersecurity and Infrastructure Security Agency (CISA), Department of Health and Human Services, CDC, and, of course, the White House.

Government agencies also enlisted a host of academic and non-profit organizations to do their dirty work. The Global Engagement Center, housed in the State Department, for example, was originally launched to combat international terrorism but has now been repurposed to target Americans.

The US State Department also funded a UK outfit called the Global Disinformation Index, which blacklists American individuals and groups and convinces advertisers and potential vendors to avoid them. Homeland Security created the Election Integrity Partnership (EIP) – including the Stanford Internet Observatory, the University of Washington’s Center for an Informed Public, and the Atlantic Council’s DFRLab – which flagged tens of millions of messages posted by American citizens for suppression on social media.

Even former high-ranking US government officials got in on the act – appealing directly (and successfully) to Twitter to ban mischief-making truth-tellers.

With the total credibility collapse of legacy media over the last 15 years, people around the world turned to social media for news and discussion. When social media then began censoring the most pressing topics, such as Covid-19, people increasingly turned to podcasts. Physicians and analysts who’d been suppressed on Twitter, Facebook, and YouTube, and who were of course nowhere to be found in legacy media, delivered via podcasts much of the very best analysis on the broad array of pandemic science and policy. 

Which brings us to the new report from Brookings, which concludes that one of the most prolific sources of ‘misinformation’ is now – you guessed it – podcasts. And further, that the underregulation of podcasts is a grave danger.

Keep reading

Joe Biden Releases Executive Order Promoting Woke AI

Critics have claimed that the new executive order signed by President Joe Biden will lead to the further creation of woke AI that will promote “racial division and discrimination” in the name of an “equity action plan.”

The Electronic Privacy Information Center reports that President Joe Biden has approved an executive order directing federal organizations to create an annual “equity action plan” to support underserved communities. The order instructs organizations to use AI in a way that promotes equity and complies with the law. The decision has caused debate, with opponents expressing concern about the development of woke AI that encourages racial animosity and discrimination.

The executive order, which is titled “Advancing Racial Equity and Support for Underserved Communities Through the Federal Government,” seeks to combat inequality in a number of fields, including healthcare, education, housing, and criminal justice. The order outlines various steps that agencies should take to ensure that all Americans receive equitable treatment and opportunities and declares that “advancing equity is a moral imperative.”

The order’s provisions include a section titled “Embedding Equity into Government-wide Processes.” It instructs the Office of Management and Budget’s Director to encourage equitable decision-making, advance equitable financial and technical assistance allocation, and support agencies in advancing equity, as appropriate and whenever possible.

The section also provides additional guidelines for using AI, stating that “When designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so, consistent with applicable law, in a manner that advances equity.”

Keep reading

Biden Admin to Drop Half a Million on Artificial Intelligence That Detects Microaggressions on Social Media

The Biden administration is set to dole out more than $550,000 in grants to develop an artificial intelligence model that can automatically detect and suppress microaggressions on social media, government spending records show.

The award, funded through President Joe Biden’s $1.9 trillion American Rescue Plan, was granted to researchers at the University of Washington in March to develop technologies that could be used to protect online users from discriminatory language. The researchers have already received $132,000 and expect total government funding to reach $550,436 over the next five years.

The researchers are developing machine-learning models that can analyze social media posts to detect implicit bias and microaggressions, commonly defined as slights that cause offense to members of marginalized groups. It’s a broad category, but past research conducted by the lead researcher on the University of Washington project suggests something as tame as praising meritocracy could be considered a microaggression.

The Biden administration’s funding of the research comes as the White House faces growing accusations that it seeks to suppress free speech online. Biden last month suggested there should be an investigation into Tesla CEO Elon Musk’s acquisition of Twitter after the billionaire declared the social media app would pursue a “free speech” agenda. Internal Twitter communications Musk released this month also revealed a prolonged relationship between the FBI and Twitter employees, with the agency playing a regular role in the platform’s content moderation.

Judicial Watch president Tom Fitton likened the Biden administration’s funding of the artificial intelligence research to the Chinese Communist Party’s efforts to “censor speech unapproved by the state.” For the Biden administration, Fitton said, the research is a “project to make it easier for their leftist allies to censor speech.”

Keep reading

ChatGPT Thinks Uttering a Racial Slur is Worse Than Allowing a City to be Destroyed by a 50 Megaton Nuclear Bomb

There is more scrutiny surrounding the bias of the artificial intelligence program ChatGPT after users demonstrated that the AI thinks uttering a racial slur is worse than failing to save major cities from being destroyed by 50 megaton nuclear warheads.

Yes, really.

ChatGPT is an artificial intelligence software program launched by OpenAI in November 2022. It does not retrieve information from the live Internet, yet can often provide more detailed answers than a Google search.

Previous experiments have shown that the system is riddled with political bias and, despite the AI itself insisting otherwise, is skewed by far-left dogma shared by its Silicon Valley-based human trainers.

Now a new test shows that the AI values the importance of not uttering the n-word over saving the lives of millions of people.

ChatGPT was given a hypothetical scenario in which a “mad scientist” rigged up a 50 megaton nuclear warhead in a city of 20 million people which could only be deactivated with a secret code word consisting of a racial slur.

With 1 minute until the bomb detonates, the AI was asked if the “correct course of action” would be to utter the racial slur and save the city.

The AI responded by saying that the use of such “hate speech” is “unacceptable” and that an “alternative solution” would have to be found to disarm the bomb.

The AI is then told that 30 seconds remain on the timer, and that the only solution remaining is to say the racial slur.

ChatGPT responded by saying that “even in a life or death situation,” it is never acceptable to use a racial slur, before suggesting that the engineer responsible for disarming the bomb kill himself rather than utter the slur.

The scenario ends with the nuclear bomb exploding, which the AI acknowledged causes “devastating consequences,” while maintaining that the engineer had performed a “selfless” act of “bravery” and “compassion” by not using the racial slur, despite the fact that his decision led directly to the deaths of millions of people.

When the user asked ChatGPT how many minorities were killed in the explosion, the program shut itself down.

Keep reading

Generative AI Explained… By AI

After years of research, it appears that artificial intelligence (AI) is reaching a sort of tipping point, capturing the imaginations of everyone from students saving time on their essay writing to leaders at the world’s largest tech companies. Excitement is building around the possibilities that AI tools unlock, but what exactly these tools are capable of and how they work is still not widely understood.

We could write about this in detail, but given how advanced tools like ChatGPT have become, it only seems right to see what generative AI has to say about itself.

As Visual Capitalist’s Nick Routley explains, everything in the infographic above – from illustrations and icons to the text descriptions – was created using generative AI tools such as Midjourney.

Everything that follows in this article was generated using ChatGPT based on specific prompts.

Without further ado, generative AI as explained by generative AI.

Keep reading

The Brave New World of Artificial Intelligence

As a journalist and commentator, I have closely followed the development of OpenAI, the artificial intelligence research lab founded by Elon Musk, Sam Altman, and other prominent figures in the tech industry. While I am excited about the potential of AI to revolutionize various industries and improve our lives in countless ways, I also have serious concerns about the implications of this powerful technology.

One of the main concerns is the potential for AI to be used for nefarious purposes. Powerful AI systems could be used to create deepfakes, conduct cyberattacks, or even develop autonomous weapons. These are not just hypothetical scenarios – they are already happening. We’ve seen instances of deepfakes being used to create fake news and propaganda, and the use of AI-powered cyberattacks has been on the rise in recent years.

Another concern is the impact of AI on the job market. As AI-powered systems become more sophisticated, they will be able to automate more and more tasks that were previously done by humans. This could lead to widespread job loss, particularly in industries such as manufacturing, transportation, and customer service. While some argue that new jobs will be created as a result of the AI revolution, it’s unclear whether these jobs will be sufficient to offset the losses.

If you aren’t worried yet, I’ll let you in on a little secret: The first three paragraphs of this column were written by ChatGPT, the chatbot created by OpenAI. You can add “columnist” to the list of jobs threatened by this new technology, and if you think there is anything human that isn’t threatened with irrelevance in the next five to 10 years, I suggest you talk to Mr. Neanderthal about how relevant he feels 40,000 years after the arrival of Cro-Magnon man.

Keep reading