After years of research, it appears that artificial intelligence (AI) is reaching a sort of tipping point, capturing the imaginations of everyone from students saving time on their essay writing to leaders at the world’s largest tech companies. Excitement is building around the possibilities that AI tools unlock, but what exactly these tools are capable of and how they work is still not widely understood.
We could write about this in detail, but given how advanced tools like ChatGPT have become, it only seems right to see what generative AI has to say about itself.
As Visual Capitalist’s Nick Routley explains, everything in the infographic above – from illustrations and icons to the text descriptions – was created using generative AI tools such as Midjourney.
Everything that follows in this article was generated using ChatGPT based on specific prompts.
Without further ado, generative AI as explained by generative AI.
As a journalist and commentator, I have closely followed the development of OpenAI, the artificial intelligence research lab founded by Elon Musk, Sam Altman, and other prominent figures in the tech industry. While I am excited about the potential of AI to revolutionize various industries and improve our lives in countless ways, I also have serious concerns about the implications of this powerful technology.
One of the main concerns is the potential for AI to be used for nefarious purposes. Powerful AI systems could be used to create deepfakes, conduct cyberattacks, or even develop autonomous weapons. These are not just hypothetical scenarios – they are already happening. We’ve seen instances of deepfakes being used to create fake news and propaganda, and the use of AI-powered cyberattacks has been on the rise in recent years.
Another concern is the impact of AI on the job market. As AI-powered systems become more sophisticated, they will be able to automate more and more tasks that were previously done by humans. This could lead to widespread job loss, particularly in industries such as manufacturing, transportation, and customer service. While some argue that new jobs will be created as a result of the AI revolution, it’s unclear whether these jobs will be sufficient to offset the losses.
If you aren’t worried yet, I’ll let you in on a little secret: The first three paragraphs of this column were written by ChatGPT, the chatbot created by OpenAI. You can add “columnist” to the list of jobs threatened by this new technology, and if you think there is anything human that isn’t threatened with irrelevance in the next five to 10 years, I suggest you talk to Mr. Neanderthal about how relevant he feels 40,000 years after the arrival of Cro-Magnon man.
The government’s campaign to fight “misinformation” has expanded to adapt military-grade artificial intelligence once used to silence the Islamic State (ISIS) to quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.
The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.
DARPA said those tools were used “to help identify misinformation or deception campaigns and counter them with truthful information,” beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.
The initial idea was to track dissidents who were interested in toppling U.S.-friendly regimes or to follow any potentially radical threats by examining political posts on Big Tech platforms.
DARPA set four specific goals for the program:
- Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
- Recognize persuasion campaign structures and influence operations across social media sites and communities.
- Identify participants and intent, and measure effects of persuasion campaigns.
- Counter messaging of detected adversary influence operations.
NASA and IBM have launched a new collaboration to use artificial intelligence (AI) in the study of scientific data about the Earth and its environment, the US space agency announced in a press release on Wednesday.
“A collaboration between NASA and IBM will use artificial intelligence technology developed by IBM to discover insights in NASA Earth science data,” the release said. “This joint undertaking will be a new application of AI foundational model technology to NASA Earth observation satellite data.”
The project will seek to draw a deeper understanding of patterns in the data, and better projections from it, than was previously possible, the release said.
Since 2016, social workers in a Pennsylvania county have relied on an algorithm to help them determine which child welfare calls warrant further investigation. Now, the Justice Department is reportedly scrutinizing the controversial family-screening tool over concerns that using the algorithm may violate the Americans with Disabilities Act by allegedly discriminating against families with disabilities, including families with mental health issues, the Associated Press reported.
Three anonymous sources broke their confidentiality agreements with the Justice Department, confirming to AP that civil rights attorneys have been fielding complaints since last fall and have grown increasingly concerned about alleged biases built into the Allegheny County Family Screening Tool. While the full scope of the Justice Department’s alleged scrutiny is currently unknown, the Civil Rights Division is seemingly interested in learning more about how using the data-driven tool could potentially be hardening historical systemic biases against people with disabilities.
The county describes its predictive risk modeling tool as a preferred resource to reduce human error for social workers benefiting from the algorithm’s rapid analysis of “hundreds of data elements for each person involved in an allegation of child maltreatment.” That includes “data points tied to disabilities in children, parents, and other members of local households,” Allegheny County told AP. Those data points contribute to an overall risk score that helps determine if a child should be removed from their home.
Although the county told AP that social workers can override the tool’s recommendations and that the algorithm has been updated “several times” to remove disabilities-related data points, critics worry that the screening tool may still be automating discrimination. This is particularly concerning because the Pennsylvania algorithm has inspired similar tools used in California and Colorado, AP reported. Oregon stopped using its family-screening tool over similar concerns that its algorithm may be exacerbating racial biases in its child welfare data.
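The basic mechanics of such a predictive risk-modeling tool can be pictured with a toy sketch. Everything below (the field names, the weights, and the scoring rule) is invented purely for illustration and does not reflect the actual Allegheny County model, whose data elements and methodology are not public in this level of detail:

```python
# Hypothetical sketch of a weighted risk-scoring tool.
# All field names and weights are invented for demonstration only;
# they do NOT represent the Allegheny County Family Screening Tool.

def risk_score(record: dict, weights: dict) -> int:
    """Combine binary data points into a single weighted score."""
    return sum(weights[key] for key, present in record.items()
               if present and key in weights)

# Invented example weights for a handful of data points.
WEIGHTS = {
    "prior_referrals": 3,
    "household_size_over_5": 1,
    "public_benefits": 2,
}

record = {
    "prior_referrals": True,
    "household_size_over_5": False,
    "public_benefits": True,
}

print(risk_score(record, WEIGHTS))  # 5
```

Even in a toy model like this, the critics’ concern is visible: if any weighted data point correlates with disability (or with race, as in Oregon’s case), removing the explicit disability fields does not stop the remaining proxies from pushing scores higher for those families.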
It’s a classic parlor game: Which three people from history would you invite to dinner?
Now, a new app brings the experience to your phone with help from an artificial intelligence chatbot, allowing users to have text conversations with robots meant to simulate the perspectives of notable people from history, from Babe Ruth to Adolf Hitler.
The app, called Historical Figures, lets users converse with any of 20,000 notable people from history, and it has begun to take off in the two weeks since its release.
But this week, it sparked viral controversy online over its inclusion of Hitler, his Nazi lieutenants and other dictators from the past.
“Are neo-Nazis going to be attracted to this site so they can go and have a dialogue with Adolf Hitler?” asked Rabbi Abraham Cooper, the director of global social action for the Simon Wiesenthal Center, a Jewish human rights organization.
The app, created by a 25-year-old Amazon software engineer, is part of the latest rush in tech to build on top of AI software such as ChatGPT, an advanced chatbot prototype that burst onto the scene less than two months ago.
The world of privacy is a constant battlefield. It’s not a static decision where once you’ve done this one single step, you’re now good until the end of time. Instead, you have to stay abreast of the research, studying the ways that privacy is constantly being diminished so that you can then take the appropriate steps to respond.
A robot powered by artificial intelligence is set to become the world’s first “robot lawyer” and will take on speeding ticket cases in court next month, its creators have said.
Joshua Browder, the CEO of the startup DoNotPay, which bills itself as “the home of the world’s first robot lawyer,” confirmed the news on Twitter on Monday.
Browder said the company is offering to pay any lawyer or person $1 million to use the AI lawyer in an upcoming case in front of the United States Supreme Court.
“We have upcoming cases in municipal (traffic) court next month. But the haters will say ‘traffic court is too simple for GPT.’ So we are making this serious offer, contingent on us coming to a formal agreement and all rules being followed,” Browder wrote.
The CEO did not provide further details regarding the defendants in the case or the location of the court.
According to DoNotPay’s official website, the company uses artificial intelligence to “help consumers fight against large corporations and solve their problems like beating parking tickets, appealing bank fees, and suing robocallers.”
2022 was a banner year for artificial intelligence, and with the launch of OpenAI’s incredibly impressive ChatGPT, the industry shows no sign of slowing down.
But for some industry leaders, chatbots and image-generators are far from the final robotic frontier. Next up? Consciousness.
“This topic was taboo,” Hod Lipson, the mechanical engineer in charge of the Creative Machines Lab at Columbia University, told The New York Times. “We were almost forbidden from talking about it — ‘Don’t talk about the c-word; you won’t get tenure’ — so in the beginning I had to disguise it, like it was something else.”
Consciousness is one of the longest-standing, and most divisive, questions in the field of artificial intelligence. And while to some it’s science fiction — and indeed has been the plot of countless sci-fi books, comics, and films — to others, like Lipson, it’s a goal, one that would undoubtedly change human life as we know it for good.
“This is not just another research question that we’re working on — this is the question,” the researcher continued. “This is bigger than curing cancer.”
“If we can create a machine that will have consciousness on par with a human, this will eclipse everything else we’ve done,” he added. “That machine itself can cure cancer.”
Government spending records have revealed that the Biden Administration is dishing out more than half a million dollars in grants to fund the development of artificial intelligence that will censor language on social media in order to eliminate ‘microaggressions’.
The Washington Free Beacon reports that the funding was part of Biden’s $1.9 trillion ‘American Rescue Plan’ and was granted to researchers at the University of Washington in March to develop technologies that could be used to protect online users from ‘discriminatory’ language.
Judicial Watch president Tom Fitton compared the move to the Chinese Communist Party’s efforts to “censor speech unapproved by the state,” calling it a “project to make it easier for their leftist allies to censor speech.”