Miami Police Used Clearview AI Facial Recognition in Arrest of Homeless Man

Facial recognition technology is increasingly being deployed by police officers across the country, but the scope of its use has been hard to pin down.

In Miami, it’s used for cases big and exceedingly small, as one case Reason recently reviewed showed: Miami police used facial recognition technology to identify a homeless man who refused to give his name to an officer. That man was arrested, but prosecutors quickly dropped the case after determining the officer lacked probable cause for the arrest. 

The case was barely a blip in the daily churn of Miami’s criminal justice system, but it shows the spread of facial recognition technology and the use of retaliatory charges against those who annoy the police.

Lisa Femia, a staff attorney at the Electronic Frontier Foundation (EFF), which advocates for digital privacy rights, calls the case “a particularly egregious example of mission creep with facial recognition technology.”

“It’s often advertised as a way for law enforcement to solve the worst of the worst crimes,” Femia says. “And instead we have law enforcement here using it to harass the homeless.”

According to a police incident report, a man, whom Reason is not identifying because he was ultimately not prosecuted, was sleeping on a bench in a parking garage at Miami International Airport on the morning of November 13, 2023, when he was approached by a Miami-Dade County police officer.

“While on routine patrol at the Miami International Airport I observed defendant sleeping on a bench in the Dolphin garage, covered with a blanket and unbagged personal items on airport luggage cart,” the officer wrote in his report. “The bench is provided for passengers waiting for vehicles to and from the airport. It is not designated for housing.”

The report notes that Miami-Dade police have been directed to address homelessness at the airport and that the officer initiated contact to see if the man had been previously issued a trespass warning.

The man didn’t have an ID, and he gave the officer a fake name and a 2010 date of birth.

“Defendant was obviously not a 13-year-old juvenile,” the report says. “I provided defendant several opportunities to provide correct information and he refused.”

Under Florida law, police can demand identification from a pedestrian only when there is reasonable suspicion that they have committed a crime. For example, two Florida sheriff’s deputies were disciplined in 2022 after they arrested a legally blind man for refusing to show his ID.

This officer had other means at his disposal, though. “I identified defendant via facial recognition from Clearview, with assistance from C. Perez, analyst at the MDPD real time crime center,” the report says.

Keep reading

Sharing deepfake porn could lead to lengthy prison time under proposed law

The US seems to be getting serious about criminalizing deepfake pornography after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates last October.

On Tuesday, Rep. Joseph Morelle (D-NY) announced that he has re-introduced the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” Under the proposed law, anyone sharing deepfake pornography without an individual’s consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

The hope is that steep penalties will deter companies and individuals from allowing the disturbing images to be spread. The bill creates a criminal offense for sharing deepfake pornography “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual” or with “reckless disregard” or “actual knowledge” that images will harm the individual depicted. It also provides a path for victims to sue offenders in civil court.

Rep. Tom Kean (R-NJ), who co-sponsored the bill, said that “proper guardrails and transparency are essential for fostering a sense of responsibility among AI companies and individuals using AI.”

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not,” Morelle said. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

Joining Morelle in pushing to criminalize deepfake pornography were Dorota and Francesca Mani, who have spent the past two months meeting with lawmakers, The Wall Street Journal reported. The mother and daughter experienced the horror Morelle described firsthand when the New Jersey high school confirmed that 14-year-old Francesca was among the students targeted last year.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Francesca said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Morelle’s office told Ars that “advocacy from partners like the Mani family” is “critical to bringing attention to this issue” and getting the proposed law “to the floor for a vote.”

Morelle first introduced the bill in December 2022, but it failed to pass that year or in 2023. He is reintroducing it in 2024 after seemingly gaining more support during a House Oversight subcommittee hearing on “Advances in Deepfake Technology” last November.

At that hearing, many lawmakers warned of the dangers of AI-generated deepfakes, citing a study from the Dutch AI company Sensity, which found that 96 percent of deepfakes online are deepfake porn—the majority of which targets women.

But lawmakers also made clear that it’s currently hard to detect AI-generated images and distinguish them from real images.

According to a hearing transcript posted by the nonprofit news organization Tech Policy Press, David Doermann—currently interim chair of the University at Buffalo’s computer science and engineering department and former program manager at the Defense Advanced Research Projects Agency (DARPA)—told lawmakers that DARPA was already working on advanced deepfake detection tools but still had more work to do.

To support laws like Morelle’s, lawmakers have called for more funding for DARPA and the National Science Foundation to aid in ongoing efforts to create effective detection tools. At the same time, President Joe Biden—through a sweeping AI executive order—has pushed for solutions like watermarking deepfakes. Biden’s executive order also instructed the Department of Commerce to establish “standards and best practices for detecting AI-generated content and authenticating official content.”

Keep reading

AI Fraud Act Could Outlaw Parodies, Political Cartoons, and More

Mixing new technology and new laws is always a fraught business, especially if the tech in question relates to communication. Lawmakers routinely propose bills that would sweep up all sorts of First Amendment-protected speech. We’ve seen a lot of this with social media, and we’re starting to see it with artificial intelligence. Case in point: the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act. Under the auspices of protecting “Americans’ individual right to their likeness and voice,” the bill would restrict a range of content wide enough to ensnare parody videos, comedic impressions, political cartoons, and much more.

The bill’s sponsors, Reps. María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Pa.), say they’re concerned about “AI-generated fakes and forgeries,” per a press release. They aim to protect people from unauthorized use of their own images and voices by defining these things as the intellectual property of each individual.

The No AI FRAUD Act cites several instances of AI being used to make it appear that celebrities created ads or art that they did not actually create. For instance, “AI technology was used to create the song titled ‘Heart on My Sleeve,’ emulating the voices of recording artists Drake and The Weeknd,” states the bill’s text. AI technology was also used “to create a false endorsement featuring Tom Hanks’ face in an advertisement for a dental plan.”

But while the examples in the bill are directly related to AI, the bill’s actual reach is much more expansive, targeting a wide swath of “digital depictions” or “digital voice replicas.”

Salazar and Dean say the bill balances people’s “right to control the use of their identifying characteristics” with “First Amendment protections to safeguard speech and innovation.” But while the measure does nod to free speech rights, it also expands the types of speech deemed legally acceptable to restrict. It could mean way more legal hassles for creators and platforms interested in exercising their First Amendment rights, and result in a chilling effect on certain sorts of comedy, commentary, and artistic expression.

Keep reading

Bill Gates Hopes AI Can Reduce “Polarization,” Save “Democracy,” Ignores Censorship Implications

The notion that whoever controls and shapes AI could wield significant influence over large swathes of society may become one of the most alarming and prominent concerns of the next few years.

In a recent episode of “Unconfuse Me with Bill Gates,” Sam Altman, the CEO of OpenAI, and tech billionaire Bill Gates delved into the controversial potential of artificial intelligence (AI) as a tool for maintaining democracy and promoting world peace.

The discussion was aired on January 11, 2024.

The conversation explored the idea of using artificial intelligence as an instrument to foster unity in society, enhance global amity, and help overcome geopolitical polarization.

Microsoft, founded by Gates, and OpenAI, whose CEO Altman works closely with Microsoft, both promote using AI to solve global issues.

Gates spoke excitedly on the topic: “I do think AI, in the best case, can help us with some hard problems…Including ‘polarization’ because potentially that breaks democracy and that would be a super bad thing.”

In addition to resolving polarization, the two heavyweights also discussed the notion of AI potentially acting as a peacemaking tool.

Keep reading

OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”

OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

Up until January 10, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document “clearer” and “more readable,” and which includes many other substantial language and formatting changes.

Keep reading

“Disinformation Doomsday Scenario”: AI-Powered Propaganda Is The Latest Threat To Humanity (That Must Be Censored)

The Trump-Russia hoax was one of the most notable disinformation operations in modern history. A major component of the hoax was the notion that Russia had influenced the 2016 US election through disinformation, and tricked the American public into electing Donald Trump.

In the fullness of time, of course, it was revealed that the Clinton campaign, Obama administration, and their allies in corporate media had peddled fabricated information themselves. Yet the threat of ‘disinformation’ has blossomed into an entire ecosystem of collaboration between governments and private think tanks, one that has been used to censor free speech around the globe.

To that end, the World Economic Forum has now declared “Disinformation” to be the world’s greatest threat in its 2024 “Global Risks Report,” which will obviously require more control over free speech.

Keep reading

Pentagon Aims to Create a Human-Machine Soldier as Part of Dangerous New Artificial Intelligence Race

“In 1962, J.C.R. Licklider created the US Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA). His vision, published two years earlier in his seminal work Man–Computer Symbiosis (Licklider 1960), heralded an ambitious, and ultimately successful, push to develop artificial intelligence (AI) technologies. The Agency, now called DARPA with the D emphasizing its focus on defense applications, has supported AI research, as popularity has ebbed and flowed, over the past 60 years.”[1]

The Pentagon has been at the forefront of researching and developing artificial intelligence technologies for use in warfare and spying since the early 1960s, primarily through the Defense Advanced Research Projects Agency (DARPA).[2] According to the Brookings Institution, 87% of the value of federal contracts from 2017 to 2022 that had the term “artificial intelligence” in the contract description was with the Department of Defense.[3] This article reviews the Pentagon’s current application of AI technologies.[4]

Keep reading

Data Solutions Provider TELUS International Is Paying $50 for Images of Kids to Train Google’s AI

In a recent initiative, Google and TELUS International, a subsidiary of the Canadian tech conglomerate TELUS, have collaborated to collect biometric data from children for age verification purposes. This project, running from November 2023 to January 2024, involved parents filming their children’s faces, capturing details such as eyelid shape, skin tone, and facial geometry. Parents who participated were paid $50 per child.

First reported by 404media, the project asked parents to take 11 short videos of their children wearing things like face masks or hats, plus videos of the children’s faces with no coverings at all. Each video had to be less than 40 seconds long, and participants were expected to spend 30 to 45 minutes on the task.

According to the summary document, which has now been taken down, a TELUS International moderator would be on a call while the parent took these videos of the child.

According to TELUS International, the purpose of this project was to capture a diverse range of biometric data to ensure that their customer’s services and products are representative of various demographics. Google told 404media that the goal was to enhance authentication methods, thus providing more secure tools for users. 

“As part of our commitment to delivering age-appropriate experiences and to comply with laws and regulations around the world, we’re exploring ways to help our users verify their age. Last year, TELUS helped us find volunteers for a project exploring whether this could be done via selfies. From there, Google collected videos and images of faces, clearly explaining how the content would be used, and, as with all research involving minors, we required parental consent for participants under the age of 18. We’ve also put strict privacy protections in place, including limiting the amount of time the data will be retained and providing all participants the option to delete their data at any time,” Google told 404media in a statement.

While this aligns with Google’s broader commitment to developing responsible and ethical facial recognition technology, the project has raised significant concerns regarding children’s privacy and consent.

Parents had to consent to Google and TELUS International collecting their child’s personal and biometric information in order to participate. This included the shape of their eyelids, the color of their skin, and their “facial geometry.” According to the TELUS International summary, Google would then keep the data for five years at most, which, for some participants, would extend into their early adulthood.

Keep reading

Half Of All Skills Will Be Outdated Within Two Years, Study Suggests

Executives believe nearly half of the skills that exist in today’s workforce won’t be relevant just two years from now, thanks to artificial intelligence. And a lot of that includes their own skills. This startling proclamation came out of a recent survey of 800 executives and 800 employees released by edX, an online education platform.

The executives estimate that nearly half (49%) of the skills that exist in their workforce today won’t be relevant in 2025. Nearly as many, 47%, believe their workforces are unprepared for the future workplace. Identifying skills shortages is not a surprising result to come out of an educational platform provider, but the short timespan is an eye-opener.

Executives in the survey estimate that within the next five years, their organizations will eliminate over half (56%) of entry-level knowledge worker roles because of AI. What’s more, 79% of executives predict that entry-level knowledge worker jobs will no longer exist as AI creates an entirely new suite of roles for employees entering the workforce. On top of that, 56% say their own roles will be “completely” or “partially” replaced by AI.

However, there are industry leaders who are skeptical of such heavy-handed doom-laden predictions. “In my view, the immediate impact of AI on career goals is likely to be minimal,” says Richard Jefts, executive vice president and general manager at HCL Software. “While many companies claim to be leveraging AI, the reality is that most are still in the early stages of adoption.” Expect more of a longer-term impact on careers as AI matures, he says.

Keep reading

AI Watermarking Is Advocated by Biden’s Advisory Committee Member, Raising Concerns for Parody and Memes

The Biden administration doesn’t seem quite certain how to do it – but it would clearly like to see AI watermarking implemented as soon as possible, despite the idea being marred by many misgivings.

And that is even despite what some reports admit is a lack of consensus on what a digital watermark actually is. Standards and enforcement regulation are also missing. As has become customary, where the government is constrained or insufficiently competent, it effectively enlists private companies.

On the standards problem, those enlisted seem to be none other than tech dinosaur Adobe and China’s TikTok.

It’s hardly a conspiracy theory to think the push mostly has to do with the US presidential election later this year, as watermarking of this kind can be “converted” from its original stated purpose – into a speech-suppression tool.

The publicly presented argument in favor is obviously not quite that, although one can read between the lines. Namely – AI watermarking is promoted as a “key component” in combating misinformation, deepfakes included.

And this is where perfectly legal and legitimate genres like parody and memes could suffer from AI watermarking-facilitated censorship.

Spearheading the drive, such as it is, is Biden’s National Artificial Intelligence Advisory Committee. Now one of its members, Carnegie Mellon University’s Ramayya Krishnan, admits there are “enforcement issues” but remains enthusiastic about the possibility of using technology that “labels how content was made.”

From the Committee’s point of view, a companion AI tool would be a cherry on top.

However, there’s still no actual cake. Different companies are developing watermarking schemes that fall into three categories: visible marks, invisible marks (i.e., detectable only by algorithms), and cryptographic metadata.
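To make the second category concrete, here is a minimal sketch of how an invisible watermark can work in principle, assuming a toy least-significant-bit scheme written in Python with NumPy. The function names and the "ai-generated" tag are illustrative only, not any vendor's actual method; real systems use far more robust techniques designed to survive compression, cropping, and re-encoding.

```python
# Toy illustration of an "invisible" watermark: hide a short tag in the
# least-significant bits of image pixels, imperceptible to viewers but
# recoverable by an algorithm that knows where to look.
import numpy as np

def embed_watermark(pixels: np.ndarray, tag: bytes) -> np.ndarray:
    """Hide `tag` (prefixed with a 2-byte length) in the pixel LSBs."""
    payload = len(tag).to_bytes(2, "big") + tag
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for this tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray) -> bytes:
    """Read the length prefix, then recover that many tag bytes from the LSBs."""
    lsb = pixels.flatten() & 1
    length = int.from_bytes(np.packbits(lsb[:16]).tobytes(), "big")
    return np.packbits(lsb[16 : 16 + 8 * length]).tobytes()

# Usage: mark a random 64x64 grayscale "image" and read the tag back.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_watermark(image, b"ai-generated:model-x")
assert extract_watermark(marked) == b"ai-generated:model-x"
```

The cryptographic-metadata category works differently: instead of altering pixels, it attaches signed provenance records to the file, which is why the approaches are usually discussed as complements rather than substitutes.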

Keep reading