Hackers can unlock over 3 million hotel doors in seconds

When thousands of security researchers descend on Las Vegas every August for what’s come to be known as “hacker summer camp,” the back-to-back Black Hat and Defcon hacker conferences, it’s a given that some of them will experiment with hacking the infrastructure of Vegas itself, the city’s elaborate array of casino and hospitality technology. But at one private event in 2022, a select group of researchers were actually invited to hack a Vegas hotel room, competing in a suite crowded with their laptops and cans of Red Bull to find digital vulnerabilities in every one of the room’s gadgets, from its TV to its bedside VoIP phone.

One team of hackers spent those days focused on the lock on the room’s door, perhaps its most sensitive piece of technology of all. Now, more than a year and a half later, they’re finally bringing to light the results of that work: a technique they discovered that would allow an intruder to open any of millions of hotel rooms worldwide in seconds, with just two taps.

Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries.

By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.

“Two quick taps and we open the door,” says Wouters, a researcher in the Computer Security and Industrial Cryptography group at KU Leuven in Belgium. “And that works on every door in the hotel.”
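The two-tap flow can be sketched as a toy simulation. Everything below is invented for illustration: the real Saflok data formats, key derivation, and MIFARE Classic sector layout are proprietary and are not disclosed here. What the sketch does capture is the core weakness the researchers describe, namely that any card from the property carries enough shared secret material to forge a card that rewrites a lock's data and a second card that opens it.

```python
# Toy simulation of the Unsaflok two-tap flow. All data formats, field
# names, and the shared "property secret" are invented for illustration;
# real Saflok internals are proprietary and not reproduced here.

class ToyLock:
    """A lock that trusts any card carrying its property-wide secret."""

    def __init__(self, property_secret: bytes):
        self.property_secret = property_secret
        self.state = b"factory-state"
        self.open = False

    def tap(self, card: dict) -> None:
        # The lock checks only the property-wide secret, not which room
        # the card was issued for -- the core design weakness.
        if card["secret"] != self.property_secret:
            return
        if card["kind"] == "rewrite":
            self.state = card["payload"]   # first tap: overwrite lock data
        elif card["kind"] == "open" and self.state == card["payload"]:
            self.open = True               # second tap: lock opens


def forge_cards(any_property_card: dict) -> tuple[dict, dict]:
    """Given any keycard from the property, forge the two attack cards."""
    secret = any_property_card["secret"]   # read with an RFID read/write device
    payload = b"attacker-chosen"
    rewrite = {"kind": "rewrite", "secret": secret, "payload": payload}
    opener = {"kind": "open", "secret": secret, "payload": payload}
    return rewrite, opener


# Any legitimate card from the hotel (say, from a booked room) suffices.
guest_card = {"kind": "open", "secret": b"hotel-wide-secret", "payload": b"room 1402"}
lock = ToyLock(b"hotel-wide-secret")

card1, card2 = forge_cards(guest_card)
lock.tap(card1)   # first tap rewrites the lock's data
lock.tap(card2)   # second tap opens the door
print(lock.open)  # True
```

Because the forged cards are derived from property-wide material rather than per-room material, the same pair works on every door in the simulated hotel, which mirrors Wouters' claim above.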

Keep reading

Feds Can Film Your Front Porch for 68 Days Without a Warrant, Says Court

Law enforcement in Kansas recorded the front of a man’s home for 68 days straight, 15 hours a day, and obtained evidence to prove him guilty on 16 charges. The officers did not have a search warrant, using a camera on a pole positioned across the street to capture Bruce Hay’s home. A federal court ruled on Tuesday that it was fine for law enforcement to do so, in what’s potentially a major reduction in privacy law.

“Mr. Hay had no reasonable expectation of privacy in a view of the front of his house,” said the U.S. Tenth Circuit Court of Appeals in its decision in United States v. Hay. “As video cameras proliferate throughout society, regrettably, the reasonable expectation of privacy from filming is diminished.”

Hay, an Army veteran, was found guilty of lying about his disability status to collect benefits from the Department of Veterans Affairs (VA). However, the concerning part of this case stems from how VA officers collected evidence against Hay. The veteran appealed his case, arguing that the months-long surveillance of his home crossed a line. However, the federal court ruled that law enforcement can videotape the outside of your home, partially because of how prominent video cameras have become in society.

The federal court’s decision says that video cameras have become “ubiquitous,” and have therefore diminished our expectations of privacy. Police officers wear body cameras now, cellphones have cameras, and many doorbells record your porch. The court isn’t wrong that cameras are everywhere.

Keep reading

How to Figure Out What Your Car Knows About You (and Opt Out of Sharing When You Can)

Cars collect a lot of our personal data, and car companies disclose a lot of that data to third parties. It’s often unclear what’s being collected, and what’s being shared and with whom. A recent New York Times article highlighted how data is shared by G.M. with insurance companies, sometimes without clear knowledge from the driver. If you’re curious about what your car knows about you, you might be able to find out. In some cases, you may even be able to opt out of some of that sharing of data.

Why Your Car Collects and Shares Data

A car (and its app, if you installed one on your phone) can collect all sorts of data in the background, whether or not you realize it. This in turn may be shared for a wide variety of purposes, including advertising and risk-assessment for insurance companies. The list of data collected is long and dependent on the car’s make, model, and trim. But if you look through any car maker’s privacy policy, you’ll see some trends:

  • Diagnostics data, sometimes referred to as “vehicle health data,” may be used internally for quality assurance, research, recall tracking, service issues, and similar unsurprising car-related purposes. This type of data may also be shared with dealers or repair companies for service.
  • Location information may be collected for emergency services, mapping, and to catalog other environmental information about where a car is operated. Some cars may give you access to the vehicle’s location in the app.
  • Some usage data may be shared or used internally for advertising. Your daily driving or car maintenance habits, alongside location data, are a valuable asset to the targeted advertising ecosystem.
  • All of this data could be shared with law enforcement.
  • Information about your driving habits, sometimes referred to as “driving data” or “driver behavior information,” may be shared with insurance companies and used to alter your premiums. This can range from odometer readings to braking and acceleration statistics and even data about what time of day you drive.

Surprise insurance sharing is the thrust of The New York Times article, and certainly not the only problem with car data. We’ve written previously about how insurance companies offer discounts for customers who opt into a usage-based insurance program. Every state except California currently allows the use of telematics data for insurance rating, but privacy protections for this data vary widely across states.

When you sign up directly through an insurer, these opt-in insurance programs have a pretty clear tradeoff and sign-up process, and they’ll likely send you a physical device that you plug into your car’s OBD port, which then collects and transmits data back to the insurer.

Keep reading

Your car is secretly spying on you and driving your insurance rates through the roof: report

Drivers of cars manufactured by General Motors, Honda and other popular brands say that their insurance rates went up after the companies sent data about their driving behavior to insurers without their knowledge.

Kenn Dahl, 65, is a Seattle-area businessman who told The New York Times that his car insurance costs soared by 21% in 2022 after GM’s OnStar Smart Driver computerized system installed in his Chevy Bolt collected information about the particulars of his driving habits.

Dahl said that his insurance agent told him the price increase was based on data collected by LexisNexis, which compiled a report tracking each and every time he and his wife drove their Chevy Bolt over a six-month period.

According to Dahl, the 258-page report contained information about the start and end times of his trips, distance driven and other data detailing possible instances of speeding, hard braking and sharp accelerations.

The report contained information about one particular trip in June which lasted 18 minutes and spanned 7.33 miles.

During that same trip, the LexisNexis report recorded two instances of rapid acceleration and two incidents of hard braking.

Keep reading

Study Estimates Nearly 96% of Private Property Is Open to Warrantless Searches

Police can traipse onto the vast majority of private property in the country without a warrant thanks to a century-old Supreme Court decision, according to a new study by the Institute for Justice, a libertarian-leaning public-interest law firm.

In a study published in the spring 2024 issue of Regulation, a publication of the Cato Institute, Institute for Justice attorney Josh Windham and research analyst David Warren estimate that at least 96 percent of all private land in the country is excluded from the Fourth Amendment’s warrant requirement under the “open-fields doctrine,” which allows police to forego warrants when they search fields, woods, vacant lots, and other property not near a dwelling.

That adds up to nearly 1.2 billion acres open to government trespass, and the Institute for Justice says that’s a conservative estimate. The organization also says the study is the first attempt to quantify how much private property is affected by the Supreme Court’s 1924 ruling in Hester v. U.S., which created the doctrine.

“Now we have hard data showing that the Supreme Court’s century-old error blew a massive hole in Americans’ property and privacy rights,” Windham said in a press release. “Now we know what the open fields doctrine really means: Government officials can treat almost all private land in this country like public property.”

Windham added that “courts and lawmakers across the country will have to face the consequences of keeping this doctrine on the books.” 

Keep reading

Hackers can read private AI assistant chats even though they’re encrypted

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.
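The side channel here is token length: the affected assistants stream their replies one token at a time, so the sizes of successive encrypted packets reveal how long each token is even though the content itself stays unreadable. A minimal sketch of that first recovery step, assuming one token per packet and a fixed per-packet transport overhead (the `OVERHEAD` constant and the captured sizes are placeholders, not real protocol values):

```python
# Recover a token-length sequence from observed encrypted packet sizes.
# Assumes each streamed token travels in its own packet and that the
# transport adds a fixed number of framing/MAC bytes per packet -- both
# simplifications; OVERHEAD is a placeholder, not a real protocol value.

OVERHEAD = 45  # hypothetical fixed ciphertext overhead per packet, in bytes

def token_lengths(packet_sizes: list[int]) -> list[int]:
    """Map each packet's ciphertext size to the plaintext token length."""
    return [size - OVERHEAD for size in packet_sizes]

# Sizes a passive observer might record for one streamed response.
observed = [49, 47, 48, 52, 46]
lengths = token_lengths(observed)
print(lengths)  # [4, 2, 3, 7, 1]
```

The heavy machinery of the attack comes afterward: the researchers feed length sequences like this to language models specially trained to guess plausible plaintexts, which is what turns raw lengths into readable topic and wording inferences. That refinement step is omitted here.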

Keep reading

Report: Federal Government Asked Big Banks to Surveil Purchases of VPNs and Gift Cards, Transfers to Crowdsourcing Sites

In January, the House Judiciary Committee sounded the alarm about the federal government asking banks to surveil transactions related to certain keywords, such as “MAGA” and “Trump,” as part of investigations into the events of January 6, 2021, at the Capitol. But new documents obtained by the House Judiciary Select Subcommittee on the Weaponization of the Federal Government reveal that this surveillance was much broader than those early reports suggested.

The House Judiciary Committee’s initial letter about this financial surveillance revealed that the Financial Crimes Enforcement Network (FinCEN) sent several financial institutions lists of terms that it deemed to be indicators of potential violent extremism and suggested that banks use these search terms to flag suspect transactions. These lists included terms such as “MAGA” and “Trump,” as well as more generic terms related to purchases of transportation and purchases of books (including religious texts) and other media that FinCEN deemed to be “extremist.”

These new documents, which were shared in a report titled “Financial Surveillance in the United States: How Federal Law Enforcement Commandeered Financial Institutions to Spy on Americans,” show that the list of terms FinCEN asked banks and financial institutions to flag was much wider.

In one document, FinCEN brands lawful activities, such as “frequent ATM withdrawals and wire transfers with no apparent economic or business purpose” and “purchases that appear excessive or unusual for hobbyist or other legitimate use,” as potential indicators of violent extremism.

Keep reading

Proposition E Would Make It Easier for Police To Surveil San Francisco

On March 5, San Franciscans will have the opportunity to vote on a ballot measure that would decide whether or not to make them into guinea pigs for surveillance experiments by the San Francisco Police Department (SFPD).

Proposition E purports to streamline the SFPD, with sections on community engagement, recordkeeping, and the department’s vehicle pursuit and use of force policies. But its portion on department use of surveillance technology is troubling.

Under an existing ordinance passed in 2019, the SFPD may only use “surveillance technologies”—like surveillance cameras, automatic license plate readers, or cell site simulators—that have been approved by the San Francisco Board of Supervisors, the city and county legislative body. The process requires that the SFPD, like any other city or county agency, submit a policy to the board for approval before using any new technology. The 2019 ordinance also banned the use of facial recognition technology.

But Prop E adds a clause stipulating that the SFPD “may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approval by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy.”

In other words, the SFPD could roll out an unapproved method of surveillance, and it would have free rein to operate within the city for up to a year before ever having to ask city officials for permission. Unless the board then passes an ordinance that specifically forbids it (that is, forbids a technology that is by that point already in use), the SFPD can keep using it indefinitely.

“Let’s say the SFPD decides they want to buy a bunch of data on people’s geolocation from data brokers—they could do that,” says Saira Hussain, a staff attorney at the Electronic Frontier Foundation (EFF). “They could use drones that are flying at all times above the city. They could use the robot dogs that were piloted at the border. These are all surveillance technologies that the police doesn’t necessarily have right now, and they could acquire it and use it, effectively without any sort of accountability, under this proposition.”

If those scenarios sound implausible, it’s worth noting that they’ve already happened: As Hussain notes, the Department of Homeland Security recently tested robot dogs to help patrol the U.S./Mexico border. And in 2012, the Los Angeles County Sheriff’s Department enlisted civilian aircraft to fly over Compton and surveil the entire area.

Not to mention, federal agencies already routinely purchase people’s cell phone geolocation information and internet metadata without a warrant.

Keep reading

California Democrats Introduce Bill That Would Force Homeowners and Renters to Disclose Number of Firearms to Insurance Companies, Government

For years, California Democrats have been hostile to gun owners, frequently attempting to erode Second Amendment rights in the state.

A bill introduced in the Democrat-controlled California State Assembly on February 16 would force homeowners and renters to disclose information about the firearms they own. Assembly member Mike Gipson and State Senator Catherine Blakespear are the two leading California Democrat lawmakers pushing this legislation.

The bill, AB-3067, would add Section 2086 to the Insurance Code.

The questions ask for the number of firearms in the home, the method of storage, and how many firearms are stored in vehicles on the property, including whether or not the firearms are kept in locked containers.

Keep reading

Google Update Reveals AI Will Read All Your Private Messages

There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”

But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”

And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.

There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training, and may be seen by humans—albeit anonymized. This data will be stored for 18 months and will persist for a few days even if you disable the AI, though manual deletion is available.

Keep reading