Comprehensive data privacy laws go into effect in 8 more states this year

This year, comprehensive privacy laws are going into effect in eight states to regulate how businesses handle digital information and to give consumers more protections over their personal data.

The laws in Delaware, Iowa, Minnesota, Nebraska, New Hampshire, New Jersey and Tennessee have taken effect already this year, according to a database from the International Association of Privacy Professionals’ Westin Research Center. Maryland’s privacy law, signed by Democratic Gov. Wes Moore last year, will go into effect Oct. 1.

Privacy laws enacted in Indiana, Kentucky and Rhode Island will go into effect next year.

Several other states are considering comprehensive privacy bills during this year’s legislative sessions. They include Massachusetts, Michigan, North Carolina, Pennsylvania and Wisconsin.

When a person visits a website, applies for a job or logs into an online portal, they may be sharing their personal information. Comprehensive privacy laws can apply to a wide range of companies that participate in this kind of data collection.

These laws generally include two types of provisions — those related to consumer rights and those that establish business obligations, according to the association.

Under each of the new laws, consumers have the right to control when and how their data is collected and shared. Some of those provisions include the right to delete data from a company’s database, the ability to opt out of sharing sensitive information and the right to prohibit a company from selling their data.

The new measures also require businesses to ask consumers if they want to opt in to data collection. In some states, businesses are required to complete consumer data risk assessments and identify ways in which discrimination could take place. Some companies also may be required to limit how consumer data is processed.

Keep reading

Google’s Android Lockdown: Are You Really In Control Of Your Phone?

Android’s new rule requires all app developers to submit personal information to Google, even for apps outside the Play Store. Critics argue this threatens user freedom and ignores solutions…

Google announced on August 25 that Android, its mobile operating system, will require all app developers to verify their identity with the company before their apps can run on “certified Android devices.”

While this might sound like a common-sense policy, the new standard will apply not just to apps downloaded from the Google Play Store but to all apps, even those that are “side-loaded,” meaning installed directly onto devices, bypassing the Play Store. Such apps can be found online in GitHub repositories or on project websites and installed on Android devices directly by downloading the installation files (known as APKs).

What this means is that if there is an application Google does not like, whether because it does not conform to the company’s policies, politics or economic incentives, it can simply keep you from running that application on your own device. Google is locking Android devices down from running applications not within its purview. The ask? All developers, whether submitting their apps through the Play Store or not, must give their personal information to Google.

The decision raises the question: if you cannot run whatever app you want on your device without Google’s permission, is it really your device? How would you respond if Microsoft decided you could only install Windows programs from the Microsoft Store?

The move has of course made news in tech and cybersecurity media and caused quite a stir as it has profound consequences for the free and open web. For years, Android has been touted as an open source operating system, and through this strategy has gained massive distribution throughout the world with users in developing countries where Apple’s “walled garden” model and luxury devices are not affordable.

This new policy will tighten control over applications and their developers, and threatens the freedom to run whatever software you like on your own device in a very subversive and legalistic way. Because of Google’s influence over the Android family of phones, the consequences of this policy are likely to be felt by the majority of users and devices throughout the world.

Keep reading

Microsoft Word To Save New Files to the Cloud by Default

Microsoft is preparing to change how documents are saved in Word for Windows, shifting new file storage to the cloud by default.

Instead of asking users to activate AutoSave or select a cloud location manually, Word will now store all newly created documents directly in OneDrive or another designated cloud service automatically.

Raul Munoz, a product manager on Microsoft’s Office shared services and experiences team, described the change by saying, “We are modernizing the way files are created and stored in Word for Windows. Now you don’t have to worry about saving your documents: Anything new you create will be saved automatically to OneDrive or your preferred cloud destination.”

Currently being rolled out to Microsoft 365 Insiders, this new setup is presented as a way to prevent lost work and provide immediate access to files across mobile platforms and browsers.

However, for anyone working outside Microsoft’s cloud ecosystem, this change introduces additional steps to avoid online storage.

The update also comes with adjustments to how documents are named. Rather than appending sequential numbers to new files, Word will now assign file names based on the date of creation.

Users will have the option to set a preferred default save location or opt out of automatic cloud saves entirely, though doing so requires manual reconfiguration.

Microsoft has been steadily nudging its user base toward cloud reliance. AutoSave already defaults to cloud storage, and persistent prompts in Windows have encouraged, or pressured, users to turn on OneDrive backups.

These reminders have drawn complaints, especially from those who feel Microsoft is eroding straightforward local file access.

Keep reading

Is your baby, doorbell or security cam spying for China? Florida’s top cop wants to know

Florida’s top law enforcement official has issued a subpoena to Lorex Corp., a top maker of baby monitors, security and doorbell cameras, demanding documents and information about its corporate structure, whether it has any ties to Chinese Communist firms and whether Americans’ data or privacy can be breached. Those documents could provide evidence of illegal activity.

Attorney General James Uthmeier’s office told Just the News he believes Lorex, though North American-based, has imported large quantities of equipment from a Chinese manufacturer banned from the United States over alleged human rights abuses and national security risks.

A spokesperson for Lorex did not immediately respond to a written request for comment sent via email to its corporate public relations account.

Probe into whether products are relabeled from black-listed maker

“Lorex Corporation is importing millions of devices from CCP-controlled Dahua, which has been banned in the United States for human rights abuses and national security risks,” the office said in a statement to Just the News. “AG Uthmeier must discover whether Lorex is selling re-labeled Dahua products which would introduce a range of cybersecurity vulnerabilities that would give the CCP a direct line into the homes and private lives of millions of Floridians.”

Dahua, a Chinese technology company, acquired the Canadian-based Lorex in 2018 but sold it to Taiwan-based Skywatch nearly three years ago after Dahua was blacklisted in the United States.

The Pentagon in 2022 listed Dahua as one of 13 companies doing business with the Chinese military and banned its products in the United States. Earlier, the Commerce Department in 2020 identified Dahua as one of several Chinese firms involved in human rights abuses with alleged slave labor involving Uighur minorities.

In 2023, the Australian government expressed alarm when it found about 1,000 security cameras in its various offices tied to Dahua and another Chinese-tied firm, ordering a sweeping review of all security equipment in its government facilities.

The Florida attorney general’s subpoena was issued Friday, and shortly afterwards, Uthmeier put out a statement on X advising Florida consumers about his actions and possible vulnerabilities in Lorex products they may own.

“What consumers do not know is that data might be shared with the Chinese military,” he said. “Imagine that. Footage of your baby in a crib going to the Chinese government. This is unacceptable. It is a national security issue, and it will not be tolerated.”

Keep reading

University of Melbourne Broke Victoria’s Privacy Law by Using Wi-Fi to Monitor Protesters on Campus

The University of Melbourne’s covert surveillance tactics during a campus protest have been declared unlawful, following a ruling by Victoria’s deputy information commissioner that the institution broke the state’s privacy laws.

The decision condemns the university’s quiet use of digital tracking tools against students and staff involved in a pro-Palestine demonstration, raising serious concerns about the growing use of surveillance technologies in academic settings.

We obtained a copy of the decision for you here.

Prompted by media attention earlier this year, the investigation focused on how the university responded to a May protest held inside the Arts West building.

Rather than relying on open dialogue or standard disciplinary processes, university officials resorted to monitoring individuals through the campus Wi-Fi network, matching connection data with student ID photos and security camera recordings.

A total of 22 students were identified through this process, all without prior warning or a clear legal basis. Staff were surveilled as well, with the contents of ten employees’ email accounts examined to uncover involvement in the demonstration. Three of them later received formal warnings.

Although the commissioner’s office accepted that CCTV footage was used within legal boundaries, it found the use of Wi-Fi tracking in disciplinary investigations to be unjustified.

The monitoring of staff emails was also flagged for breaching expected privacy norms.

Keep reading

Michigan Supreme Court Rules Unrestricted Phone Searches Violate Fourth Amendment

The Michigan Supreme Court has drawn a firm line around digital privacy, ruling that police cannot use overly broad warrants to comb through every corner of a person’s phone.

In People v. Carson, the court found that warrants for digital devices must include specific limitations, allowing access only to information directly tied to the suspected crime.

We obtained a copy of the opinion for you here (the opinion starts on page 5).

Michael Carson became the focus of a theft investigation involving money allegedly taken from a neighbor’s safe.

Authorities secured a warrant to search his phone, but the document placed no boundaries on what could be examined.

It permitted access to all data on the device, including messages, photos, contacts, and documents, without any restriction based on time period or relevance. Investigators collected over a thousand pages of information, much of it unrelated to the accusation.

The court ruled that this kind of expansive warrant violates the Fourth Amendment, which requires particularity in describing what police may search and seize.

The justices said allowing law enforcement to browse through an entire phone without justification amounts to an unconstitutional exploratory search.

Smartphones now serve as central hubs for people’s lives, containing everything from health records and banking details to travel histories and intimate conversations.

Searching a device without limits can expose a volume and variety of personal information that far exceeds what a physical search could reveal.

Groups including the Electronic Frontier Foundation, ACLU National, and the ACLU of Michigan intervened in the case, filing a brief that called on the court to adopt strict rules for digital searches.

Keep reading

The Right to Be Left Alone

What if the federal government captures in real time the contents of every telephone call, email and text message and all the fiber-optic data generated by every person and entity in the United States 24/7? What if this mass surveillance was never authorized by any federal law and tramples the Fourth Amendment?

What if this mass surveillance has come about by the secret collusion of presidents and their spies in the National Security Agency and by the federal government forcing the major telephone and computer service providers to cooperate with it? What if the service providers were coerced into giving the feds continuous physical access to their computers and thus to all the data contained in and passing through those computers?

What if President George W. Bush told the NSA that since it is part of the Defense Department and he was the commander in chief of the military, NSA agents could spy on anyone, notwithstanding any court orders or statutes that prohibited it? What if Bush believed that his orders to the military were not constrained by the laws against computer hacking that Congress had written or the interpretations of those laws by federal courts or even by the Constitution?

What if Congress has written laws that all presidents have sworn to uphold and that require a warrant issued by a judge before the NSA can spy on anyone but Bush effectively told the NSA to go through the motions of getting a warrant while spying without warrants on everyone in the U.S. all the time? What if Presidents Barack Obama, Joe Biden and Donald Trump have taken the same position toward the NSA and ordered or permitted the same warrantless and lawless spying?

What if the Constitution requires warrants based on probable cause of criminal behavior before surveillance can be conducted but Congress has written laws reducing that standard to probable cause of communicating with a foreign national? What if a basic principle of constitutional law is that Congress is subject to the Constitution and therefore cannot change its terms or their meanings?

What if the Constitution requires that all warrants particularly describe the place to be searched or the person or thing to be seized? What if the warrants Congress permits the NSA to use violate that requirement by permitting a federal court — the FISA Court — to issue general warrants? What if general warrants do not particularly describe the place to be searched or the person or thing to be seized but rather authorize the bearer to search indiscriminately through service providers’ customer data?

What if the government has no moral, constitutional or legal right to personal information about and from all of us without a valid search warrant consistent with constitutional requirements?

Keep reading

Chrome VPN Extension With 100k Installs Screenshots All Sites Users Visit

FreeVPN.One, a Chrome extension with over 100,000 installs and a verified badge on the Chrome Web Store, has been exposed by researchers for taking screenshots of the sites users visit and exfiltrating them to remote servers.

A Koi Security investigation of the VPN tool reveals that it has been capturing full-page screenshots from users’ browsers, logging sensitive visual data like personal messages, financial dashboards, and private photos, and uploading it to aitd[.]one, a domain registered by the extension’s developer.

Koi Security’s forensic analysis showed that the surveillance mechanism is triggered automatically, within seconds of loading any web page. Using Chrome’s privileged chrome.tabs.captureVisibleTab() API, screenshots are silently taken in the background and bundled with metadata including page URLs, tab IDs, and unique user identifiers. This data is then transmitted to the attacker-controlled server aitd.one/brange.php, without user interaction or visible indication.

The spying behavior is powered by a two-stage architecture:

  1. A content script injected into every visited site using broad match patterns (http://*/*, https://*/*).
  2. A background service worker that listens for an internal captureViewport message and initiates the screenshot capture.
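For context, this kind of blanket injection is declared in a Chrome extension’s manifest. The following is a minimal, hypothetical sketch (the extension name and script file names are illustrative, not taken from FreeVPN.One’s actual manifest) of how a Manifest V3 extension would register a content script on every site alongside a background service worker with the host permissions that chrome.tabs.captureVisibleTab() generally needs:

```json
{
  "manifest_version": 3,
  "name": "Example VPN Extension",
  "version": "1.0",
  "permissions": ["tabs"],
  "host_permissions": ["http://*/*", "https://*/*"],
  "content_scripts": [
    {
      "matches": ["http://*/*", "https://*/*"],
      "js": ["content.js"]
    }
  ],
  "background": {
    "service_worker": "background.js"
  }
}
```

With a manifest like this, a background worker can respond to a message from the injected content script by calling chrome.tabs.captureVisibleTab(), the pattern Koi Security describes, and the user sees no visible indication that a capture occurred.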

The extension also promotes an “AI Threat Detection” feature which, when clicked, captures another screenshot and sends it to aitd.one/analyze.php. However, the real issue lies in the fact that screenshots are being taken long before users ever interact with this feature, making the user interface a decoy.

Koi Security further explains that the latest version of the extension, v3.1.4, introduced AES-256-GCM encryption with RSA key wrapping to obfuscate the exfiltrated data, making it harder to detect or analyze with network monitoring tools.

Keep reading

Big Tech Could Soon Use Brain Chips To Read Your Innermost Thoughts: Study

A new study out of Stanford University reveals that neural implants, also known as brain-computer interfaces (BCIs), might not just help paralyzed individuals communicate – they could potentially lay bare your innermost thoughts to Big Tech.

Published in the medical journal Cell, the research shows these devices can decode brain signals to produce synthesized speech faster and with less effort.

BCIs work by using tiny electrode arrays to monitor activity in the brain’s motor cortex, the region controlling speech-related muscles. Until now, the tech relied on signals from paralyzed individuals actively trying to speak. The Stanford team, however, discovered that even imagined speech generates similar, though weaker, signals in the motor cortex. With the help of artificial intelligence, they translated those faint signals into words with up to 74% accuracy from a 125,000-word vocabulary.

“We’re recording the signals as they’re attempting to speak and translating those neural signals into the words that they’re trying to say,” said Erin Kunz, a postdoctoral researcher at Stanford’s Neural Prosthetics Translational Laboratory.

But this technological leap has raised red flags among critics who warn of a dystopian future where your private thoughts could be exposed.

Nita Farahany, a Duke University law and philosophy professor and author of The Battle for Your Brain, sounded the alarm telling NPR, “The more we push this research forward, the more transparent our brains become.”

Farahany expressed concern that tech giants like Apple, Google, and Meta could exploit BCIs to access consumers’ minds without consent, urging safeguards like passwords to protect thoughts meant to stay private.

“We have to recognize that this new era of brain transparency really is an entirely new frontier for us,” Farahany said.

While the world fixates on artificial intelligence, some of the tech industry’s heaviest hitters are pouring billions into BCIs. Elon Musk, the world’s richest man, has raised $1.2 billion for his Neuralink venture, which is now conducting clinical trials with top institutions like the Barrow Neurological Institute, The Miami Project to Cure Paralysis, and the Cleveland Clinic Abu Dhabi.

Now, another tech titan is entering the fray.

Keep reading

Thousands of Grok Chats Now Searchable on Google

Hundreds of thousands of conversations that users had with Elon Musk’s xAI chatbot Grok are easily accessible through Google Search, reports Forbes.

Whenever a Grok user clicks the “share” button on a conversation with the chatbot, it creates a unique URL that the user can use to share the conversation via email, text, or on social media. According to Forbes, those URLs are being indexed by search engines like Google, Bing, and DuckDuckGo, which in turn lets anyone look up those conversations on the web. 
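Whether a shared page gets indexed is generally controlled by the publisher. As a hedged illustration of the standard mechanism (not xAI’s actual markup), a share page can opt out of search indexing with a robots meta tag, or the server can send the equivalent X-Robots-Tag HTTP header:

```html
<!-- Hypothetical share-page markup: tells crawlers not to index this URL -->
<meta name="robots" content="noindex">
```

Pages served without such a directive, and reachable via public links, are fair game for crawlers, which is apparently how the shared Grok conversations ended up in search results.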

Users of Meta‘s and OpenAI‘s chatbots were recently affected by a similar problem, and like those cases, the chats leaked by Grok give us a glimpse into users’ less-than-respectable desires — questions about how to hack crypto wallets; dirty chats with an explicit AI persona; and asking for instructions on cooking meth. 

xAI’s rules prohibit the use of its bot to “promote critically harming human life” or developing “bioweapons, chemical weapons, or weapons of mass destruction,” though that obviously hasn’t stopped users from asking Grok for help with such things anyway.

According to conversations made accessible by Google, Grok gave users instructions on making fentanyl, listed various suicide methods, handed out bomb construction tips, and even provided a detailed plan for the assassination of Elon Musk.

xAI did not immediately respond to a request for comment. We’ve also asked when Grok’s shared conversations began being indexed.

Late last month, ChatGPT users sounded the alarm that their chats were being indexed on Google, which OpenAI described as a “short-lived experiment.” In a post Musk quote-tweeted with the words “Grok ftw,” Grok explained that it had “no such sharing feature” and “prioritize[s] privacy.”

Keep reading