Telegram Will Now Share Users’ IP Addresses and Phone Numbers With Governments in Response to Legal Requests

Telegram, the messaging app that once positioned itself as the rebel’s answer to Big Tech surveillance, has made a sharp U-turn on the “we protect your data at all costs” highway. On Monday, the company quietly updated its privacy policy to allow for the disclosure of user information—like those precious IP addresses and phone numbers—to law enforcement, but only, of course, if they present a valid legal request.

As we all know, no one has ever stretched the definition of “valid” to fit their agenda, right?

This revelation comes hot on the heels of a little incident back in August, when Telegram’s CEO Pavel Durov found himself in handcuffs, detained by French authorities. What was the crime? Well, it appears Telegram was accused of playing hardball with French law enforcement, refusing to hand over data, leading to Durov’s arrest. It seems law enforcement didn’t take kindly to that level of noncompliance, especially after 2,460 of its requests for information went unanswered.


Senators open inquiry into Kamala Harris’ failure to deliver $42b internet program promised to rural Americans

Nine senators have opened an inquiry into Kamala Harris’s failure to deliver on her promises as the country’s “broadband czar.”

At the heart of the inquiry is a $42.45 billion program aimed at providing rural America with high-speed internet. Despite this significant investment, the program has not connected a single individual to the internet after more than 1,000 days.

FCC commissioner Brendan Carr has been expressing his concerns about this major failure on X and recently shared a new letter penned by nine senators about the mismanagement of this program under Harris.

The letter begins by outlining her failure and comparing it to her massive failures as border czar, noting: “It appears that your performance as ‘broadband czar’ has mirrored your performance as ‘border czar,’ marked by poor management and a lack of effectiveness despite significant federal broadband investments and your promises to deliver broadband to rural areas.”

It explains how under the Infrastructure Investment and Jobs Act, the National Telecommunications and Information Administration was given $42.45 billion to carry out the Broadband Equity, Access, and Deployment program, also known as BEAD, to bring broadband access to rural areas and to other unserved communities.

The letter cites Harris’s promise that “we can bring broadband to rural America today” and points out that three years later, not a single individual in these communities has received this connectivity.


Kamala Says Quiet Part Out Loud: Government ‘Oversight’ Needed to Police the Internet and Rein in Thought Crimes

The global crackdown on free speech continues unabated, and if you think it’s relegated to countries like China, Cuba, and North Korea, think again.

A Brazilian Supreme Court panel on Monday upheld a decision to suspend Elon Musk’s social media platform X in the country.

Last Friday, Justice Alexandre de Moraes ordered the platform blocked.

Since the decision, Musk and his supporters have tried to paint de Moraes as a renegade and an authoritarian censor of political speech. But Brazil is no outlier. The European Union is also cracking down on free speech with its Digital Services Act, and we have our own free-speech haters here in the United States.

In fact, one of them is a candidate for president.

Kamala Harris came out in 2019 and openly admitted that she wants to see the same type of state-sponsored censorship implemented in the United States as what we now see taking shape in Brazil.

The government, Harris believes, should have ultimate control over what you and I are allowed to post on the internet, as well as more control over what we are allowed to see and hear. She doesn’t trust us to judge for ourselves the truth or accuracy of what we see or read online.


School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety

Imagine your search terms, keystrokes, private chats, and photographs being monitored every time they are sent. Millions of students across the country don’t have to imagine this deep surveillance of their most private communications: it’s a reality that comes with their school districts’ decision to install AI-powered monitoring software such as Gaggle and GoGuardian on students’ school-issued machines and accounts. As we demonstrated with our own Red Flag Machine, however, this software flags and blocks websites for spurious reasons and often disproportionately targets disadvantaged, minority, and LGBTQ youth.

The companies making the software claim it’s all done for the sake of student safety: preventing self-harm, suicide, violence, and drug and alcohol abuse. That is a noble goal, and suicide is the second leading cause of death among American youth aged 10-14, but no comprehensive or independent studies have shown an increase in student safety linked to the use of this software. Quite the contrary: a recent comprehensive RAND research study shows that such AI monitoring software may cause more harm than good.

That study also found that how to respond to alerts is left to the discretion of the school districts themselves. Lacking mental health resources, schools often refer these alerts to law enforcement officers who are untrained and ill-equipped to deal with youth mental health crises. When police respond to youth who are having such episodes, the resulting encounters can have disastrous results. So why are schools still using the software, when a congressional investigation found a need for “federal action to protect students’ civil rights, safety, and privacy”? Why are they trading their students’ privacy for a dubious-at-best marketing claim of safety?


Lockheed Martin Develops System to Identify and Counter Online “Disinformation,” Prototyped by DARPA

Various military units around the world (notably in the UK during the pandemic) have been getting involved in efforts that, given the goal (censorship) and the participants (the military), are destined to become controversial, if not unlawful.

But there doesn’t seem to be much desire to learn from others’ mistakes. The temptation to bring the defense establishment into the political “war on disinformation” arena seems to be too strong to resist.

Right now in the US, Lockheed Martin is close to completing a prototype that will analyze media to “detect and defeat disinformation.”

And by media, those commissioning the tool – called the Semantic Forensics (SemaFor) program – mean everything: news, the internet, and even entertainment media. Text, audio, images, and video that are part of what’s considered “large-scale automated disinformation attacks” are supposed to be detected and labeled as false by the tool.

The development process is almost over, and the prototype is used by the US Defense Department’s Defense Advanced Research Projects Agency (DARPA).


California’s New AI Law Proposals Could Impact Memes

California’s state legislature has passed several bills related to “AI,” including a ban on deepfakes “around elections.”

The lawmakers squeezed these bills in during the last week of the current sessions of the state Senate and Assembly, and it is now up to Governor Gavin Newsom (who has called for such laws) to sign or veto them by the end of this month.

One of the likely future laws is the Defending Democracy from Deepfake Deception Act of 2024, which aims to regulate how sites, apps, and social media (defined for the purposes of the legislation as large online platforms) should deal with content that the bill considers to be “materially deceptive related to elections in California.”

Namely, the bill wants such content blocked, specifying that this refers to “specified” periods – 120 days before and 60 days after an election. And campaigns will have to disclose if their ads contain AI-altered content.

Now comes the hard part – what qualifies as deceptive enough to be blocked, in order to “defend democracy from deepfakes”? The “definition” is so broad that it could be interpreted all the way to banning memes.

For example, who’s to say whether satirical content that shows a candidate “saying something (they) did not do or say” is “reasonably likely” to end up harming the reputation or prospects of that candidate? And who’s to judge what “reasonably likely” means? But the bill uses these terms, and there’s more.

Also outlawed would be content showing an election official “doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.”

If the bill gets signed into law on September 30, given the time-frame, it would comprehensively cover not only the current campaign, but the period after it.


BRAZILIAN CONFUSION: Hefty Fines for Accessing Social Media Platform X via VPN Were NOT Rescinded – What Changed Was that VPNs Are Not Outright Banned In the Country Anymore

Brazil, thy name is confusion.

There is a saying here in the ‘tropical country’: ‘Brazil is for professionals.’

Being born and raised here, we are used to a maze of bureaucracy and a general lack of clarity in all public matters.

Following the blocking of the social platform X in Brazil, a question raised by freedom lovers worldwide was the use of VPNs by Brazilian users to bypass this spurious prohibition.

This was highly anticipated by our Supreme Court overlords, who initially decided to prohibit the use of VPNs, as well as instituting a 50,000-real (over $9,000) fine for using VPNs to access X.

This decision was later partially revised, and that’s where the confusion started.

Some social media users (I saw it posted by DogeDesigner/@cb_doge and also by Charlie Kirk/@Charliekirk11) are suggesting that the fines for accessing X via VPN were rescinded – which would in fact be a victory for free speech.

But that is not the case – as far as I can tell.

In fact, what changed is the previous decision making VPNs forbidden in Brazil. That is no longer the case, which is good news for the 75 million VPN users in Brazil.

But it is still forbidden to use this technology to access X, and the fines still stand, although there is some pushback from the Order of Attorneys of Brazil (OAB).


NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else that hosts or shares user-generated content – will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn’t create the replica and have no way to verify whether it was authorized. NO FAKES thereby creates a classic “heckler’s veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 


Shut It Down

The recent protests in Bangladesh have led to another example of a national government shutting down the internet and telecommunications. The Bangladeshi government claimed that the shutdown was implemented to stop misinformation. In 2023 the internet was shut down in Libya after a natural disaster to prevent criticism of the local authorities and their response to the emergency. To date, thirty-nine nations across the world have at some point shut down the internet for one reason or another. What was once a speculative concept has become a practice that risks soon being accepted as normal.

The United Nations has made access to the internet a right; intentionally denying individuals access to the internet is considered a human rights violation. Yet when it comes to human rights, national governments tend to invoke international bodies such as the UN as a reason for action while dismissing such “rules” for themselves. The rules are bent, ignored, and broken whenever national governments see fit. According to Access Now, in 2023 alone there were 283 known internet shutdowns used by governments against their citizens, with India the most prolific. The world’s biggest democratic government sees fit to exercise control over the information and communications of those it rules.

Large corporations tend to work with national governments so that they may operate in those nations. Russia and China have provisions to isolate their internet access from the rest of the world, along with “kill switches.” Australia has passed laws allowing its federal government to “shut down the net” should its leadership see fit. The potential exists for most nations to do this. All that is needed is a crisis. The provision for a “threat to national interest” allows governments to cut individuals off from the world and one another.

In Syria the internet was even shut down during high school exams in an attempt to stop students from cheating. Given the extreme rigidity of study and examination in nations like South Korea, such a reason could be used there as well. Cultural and state-directed interests are going to be key reasons why information and communications are controlled and denied. It will vary according to the self-interest of particular regimes and national flavors.

The U.S. government attempted to legislate the right to use an internet “kill switch,” but scrutiny prevented it from passing. With populist leaders and the panic-mongering of ever-changing crises on the horizon, it is likely that such an option will someday be on the table. It is no surprise that the United Kingdom has it in its power to impose such a shutdown. The public is assured that failsafes exist to prevent abuse (though given the British government’s fear of memes, it may not really take much).

In a crisis, information and communications are crucial. Advocates for state power and a strong central authority agree, which is why they don’t want them to spread freely. The belief that angels rule the nation and wise magicians control the economy is pervasive and resonates the world over. Sharing information and communicating freely are sacred acts of defiance against evil and authoritarianism in its many variants.


Fascism 2.0 – The changing face of social media censorship

Facebook makes only about £34 a year from the average customer in the UK – a little under £3 a month (and that’s before costs) – so clearly there is no headroom or motivation for a human level of customer service or attention. The user is not the customer; rather, they are the product whose data is sold to advertisers.

Thus, users do not have a direct customer relationship with the platform. The network is not directly incentivised to “care” about the user before the advertiser. And no matter where you lie on the spectrum between “free speech absolutism” and “private entities have the right to censor any user”, with such low margins it is inevitable machine processing will have to be used to moderate posts and deal with the customer interface.

But it is a fact that the user-processing and management capabilities social networks are now evolving are being utilised in a variety of ways beyond just moderation. And it is also true that this automated processing is being done at scale and is now applied to every post every member makes. 68% of US voters are on Facebook. In the UK it’s 66%, and in France 73.2%. The figures are similar for every democratic nation in the West. So it is vitally important that the applied rules be politically neutral.

The power that comes with the ability to machine-process every user’s posts is far deeper and more profound than perhaps many realise. And while it can’t directly dictate what users write in their messages, it has the capacity to fundamentally shape which messages gain traction.

Social media services have become de facto town squares, and most would agree their corporate owners should avoid ever putting a hand on the scales and influencing politics.

Additionally, as everyone who uses Facebook is aware, especially when it comes to politically sensitive topics, the system will limit an individual’s reach, sometimes to an extreme degree. Or that user will simply be banned for a period of time, or banned from the network entirely.

So we can ask: since the social media corporations have so much censorship power, how do we know they aren’t engaging in unethical political interference? Can they be trusted with the responsibility?

I will return to this question, but it’s clear that trust in these corporations is deeply misplaced.
