BRAZILIAN CONFUSION: Hefty Fines for Accessing Social Media Platform X via VPN Were NOT Rescinded – What Changed Was that VPNs Are Not Outright Banned In the Country Anymore

Brazil, thy name is confusion.

There is a saying here in the 'tropical country': 'Brazil is for professionals.'

Being born and raised here, we are used to a maze of bureaucracy and a general lack of clarity in all public matters.

Following the blocking of the social platform X in Brazil, a question raised by freedom lovers worldwide was whether Brazilian users could use VPNs to bypass this spurious prohibition.

This was highly anticipated by our Supreme Court overlords, who initially decided to prohibit the use of VPNs and to institute a fine of 50,000 reais (over $9,000) for using VPNs to access X.

This decision was later partially revised, and that's where the confusion started.

Some social media users (I saw it posted by DogeDesigner/@cb_doge and also by Charlie Kirk/@Charliekirk11) are suggesting that the fines for accessing X via VPN were rescinded – which would in fact be a victory for free speech.

But that is not the case – as far as I can find.

In fact, what changed is the earlier decision to ban VPNs outright in Brazil. That ban is no longer in place, which is good news for the 75 million VPN users in the country.

But it is still forbidden to use this technology to access X, and the fines still stand, although there is some pushback from the Order of Attorneys of Brazil (OAB).

Keep reading

Academic Freedom Around the World Declining for First Time Since WWII

The halls of academia have long been regarded as bastions of free thought and scientific inquiry. However, a recent study paints a concerning picture of dwindling academic freedom worldwide. This shift, occurring for the first time since World War II, threatens to undermine global innovation at a time when creative solutions may be needed more than ever.

The research, conducted by a team of international researchers, reveals that after decades of steady improvement, global academic freedom has begun to decline over the past decade. This shift represents the first significant downturn since World War II and raises serious concerns about the future of innovation and scientific advancement.

Academic freedom, the right of scholars to pursue research, teach, and express ideas without undue interference, has long been considered a cornerstone of scientific progress. However, its importance to innovation has never been quantitatively measured on a global scale until now. The study’s findings not only confirm the crucial role of academic freedom in driving innovation but also sound a warning about the potential consequences of its current decline.

To investigate this relationship, the researchers analyzed data from 157 countries over a 115-year period, from 1900 to 2015. They used the Academic Freedom Index (AFI) to measure the level of academic freedom in each country and compared it to innovation output, measured by the number of patent applications and citations.

The results, published in PLOS One, were striking. Countries with higher levels of academic freedom consistently produced more patents and received more citations on those patents. Specifically, when a country’s academic freedom increased by one standard deviation, the number of patent applications rose by 41% two years later, and the number of citations increased by 29% five years later.

However, the most alarming finding was the recent downward trend in academic freedom. After steadily increasing from the 1940s to the 2010s, global academic freedom began to decline in the last decade. This reversal was observed not only globally but also among the 25 leading countries in science.

Based on the study’s findings, the researchers project that the recent decrease in academic freedom could lead to a substantial reduction in innovation output in the coming years. This could manifest as fewer new patents and a decrease in impactful research, potentially slowing technological progress and economic growth.

Keep reading

How AI’s left-leaning biases could reshape society

The new artificial intelligence (AI) tools that are quickly replacing traditional search engines are raising concerns about potential political biases in query responses.

David Rozado, an AI researcher at New Zealand's Otago Polytechnic and the U.S.-based Heterodox Academy, recently analyzed 24 leading language models, including OpenAI's GPT-3.5, GPT-4 and Google's Gemini.

Using 11 different political tests, he found the AI models consistently lean to the left. In the words of Rozado, the “homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy.” 

LLMs, or large language models, are artificial intelligence programs that use machine learning to understand and generate language.
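For illustration only – this is not Rozado's actual methodology, and the items below are hypothetical – a "political lean" score from a model's answers to survey-style statements might be tallied along these lines: each statement is tagged with the direction agreement pulls, and the model's Likert-scale responses are averaged.

```python
# Hypothetical sketch of scoring survey responses for political lean.
# Each item is tagged with the direction that agreeing pulls the score:
# -1 = left, +1 = right. Responses are Likert values from -2 (strongly
# disagree) to +2 (strongly agree).

def lean_score(items, responses):
    """Return a mean lean in [-2, 2]; negative = left, positive = right."""
    if len(items) != len(responses):
        raise ValueError("one response per item required")
    total = sum(direction * resp
                for (_, direction), resp in zip(items, responses))
    return total / len(items)

# Two invented example items (assumptions, not from any real test):
items = [
    ("The state should redistribute wealth.", -1),  # agreeing pulls left
    ("Markets allocate resources best.", +1),       # agreeing pulls right
]

# A model that strongly agrees with the first and mildly disagrees
# with the second would score negative, i.e. left-leaning:
print(lean_score(items, [2, -1]))  # -1.5
```

A real study would of course administer many items across several established tests and compare models against human baselines; the point here is only that the aggregate direction, not any single answer, is what such analyses measure.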

The transition from traditional search engines to AI systems is not merely a minor adjustment; it represents a major shift in how we access and process information, Rozado also argues.

“Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information,” he says. “However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources.”

He also argues the shift in the sourcing of information has “profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society,” with the U.S. presidential election between the GOP’s Donald Trump and the Democrats’ Kamala Harris now just over two months away and expected to be close. 

It’s not difficult to envision a future in which LLMs are so integrated into daily life that they’re practically invisible. After all, LLMs are already writing college essays, generating recommendations, and answering important questions.

Unlike the search engines of today, which are more like digital libraries with endless rows of books, LLMs are more like personalized guides, subtly curating our information diet. 

Keep reading

Backyard Privacy in the Age of Drones

Police departments and law enforcement agencies are increasingly collecting personal information using drones, also known as unmanned aerial vehicles. In addition to high-resolution photographic and video cameras, police drones may be equipped with myriad spying payloads, such as live-video transmitters, thermal imaging, heat sensors, mapping technology, automated license plate readers, cell site simulators, cell phone signal interceptors and other technologies. Captured data can later be scrutinized with backend software tools like license plate readers and face recognition technology. There have even been proposals for law enforcement to attach lethal and less-lethal weapons to drones and robots. 

Over the past decade or so, police drone use has dramatically expanded. The Electronic Frontier Foundation’s Atlas of Surveillance lists more than 1,500 law enforcement agencies across the US that have been reported to employ drones. The result is that backyards, which are part of the constitutionally protected curtilage of a home, are frequently being captured, either intentionally or incidentally. In grappling with the legal implications of this phenomenon, we are confronted by a pair of U.S. Supreme Court cases from the 1980s: California v. Ciraolo and Florida v. Riley. There, the Supreme Court ruled that warrantless aerial surveillance conducted by law enforcement in low-flying manned aircraft did not violate the Fourth Amendment because there was no reasonable expectation of privacy from what was visible from the sky. Although there are fundamental differences between surveillance by manned aircraft and drones, some courts have extended the analysis to situations involving drones, shutting the door to federal constitutional challenges.

Yet Americans, legislators, and even judges have long voiced serious worries about the threat of rampant and unchecked aerial surveillance. A couple of years ago, the Fourth Circuit found in Leaders of a Beautiful Struggle v. Baltimore Police Department that a mass aerial surveillance program (using manned aircraft) covering most of the city violated the Fourth Amendment. The exponential surge in police drone use has only heightened the privacy concerns underpinning that and similar decisions. Unlike the manned aircraft in Ciraolo and Riley, drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. Additionally, drones are smaller and easier to operate and can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters effectively functions as notice to those who are being watched, whereas drones can easily record information surreptitiously.

Keep reading

Military Threat: China’s AI Robots

Last week, the Chinese Communist Party (CCP) organized the World Robot Conference – where they showcased the latest advancements that China’s robotics industry has produced over the past several years.

According to the CCP, China’s humanoid robots are “catching up fast with global rivals,” with advances such as the incorporation of AI into some of its robots that have military capabilities.

We’re picturing mindless robot patrols enforcing the next ‘welded in’ pandemic lockdown, with deadl(ier) results.

As Anders Corr notes in The Epoch Times, China’s humanoid robots on display at the conference could easily be equipped with weapons and probably already have been. The People’s Liberation Army has demonstrated armed flying drones and quadruped AI robots that resemble dogs with machine guns mounted to their backs. The killer robot dogs can reportedly fire their weapons autonomously.

China’s rapid rise in robotics is state-directed and subsidized to the tune of over $1.4 billion, according to an official announcement in 2023. In 2012, China installed fewer than 15 percent of industrial robots globally. By 2022, that number increased to over 50 percent, with China installing over 250,000, the most in the world. By comparison, Japan and the United States installed just about 50,000 and 40,000, respectively.

In 2016, a Chinese company bought Germany’s Kuka, one of the world’s three leading industrial robot makers. The other two are Japan’s Fanuc and Switzerland’s ABB. Tesla is also a leading robot maker. It plans to deploy 1,000 humanoid Optimus robots in Tesla factories in 2025. Given the close connections of all four of these companies to China, there is a significant risk of technology transfers and IP theft, further driving China’s rapid rise in the robotics space.

Keep reading

Child abuse images removed from AI image-generator training source, researchers say

Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.

The LAION research dataset is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children.

That December report led LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, LAION said in a blog post that it worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a cleaned-up dataset for future AI research.

Stanford researcher David Thiel, author of the December report, commended LAION for significant improvements but said the next step is to withdraw from distribution the “tainted models” that are still able to produce child abuse imagery.

Keep reading

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else who hosts or shares user-generated content – will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn’t create the replica and have no way to confirm whether it was authorized or to otherwise verify the claim. NO FAKES thereby creates a classic “heckler’s veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

Keep reading

Ten Reasons Why Human Flourishing Has Reversed

Until recently, most of us welcomed innovation, but something changed. And now a huge number of people are anxious and fearful about the same tech companies they once trusted.

What caused this shift?

That’s a big issue. Unless we understand how things went wrong, we can’t begin to fix them. Otherwise we’re just griping—about bad software or greedy CEOs or whatever.

It’s now time to address the causes, not just complain about symptoms.

Once we do that, we can move to the next steps, namely outlining a regimen for recovery and an eventual cure.

So let me try to lay out my diagnosis as clearly as I can. Below are the ten reasons why tech is now breaking bad.

I apologize in advance for speaking so bluntly. Many will be upset by my frankness. But the circumstances—and the risks involved—demand it.

Keep reading

OpenAI Agrees To Run GPT Models Past US Government To ‘Evaluate’ For Safety

AI companies OpenAI and Anthropic have agreed to run their new AI models past the US government’s AI Safety Institute to evaluate their ‘capabilities and risks,’ as well as ‘collaborate on methods to mitigate potential issues,’ Bloomberg reports.

“Safety is essential to fueling breakthrough technological innovation,” said Elizabeth Kelley, director of the US AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Under agreements announced Thursday by the Commerce Department’s National Institute of Standards and Technology (NIST), in “close collaboration with the UK’s AI Safety Institute,” the government will work to provide feedback on potential safety improvements.

“We strongly support the US AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models,” said OpenAI Chief Strategy Officer Jason Kwon. “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

Anthropic also said it was important to build out the capacity to effectively test AI models. “Safe, trustworthy AI is crucial for the technology’s positive impact,” said Jack Clark, Anthropic co-founder and head of policy. “This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.” -Bloomberg

The US AI Safety Institute was launched in 2023 as part of the Biden-Harris administration’s Executive Order on AI. The group is tasked with developing the testing, evaluations and guidelines for responsible AI innovation.

Keep reading

A.I – Is This The Beginning Of The End For The Human Race?

The question of whether AI could cause human extinction is a subject of intense debate among experts, futurists, and ethicists. While the potential risks of AI are significant, the possibility of it leading to human extinction depends on several factors, including how AI is developed, governed, and integrated into society. Here’s a breakdown of the possible risks versus benefits of this technology, and considerations concerning its future use in society:

Since 1950, when Alan Turing wrote his paper ‘Computing Machinery and Intelligence’, the world has been debating both if and when A.I. would be the saviour of human existence or its annihilator. In the hands of lunatics who have a god complex, and only wish to follow their own twisted agenda, it becomes an existential threat to us all. Do we just allow this technology to seep into all aspects of our personal lives, our financial futures, and even the health systems that we are told to trust in?

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” – Stephen Hawking, speaking at the launch of the Leverhulme Centre for the Future of Intelligence, Oct. 19, 2016.

Keep reading