Plutocracy Uses Technology to Clobber the Poor

Technology wielded by oligarchic government is a nightmare. From killer police dog robots to facial recognition in public housing, it’s not just the poor who are targets; it’s everybody. But the poor and the left get smacked with it the worst. It’s open season on antifa, a season inaugurated by killer Kyle Rittenhouse shooting two Black Lives Matter protesters to death, getting off scot-free and becoming the darling of far-right celebrities. As for how technology crushes the poor, just take the case of 33-year-old Tania Acabou, who found herself a victim of constant surveillance in her public housing project.

Cameras bought through the Department of Housing and Urban Development have been installed in public housing, supposedly to fight crime. Instead, the poor domiciled there find themselves under continuous watch. “It got to the point where it was like harassment,” Acabou told the Washington Post after being evicted from her New Bedford, Massachusetts, project due to this surveillance. The Post reported May 16 that Acabou received “an eviction notice in 2021 after the housing authority…used cameras to investigate her over several months…The housing authority believed her ex was living at the house without contributing rent [he was babysitting their kids]…violating a policy that restricts overnight visitors to 21 nights per year.”

In a Steubenville, Ohio project, the Post added, “One man was filmed spitting in a hallway. A woman was recorded removing a cart from a communal laundry room. Footage in both cases was presented to a judge to help evict the residents in court.” So if you’re poor, you live under a security microscope with the excuse that it fights crime, when really it just fights you. One woman, threatened with eviction “for lending her key fob to an unauthorized guest,” explained that her declining vision necessitated a friend bringing her groceries. She was allowed to stay.

HUD used federal crime-fighting grants to buy the cameras. But as anyone with a brain can deduce, this surveillance is aimed at public housing residents, not criminals. Or maybe, as far as HUD’s concerned, the residents are the criminals… While several states have limited police use of facial recognition because it produces false matches, as the Post notes, HUD does not appear to have caught on. “In rural Scott County, Va., cameras equipped with facial recognition scan everyone who walks past them, looking for people barred from public housing,” according to the Post. “In New Bedford, Mass., software is used to search hours of recordings to find any movement near the doorways of residents suspected of violating overnight guest rules.” So if you reside in public housing or visit someone there, you are treated as a potential lawbreaker.
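That New Bedford detail, software combing hours of footage for any movement near particular residents’ doorways, describes a very ordinary computer-vision pattern. The sketch below is purely illustrative of that pattern, not the housing authority’s actual software; the video file, doorway coordinates, and threshold are all hypothetical.

```python
# Illustrative sketch only: flag frames with motion inside a doorway region,
# using simple frame differencing with OpenCV. Path, ROI, and threshold are hypothetical.
import cv2

VIDEO_PATH = "hallway_export.mp4"      # hypothetical exported camera footage
DOORWAY_ROI = (420, 180, 120, 260)     # hypothetical x, y, width, height of one doorway
MOTION_THRESHOLD = 800                 # changed pixels needed to count as "movement"

def frames_with_doorway_motion(path=VIDEO_PATH, roi=DOORWAY_ROI):
    x, y, w, h = roi
    cap = cv2.VideoCapture(path)
    prev = None
    hits = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Crop to the doorway, then grayscale and blur to suppress sensor noise
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        crop = cv2.GaussianBlur(crop, (21, 21), 0)
        if prev is not None:
            diff = cv2.absdiff(prev, crop)
            changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
            if changed > MOTION_THRESHOLD:
                hits.append(frame_idx)  # frame where movement near the doorway was seen
        prev = crop
        frame_idx += 1
    cap.release()
    return hits
```

The point is less the code than how little of it there is: once the cameras are installed, turning months of a tenant’s comings and goings into an eviction exhibit is a weekend scripting job.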

Keep reading

Turn Off, Don’t Automate, the Killing Machine

The quest to develop and refine technologically advanced means of committing mass homicide continues, with Pentagon tacticians ever eager to make the military leaner and more lethal. Drone swarms already exist, and as insect-facsimile drones are marketed and produced, we can expect bug drone swarms to appear soon in the skies above places where suspected “bad guys” are said to reside—along with their families and neighbors. Following the usual trajectory, it is only a matter of time before surveillance bug drones are “upgraded” for combat, making it easier than ever for anyone who wishes to kill human beings to do so, whether military personnel, factional terrorists, or apolitical criminals. The development of increasingly lethal and “creative” means of committing homicide forges ahead not because anyone needs it but because it is generously funded by the U.S. Congress under the assumption that anything labeled a tool of “national defense” is, by definition, good.

To some, the argument from necessity for drones may seem to have merit, given the ongoing military recruitment crisis. There are many good reasons why people no longer wish to enlist in the military, but rather than review the missteps taken and counterproductive measures implemented in the name of defense throughout the twenty-first century, administrators ignore the most obvious answer to the question of why young people are less enthusiastic than ever about signing their lives away. Why did the Global War on Terror spread from Afghanistan and Iraq to engulf other countries as well? Critics have offered persuasive answers to this question, above all that killing, torturing, maiming, and terrorizing innocent people led to an outpouring of sympathy for groups willing to resist the invaders of their lands. As a direct consequence of U.S. military intervention, Al Qaeda franchises such as ISIS emerged and proliferated. Yet the military plows ahead undeterred in its professed mission to eliminate “the bad guys,” with the killers seemingly oblivious to the fact that they are the primary creators of “the bad guys.”

Meanwhile, the logic of automation has been openly and enthusiastically embraced as the way of the future for the military, as in so many other realms. Who needs soldiers anyway, given that they can and will be replaced by machines? Just as grocery stores today often have more self-checkout stations than human cashiers, the military has been replacing combat pilots with drone operators for years. Taking human beings altogether out of the killing loop is the inevitable next step, because war architects focus on lethality, as though it were the only measure of military success. Removing “the human factor” from warfare will increase lethality and may decrease, if not eliminate, problems such as PTSD. But at what price?

Never a very self-reflective lot, war architects have even less inclination than before to consider whether their interventions have done more harm than good, precisely because of the glaring case of Afghanistan. After twenty years of attempting to eradicate the Taliban, the U.S. military finally retreated in 2021, leaving the Islamic Emirate of Afghanistan (as the Taliban now style themselves) in power, just as they were in 2001. By focusing on how slick and “neat” the latest and greatest implements of techno-homicide are, those who craft U.S. military policy can divert attention from their abject incompetence at actually winning a war or protecting, rather than annihilating, innocent people.

Keep reading

Pentagon Now Actively Hunting For UFOs With Purpose-Built Sensors

Only between 2% and 5% of the 800 cases currently being investigated by the Pentagon’s All-domain Anomaly Resolution Office (AARO) “display signatures that could reasonably be described as anomalous,” the head of that office testified during a NASA hearing Wednesday. But while the majority of those cases “demonstrate mundane characteristics of readily explainable sources,” a “large number” of those sightings are “technically unresolved…primarily due to a lack of data associated with those cases,” AARO Director Sean M. Kirkpatrick testified. That’s a key reason, he said, why AARO has been developing its own “purpose-built sensors” to detect, track and characterize objects in suspected hot spots.

“Without sufficient data, we are unable to reach defendable conclusions that meet the high scientific standards we set for resolution,” Kirkpatrick said at NASA’s UAP Independent Study Team hearing Wednesday. UAP, or unidentified aerial phenomena, is the new term for UFOs.

Keep reading

Japan: AI Systems Can Use Any Data, from Any Source – Even Illegal Ones

While other countries are mulling where to put the brakes on AI development, Japan is going full steam ahead, with the government recently announcing that no data will be off-limits for AI.

In a recent meeting, Keiko Nagaoka, Japanese Minister of Education, Culture, Sports, Science, and Technology, confirmed that no law, including copyright law, will prevent AIs from accessing data in the country.

AIs will be allowed to use data for training, “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise,” said Nagaoka.

The decision is a blow to copyright holders who argue that AI using their intellectual property to produce new content undermines the very concept of copyright. The issue has already emerged in the west — an AI-generated song using the voice of Drake and The Weeknd went viral on streaming services in April, before being swiftly removed.

In the west, much of the discourse around AI is focused on potential harms. AI leaders recently warned governments that development of the technology carries with it a “risk of extinction,” while news companies worry about deepfakes and “misinformation.”

The Biden Administration’s leftist regulators at the FTC, meanwhile, worry that “historically biased” data (such as crime data with racial imbalances) will lead to outcomes that conflict with “civil rights.” Many leftist agitators in the west want to cut off AIs from such data.

Keep reading

Elon Musk’s Neuralink Gets FDA Approval to Study Brain Implants in Humans

Elon Musk’s neurotechnology company Neuralink announced on Thursday it has obtained approval from the U.S. Food and Drug Administration (FDA) to carry out a clinical study of brain implants in humans.

It marks the first in-human clinical study for the company.

“This is the result of incredible work by the Neuralink team in close collaboration with the FDA and represents an important first step that will one day allow our technology to help many people,” the company said in a statement.

“Recruitment is not yet open for our clinical trial. We’ll announce more information on this soon!” it added, without providing further details about the trial.

Musk, in response, wrote on Twitter: “Congratulations Neuralink team!”

Keep reading

Los Angeles City Council approves ‘robot dog’ donation to police

A four-legged “robot dog” is the newest member of the Los Angeles Police Department after the City Council voted 8-4 to approve its acquisition on Tuesday.

The robot – a quadruped unmanned ground vehicle called “Spot” – is manufactured by Boston Dynamics and is valued at nearly $280,000.

Spot was first offered as a gift to the LAPD’s Metropolitan Division by the Los Angeles Police Foundation in March. The foundation is a nonprofit group that has “awarded more than $44 million in grants to the LAPD” since 1998, according to its website.

LAPD said the robot will be used in a limited number of scenarios including “incidents involving active shooters, assessment of explosives, hostage situations, natural disasters, hazardous materials assessment, barricaded suspects and search and rescue missions.”

Spot will not be equipped with any weapons systems, facial recognition software or analysis capabilities, officials said. It will also not be used for routine patrol duties or covert surveillance operations.

Keep reading

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, provides a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.
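For a sense of how those severity scores are meant to be consumed, here is a minimal sketch of calling a text-moderation endpoint of this kind and acting on the result. The endpoint path, API version, and response field names follow Microsoft’s published Content Safety examples but should be treated as assumptions; the resource name, key, and action threshold are placeholders.

```python
# Minimal sketch, not production code: send text to a severity-scoring moderation
# endpoint and flag anything at or above an action threshold. Endpoint path,
# API version, and response fields are assumptions based on Microsoft's published
# Content Safety examples; resource name, key, and threshold are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"                                            # placeholder
ACTION_THRESHOLD = 4  # assumed severity level at which a moderator should act

def moderate_text(text: str) -> list[dict]:
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # assumed API version
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"categoriesAnalysis": [{"category": ..., "severity": ...}]}
    return [
        item for item in resp.json().get("categoriesAnalysis", [])
        if item.get("severity", 0) >= ACTION_THRESHOLD
    ]
```

In practice the scores are advisory: flagged items are routed to human moderators, which is the hand-off the severity levels are meant to support.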

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s Bing chatbot and GitHub Copilot, the AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Keep reading

Insect Ears Inspire Superefficient Microphones 

Insect ears are inspiring the design of tiny 3D-printed microphones that could pinpoint a sound’s direction, replacing the much bulkier, energy-hungry gear currently needed for such purposes, researchers say.

The insect ear possesses a thin sheet of tissue, known as the tympanum, that is much like the human eardrum. Sound waves make this membrane vibrate, and the sensory apparatus within the ear converts these vibrations into nerve signals.

Although an insect’s tympanum is typically a millimeter or so wide, insects are capable of feats of hearing that currently require much larger devices. For instance, to pinpoint which direction a gunshot came from, the vehicle-mounted Boomerang system from Raytheon depends on a microphone array roughly a half-meter wide. In comparison, the nocturnal moth Achroia grisella can also identify which direction sounds are coming from, and can do so with just one tympanum only about half a millimeter wide. (The moth likely evolved this skill to detect both predatory bats and ultrasonic mating calls.)
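For context on what the moth is outperforming: a conventional array like Boomerang’s infers a sound’s bearing from the difference in arrival time at microphones spaced well apart, which is why that hardware cannot shrink much without losing angular resolution. A back-of-the-envelope sketch of the conventional approach, with hypothetical spacing and delay values:

```python
# Back-of-the-envelope sketch: bearing estimation from time-difference-of-arrival
# (TDOA) between two conventional microphones. Spacing and delays are hypothetical.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def bearing_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Bearing (degrees from broadside) of a far-field source: sin(theta) = c * dt / d."""
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp small numerical overshoot
    return math.degrees(math.asin(ratio))

# A 30-degree bearing produces a ~729 microsecond inter-mic delay on a half-meter
# array (Boomerang scale), but only ~1.5 microseconds across a 1 mm baseline,
# which is why a single sub-millimeter tympanum resolving direction is so striking.
print(bearing_from_tdoa(729e-6, 0.5))     # ~30 degrees
print(bearing_from_tdoa(1.46e-6, 0.001))  # ~30 degrees
```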

In order to mimic what insect ears can accomplish, scientists at first attempted to copy insect structures with silicon microelectromechanical systems (MEMS). However, the resulting devices lacked the flexibility and the microscopic 3D structural variations seen in real insect ears that help them hear so well, says Andrew Reid, an electrical engineer at the University of Strathclyde, in Glasgow.

Now Reid and his colleagues are experimenting with 3D printing to more faithfully copy insect ears. He detailed his team’s research at the annual meeting of the Acoustical Society of America on 10 May in Chicago. The research builds upon the team’s earlier work to understand how insects have such stellar directional hearing.

The researchers have 3D printed a variety of membranes to copy a range of insect tympana. The base material for these membranes is typically a flexible hydrogel such as polyethylene glycol diacrylate. The membranes also often include a piezoelectric material such as the perovskite oxide crystal known as PMN-PT, which can convert acoustic energy to electric signals, and electrically conductive silver-based compounds, Reid says.

Keep reading

Human-induced stem cells from Seattle now in space

Scientists are taking a deeper dive into the impacts of microgravity on the human body. Seattle’s Allen Institute is playing a key role in this experiment.

“This particular mission,” Allen Institute for Cell Science scientist Brock Roberts said, “will provide yet another test for the fundamental capability of these stem cells.”

On Monday morning, SpaceX’s Dragon spacecraft, launched on a Falcon 9 rocket with four Axiom-2 crew members inside, docked at the International Space Station (ISS). Also on that flight are human induced pluripotent stem cells produced by scientists at the Allen Institute. This is the first time cells from the Allen Institute have traveled to space.

The Axiom-2 crew members will spend eight days at the ISS. The four astronauts will conduct scientific experiments, which include observing the effects of microgravity on cell growth and development.

“The stem cells are capable of many amazing things: they can differentiate into many different tissues,” Roberts said. “They can proliferate indefinitely without changing their fundamental character, but we don’t know a lot about their ability to exist and perform all of those fascinating phenomena in space, so we will find out about that.”

The stem-cell study is part of a series of NASA-funded experiments led by researchers at Cedars-Sinai Medical Center in Los Angeles.

Keep reading

Consumer Group Warns Smartphone Facial Recognition Apps Are Vulnerable to Spoofing

Smartphone face biometrics from many leading brands are vulnerable to spoof attacks with 2D photographs, according to a new report from UK-based consumer testing and review group Which?, as covered by Yahoo Finance UK.

The group says the vulnerability is “unacceptable,” and has “worrying implications” for users’ security.

On-device biometrics are used for device unlocking and local authentication, while KYC processes for customer onboarding and strong remote identity verification are typically carried out with server-side biometrics and other signals, with a layer of liveness or presentation attack detection.
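To see why a flat photograph can work at all, here is a minimal, hypothetical sketch of the gap the report points to: if unlocking reduces to a similarity comparison between face templates and the liveness or presentation-attack check is weak or absent, a printed photo of the enrolled user scores essentially as well as the live user. The templates, threshold, and function names below are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Illustrative sketch, not any vendor's pipeline: 2D face matching compares a probe
# template against the enrolled template. A printed photo of the owner produces a
# near-identical template, so only the liveness/depth check stands in its way.
import numpy as np

MATCH_THRESHOLD = 0.80  # assumed cosine-similarity unlock threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_unlock(probe_template: np.ndarray,
                  enrolled_template: np.ndarray,
                  liveness_passed: bool) -> bool:
    # Handsets that fail a photo test behave as though liveness_passed is
    # missing or trivially true, leaving only the similarity comparison.
    similarity = cosine_similarity(probe_template, enrolled_template)
    return liveness_passed and similarity >= MATCH_THRESHOLD
```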

The phones tested include Honor, Motorola, Nokia, Oppo, Samsung, Vivo and Xiaomi handsets. Apple’s 3D FaceID biometrics were not fooled by the photos. The devices tested range in price from £89.99 to nearly £1,000 (approximately US$112 to $1,244), but the majority of phones that failed the test are lower-cost or mid-range models.

Out of 48 new smartphone models tested, 60 percent were not vulnerable to spoofing with a photograph, which still leaves roughly two in five handsets that could be unlocked this way.

Keep reading