First Patient Dosed With Experimental Cancer-Killing Virus in New Trial

Scientists dosed the first patient this week in a small clinical trial of an experimental cancer treatment—one that relies on a novel kind of ally. The treatment uses a virus engineered to selectively kill cancer cells, while also amplifying the body’s immune response to the cancer. The hope is that this therapy can help those with advanced solid tumor cancers, in combination with other existing drugs.

The CF33-hNIS virus, also called Vaxinia, was originally created by researchers at the City of Hope National Medical Center in California. It’s now being jointly developed with the company Imugene Limited.

Vaxinia is billed as an oncolytic virus, meaning it preferentially targets and infects tumor cells. Scientists have been hopeful about using these kinds of viruses to directly kill off cancer cells for more than a century, but with limited success so far. In recent years, some teams have explored a slightly different plan of attack. This genetically modified virus not only infects and harms cancer cells, but also forces these cells to become more recognizable to the immune system.

This strategy, the researchers hope, will then allow other treatments that also boost our immune response to cancer cells to be more effective, particularly against hard-to-target solid tumors. These treatments are collectively known as immunotherapy. In early animal and lab experiments, the virus has been shown to reduce the size of colon, lung, breast, ovarian, and pancreatic cancer tumors.


Scientists ‘really surprised’ after gene-editing experiment unexpectedly turns hamsters into hyper-aggressive bullies

A team of neuroscience researchers was left “really surprised” after a gene-editing experiment unexpectedly created hyper-aggressive hamsters, according to a statement by Georgia State University (GSU).

The GSU research, published in Proceedings of the National Academy of Sciences (PNAS), set out to learn more about the biology behind the social behavior of mammals.

The scientists used Syrian hamsters and CRISPR-Cas9 — a revolutionary technology that makes it possible to turn genes on or off in cells — to knock out a receptor for vasopressin, a hormone associated with enhanced aggression.

The scientists anticipated that doing so would “dramatically” alter the social behavior of the Syrian hamsters, making them more peaceful. It did change their behavior, but not how they had expected.

“We were really surprised at the results,” said the study’s lead author, GSU professor H. Elliott Albers, in the university’s statement.

“We anticipated that if we eliminated vasopressin activity, we would reduce both aggression and social communication,” Albers continued. “But the opposite happened.”

The hamsters without the receptor displayed “high levels of aggression” towards hamsters of the same sex compared to their counterparts with the receptors intact, the study said.


Your iPhone Is Vulnerable to Hacking Even When Turned Off

A new report has revealed that iPhones are vulnerable to malware attacks even when they’re turned off.

Wired reports that according to a recent study from researchers at Germany’s Technical University of Darmstadt, iPhones are still vulnerable to malware attacks even when powered off. When an iPhone is turned off, chips inside the device continue to run in a low-power state, making it possible to locate a lost or stolen device using the Find My app.

Now, researchers have developed a method to run malware on iPhones even when the devices appear to be powered off. Because the Bluetooth chip in iPhones has no way to digitally sign or even encrypt the firmware it runs, the researchers were able to exploit this lack of security to load malicious firmware, allowing them to track the iPhone’s location or run new features.

In a recently published paper, the researchers studied the risk posed by this low-power mode, which allows the chips responsible for NFC, ultra-wideband, and Bluetooth to remain active for up to 24 hours after a device is turned off.


Transfusion of brain fluid from young mice is a memory-elevating elixir for old animals

For a human, one of the first signs someone is getting old is the inability to remember little things; maybe they misplace their keys, or get lost on an oft-taken route. For a laboratory mouse, it’s forgetting that when bright lights and a high-pitched buzz flood your cage, an electric zap to the foot quickly follows.

But researchers at Stanford University discovered that if you transfuse cerebrospinal fluid from a young mouse into an old one, it will recover its former powers of recall and freeze in anticipation. They also identified a protein in that cerebrospinal fluid, or CSF, that penetrates into the hippocampus, where it drives improvements in memory.

The tantalizing breakthrough, published Wednesday in Nature, suggests that youthful factors circulating in the CSF, or drugs that target the same pathways, might be tapped to slow the cognitive declines of old age. Perhaps even more importantly, it shows for the first time the potential of CSF as a vehicle to get therapeutics for neurological diseases into the hard-to-reach fissures of the human brain.

“This is the first study that demonstrates real improvement in cognitive function with CSF infusion, and so that’s what makes it a real milestone,” said Maria Lehtinen, a neurologist at Boston Children’s Hospital and Harvard Medical School, who was not involved in the new research. “The super-exciting direction here is that it lends support to the idea that we can harness the CSF as a therapeutic avenue for a broad range of conditions.”


You’ve Been Flagged as a Threat: Predictive AI Technology Puts a Target on Your Back

“The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”—Milton Friedman

You’ve been flagged as a threat.

Before long, every household in America will be similarly flagged and assigned a threat score.

Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment culled together by a computer program run by artificial intelligence.


Rand Paul: Time to ban feds from tracking Americans through their cellphone location data

Sen. Rand Paul (R-Ky.), a fierce protector of freedom and privacy, says it is time to ban federal agencies from being able to track Americans’ behavior by buying their cell phone location data from commercial vendors.

“When the government is trying to snoop on your behavior, it’s wrong, and there should be laws against it,” Paul told the “Just the News, Not Noise” television show in an exclusive interview aired Wednesday night.

Paul’s comments came after newly released government documents revealed that the Centers for Disease Control and Prevention (CDC) tracked Americans’ compliance with pandemic lockdowns by buying and monitoring their cellphone geospatial data from commercial vendors.

Such data is collected on each American from apps they use on their smartphones and sold by third-party brokers unless a user explicitly opts out of such collection for each app. Increasingly, law enforcement and other government agencies have been acquiring the data for official work, though the CDC’s purchase was the first publicly disclosed use of such data to track Americans’ private health behavior.

The data also was bought and used by the election integrity group True the Vote to identify people suspected of illegally collecting ballots in the 2020 Georgia election, a revelation that has prompted a formal investigation by the Georgia Secretary of State’s office.


First Human Trial Starts for Brain-Computer Interface

Synchron Inc., which develops a so-called brain-computer interface and competes with Elon Musk’s Neuralink Corp., enrolled the first patient in its U.S. clinical trial, putting the company’s implant on a path toward possible regulatory approval for wider use in people with paralysis.

The early feasibility study is funded by the National Institutes of Health and will evaluate the safety of the device, known as the Stentrode, the New York-based company said. It will also assess how effective the Stentrode is in helping patients control digital devices hands-free.

The trial represents a landmark for Synchron, the first startup working on brain-machine interfaces to begin a clinical trial seeking approval to sell its product. It also puts Synchron ahead of Neuralink, which is better funded but is still recruiting a trial director. Neuralink raised $205 million last year. Synchron has raised $70 million total.


Mental health and worship apps are found to be some of the most privacy invasive

Apps that deal with some of the most sensitive and personal data, such as that concerning a user’s mental health or religious activities, are said to rank among the worst privacy offenders.

This is the conclusion of a study conducted by the Mozilla Foundation, which singled out mental health and prayer apps as being prone to track and collect data revealing a person’s state of mind, feelings, and thoughts, and then “share” that data for profit via targeted advertising.

Mozilla’s team looked into 32 apps from this category, putting a “privacy not included” label on 29 and publishing the findings in a guide of the same name. Of these apps, 25 didn’t pass the foundation’s minimum security standards around password quality and handling of security updates.

PTSD Coach, developed by the US Department of Veterans Affairs, has “strong privacy policies and security practices,” while chatbot Wysa “seems to value users’ privacy.” And the Catholic prayer app Hallow was the only one to “respond in a timely manner” to Mozilla’s emails.

Besides these technical issues, the apps singled out in the report are also said to target “vulnerable users with personalized advertisements” and track and share biometric data.


Americans warned: Police now authorized to track you via cell phone

Police across America now can track citizens through their cell phones – without a warrant – despite the Fourth Amendment’s ban on warrantless searches, according to a team of civil-rights lawyers at the Rutherford Institute.

That’s the result of the U.S. Supreme Court deciding not to intervene in a lower court decision that authorized exactly that.

The institute had filed a friend-of-the-court brief in the case Hammond v. U.S. that challenged the tracking of people through their cell phones as unconstitutional.

That tracking can tell police a person’s location with great precision, “whether that person is at home, at the library, a political event, a doctor’s office, etc.,” the organization reported.

“Americans are being swept up into a massive digital data dragnet that does not distinguish between those who are innocent of wrongdoing, suspects, or criminals. Cell phones have become de facto snitches, offering up a steady stream of digital location data on users’ movements and travels,” said constitutional attorney John W. Whitehead, president of The Rutherford Institute.

“Added to that, police are tracking people’s movements by way of license plate toll readers; scouring social media posts; triangulating data from cellphone towers and WiFi signals; layering facial recognition software on top of that; and then cross-referencing footage with public social media posts, all in an effort to identify, track and eventually round us up. This is what it means to live in a suspect society,” he said.


An algorithm that screens for child neglect raises concerns

Inside a cavernous stone fortress in downtown Pittsburgh, attorney Robin Frank defends parents at one of their lowest points – when they are at risk of losing their children.

The job is never easy, but in the past she knew what she was up against when squaring off against child protective services in family court. Now, she worries she’s fighting something she can’t see: an opaque algorithm whose statistical calculations help social workers decide which families will have to endure the rigors of the child welfare system, and which will not.

“A lot of people don’t know that it’s even being used,” Frank said. “Families should have the right to have all of the information in their file.”

From Los Angeles to Colorado and throughout Oregon, as child welfare agencies use or consider tools similar to the one in Allegheny County, Pennsylvania, an Associated Press review has identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system. Related issues have already torpedoed some jurisdictions’ plans to use predictive models, such as the tool notably dropped by the state of Illinois.

According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.
