The end of gambling addiction? Scientists develop brain chip that lowers risk-taking in monkeys

Monkeys are natural risk-takers, but when scientists implanted chips into their brains, they became much more careful, according to a new study.

A team of researchers at Kyoto University in Japan used flashes of light from implanted chips to activate two different sections of the macaque monkeys’ brains. 

Switching one on encouraged them to take bigger risks with the hope of a bigger payoff, while switching the other section on led the animals to settle for a smaller but more certain reward.

This research offers insight into the neural roots of gambling addiction, said the researchers behind the study. 

But before digging into the brain, scientists began by figuring out whether their six monkeys liked to gamble.

They trained the macaques to look at different colored spots on a screen to receive a water reward.

Some spots would give the monkey a small reward 90 percent of the time – low risk, low reward. 

Others gave a reward that was 10 times larger, but it only paid out 10 percent of the time – high risk, high reward. 
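Taken at face value, the two options pay out almost the same over many trials. A quick expected-value check (assuming, for illustration, that the small reward is one unit of water; the study’s actual volumes aren’t given here):

```python
# Expected value per trial for each spot type, using the stated odds and a
# hypothetical reward size of 1 unit for the low-risk spot.
small_reward = 1
large_reward = 10 * small_reward

ev_low_risk = 0.90 * small_reward   # pays 90% of the time -> 0.9 units/trial
ev_high_risk = 0.10 * large_reward  # pays 10% of the time -> 1.0 unit/trial

print(ev_low_risk, ev_high_risk)    # 0.9 vs 1.0: nearly identical on average
```

On average, then, the gamble is only slightly better, which makes the monkeys’ overwhelming preference for it the striking part.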

Overwhelmingly, the monkeys went for the high-risk, high-reward spots. Like gamblers at a slot machine, they kept betting with their eye on the big payout, even though they lost more often than they won.

Next the team tried to figure out which brain areas were in control of this risk-reward calculation. 

Keep reading

PRISONS ACROSS THE U.S. ARE QUIETLY BUILDING DATABASES OF INCARCERATED PEOPLE’S VOICE PRINTS

Roughly six months ago at New York’s Sing Sing prison, John Dukes says he was brought out with cellmates to meet a corrections counselor. He recalls her giving him a paper with some phrases, and offering him a strange choice: He could go up to the phone and utter the phrases that an automated voice would ask him to read, or he could choose not to and lose his phone access altogether.

Dukes did not know why he was being asked to make this decision, but he felt troubled as he heard other men ahead of him speaking into the phone and repeating certain phrases from the sheets the counselors had given them.

“I was contemplating, ‘Should I do it? I don’t want my voice to be on this machine,’” he recalls. “But I still had to contact my family, even though I only had a few months left.”

So, when it was his turn, he walked up to the phone, picked up the receiver, and followed a series of automated instructions. “It said, ‘Say this phrase, blah, blah, blah,’ and if you didn’t say it clearly, they would say, ‘Say this phrase again,’ like ‘Cat’ or ‘I’m a citizen of the United States of America.’” Dukes said he repeated such phrases for a minute or two. The voice then told him the process was complete.

“Here’s another part of myself that I had to give away again in this prison system,” he remembers thinking as he walked back to the cell.

Dukes, who was released in October, says he was never told about what that procedure was meant to do. But contracting documents for New York’s new prison phone system, obtained by The Appeal in partnership with The Intercept, and follow-up interviews with prison authorities, indicate that Dukes was right to be suspicious: His audio sample was being “enrolled” into a new voice surveillance system.

In New York and other states across the country, authorities are acquiring technology to extract and digitize the voices of incarcerated people into unique biometric signatures, known as voice prints. Prison authorities have quietly enrolled hundreds of thousands of incarcerated people’s voice prints into large-scale biometric databases. Computer algorithms then draw on these databases to identify the voices taking part in a call, and to search for other calls where the voices of interest are detected. Some programs, like New York’s, even analyze the voices of call recipients outside prisons to track which outsiders speak to multiple prisoners regularly.
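Speaker-recognition systems of this kind generally reduce each voice sample to a fixed-length numeric “embedding” and compare embeddings by similarity. A minimal sketch of that matching step (a generic illustration, not New York’s actual system; the embeddings, dimension, and threshold here are invented):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voice-print embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# enrolled voice prints: fixed-length vectors produced by a speaker model
database = {
    "speaker_0001": np.random.rand(192),
    "speaker_0002": np.random.rand(192),
}
call_embedding = np.random.rand(192)  # embedding of audio from a new call

MATCH_THRESHOLD = 0.75  # illustrative; real systems tune this on labeled data
matches = {}
for speaker_id, enrolled in database.items():
    score = cosine_similarity(call_embedding, enrolled)
    if score >= MATCH_THRESHOLD:
        matches[speaker_id] = score

print(matches)  # enrolled speakers whose prints resemble the call audio
```

Searching past calls for a voice of interest is then a matter of running the same comparison against every stored call segment.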

Keep reading

We Absolutely Do Not Need an FDA for AI

I don’t know whether artificial intelligence (AI) will give us a 4-hour workweek, write all of our code and emails, and drive our cars—or whether it will destroy our economy and our grasp on reality, fire our nukes, and then turn us all into gray goo. Possibly all of the above. But I’m supremely confident about one thing: No one else knows either.

November saw the public airing of some very dirty laundry at OpenAI, the artificial intelligence research organization that brought us ChatGPT, when the board abruptly announced the dismissal of CEO Sam Altman. What followed was a nerd game of thrones (assuming robots are nerdier than dragons, a debatable proposition) that consisted of a quick parade of three CEOs and ended with Altman back in charge. The shenanigans highlighted the many axes on which even the best-informed, most plugged-in AI experts disagree. Is AI a big deal, or the biggest deal? Do we owe it to future generations to pump the brakes or to smash the accelerator? Can the general public be trusted with this tech? And—the question that seems to have powered more of the recent upheaval than anything else—who the hell is in charge here?

OpenAI had a somewhat novel corporate structure, in which a nonprofit board tasked with keeping the best interests of humanity in mind sat on top of a for-profit entity with Microsoft as a significant investor. This is what happens when effective altruism and ESG do shrooms together while rolling around in a few billion dollars.

After the events of November, this particular setup doesn’t seem to have been the right approach. Altman and his new board say they’re working on the next iteration of governance alongside the next iteration of their AI chatbot. Meanwhile, OpenAI has numerous competitors—including Google’s Bard, Meta’s Llama, Anthropic’s Claude, and something Elon Musk built in his basement called Grok—several of which differentiate themselves by emphasizing different combinations of safety, profitability, and speed.

Labels for the factions proliferate. The e/acc crowd wants to “build the machine god.” Techno-optimist Marc Andreessen declared in a manifesto that “we believe intelligence is in an upward spiral—first, as more smart people around the world are recruited into the techno-capital machine; second, as people form symbiotic relationships with machines into new cybernetic systems such as companies and networks; third, as Artificial Intelligence ramps up the capabilities of our machines and ourselves.” Meanwhile, Snoop Dogg channeled AI pioneer-turned-doomer Geoffrey Hinton on a recent podcast: “Then I heard the old dude that created AI saying, ‘This is not safe ’cause the AIs got their own mind and these motherfuckers gonna start doing their own shit.’ And I’m like, ‘Is we in a fucking movie right now or what?’” (Hinton told Wired, “Snoop gets it.”) And the safetyists just keep shouting the word guardrails. (Emmett Shear, who was briefly tapped for the OpenAI CEO spot, helpfully tweeted this faction compass for the uninitiated.)

Keep reading

US Military Launches Highly Classified Unmanned Space Plane

The US Space Force launched a secretive spaceplane on Thursday aboard a heavier booster that could feasibly send it into a higher orbit than ever before.

The launch marks the 9th flight of SpaceX’s three-core Falcon Heavy and the 7th flight of the military’s (not so) secret unmanned spaceplane, the X-37B, on a mission designated USSF-52.

The launch had previously been scheduled for Dec. 10, but it was scrubbed due to issues with ground equipment just 30 minutes before liftoff, pushing the event back 18 days.

Officially, the X-37B will enter into various orbits around Earth and serve as a testing ground for NASA’s study of the effects of long-duration exposure to space on organic materials, the Epoch Times reports, adding that the mission will also include experiments having to do with “space domain awareness,” which the US Space Force defines as the ability to “rapidly detect, warn, characterize, attribute, and predict threats to national, allied, and commercial space systems.”

Keep reading

ON THE EVE OF AN A.I. ‘EXTINCTION RISK’? IN 2023, ADVANCEMENTS IN A.I. SIGNALED PROMISE, AND PROMPTED WARNINGS FROM GLOBAL LEADERS

In the field of artificial intelligence, OpenAI, led by CEO Sam Altman and known for its ChatGPT chatbot and mysterious Q* AI model, has emerged as a leading force within Silicon Valley.

While advancements in AI may hold the potential for positive future developments, OpenAI’s Q* and other AI platforms have also led to concerns among government officials worldwide, who increasingly warn about possible threats to humanity that could arise from such technologies.

2023’S BIGGEST AI UPSET

Among the year’s most significant controversies involving AI, in November Altman was removed from his duties as CEO of OpenAI, only to be reinstated 12 days later amidst a drama that left several questions that, to date, remain unresolved.

On November 22, just days after Altman’s temporary ousting as the CEO of OpenAI, two people with knowledge of the situation told Reuters that “several staff researchers wrote a letter to the board of directors” warning of “a powerful artificial intelligence discovery that they said could threaten humanity.”

In the letter addressed to the board, the researchers reportedly highlighted the capabilities and potential risks of the new system. Although the sources did not outline specific safety concerns, some of the letter’s authors had reportedly raised concerns about an AI scientist team, formed from the earlier “Code Gen” and “Math Gen” teams, whose work aimed to upgrade the AI’s reasoning abilities and its capacity to carry out scientific tasks.

In a surprising turn of events two days earlier, on November 20, Microsoft had announced its decision to onboard Altman and Greg Brockman, the president of OpenAI and one of its co-founders, who had resigned in solidarity with Altman. Microsoft said at the time that the duo was set to run an advanced research lab for the company.

Four days later, Altman was reinstated as the CEO of OpenAI after 700 of the company’s employees threatened to quit and join Microsoft. In a recent interview with The Verge, Altman disclosed his initial response to the invitation to return following his dismissal, saying it “took me a few minutes to snap out of it and get over the ego and emotions to then be like, ‘Yeah, of course I want to do that.’”

“Obviously, I really loved the company and had poured my life force into this for the last four and a half years full-time, but really longer than that with most of my time. And we’re making such great progress on the mission that I care so much about, the mission of safe and beneficial AGI,” Altman said.

But the AI soap opera doesn’t stop there. On November 30, Altman announced that Microsoft would join OpenAI’s board. The tech giant, holding a 49 percent ownership stake in the company after a $13 billion investment, would assume a non-voting observer position on the board. Amidst all this turmoil, questions remained about what, precisely, the new Q* model is, and why it had so many OpenAI researchers concerned.

Keep reading

Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media

Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.

Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether they contain “misinformation.”

Judging by the explanation Google attached to the filing, the company at first appears to blame its own industry for the proliferation of “misinformation”: the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging go viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”

Machine learning itself depends on algorithms being fed large amounts of data, and training comes in two broad types, “supervised” and “unsupervised.” Supervised learning trains on examples that humans have already labeled, while unsupervised learning hands the algorithm huge unlabeled datasets (such as images or, in this case, language) and asks it to “learn” to find structure in what it is “looking” at.

(Reinforcement can also be part of the process: in essence, the algorithm is trained to become increasingly efficient at detecting whatever those who create the system are looking for.)
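For a concrete sense of what the “supervised” variety looks like in practice, here is a minimal text-classification sketch using generic scikit-learn; the toy examples and labels are invented for illustration, and this is in no way Google’s patented model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy labeled data; a production system would train on large curated corpora
texts = [
    "verified report with cited sources",
    "shocking secret THEY don't want you to see!!!",
    "official statement from the agency",
    "forwarded chain message, share before it's deleted",
]
labels = [0, 1, 0, 1]  # 0 = ordinary content, 1 = flagged as IO messaging

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# the trained model "predicts" a probability that new text is IO messaging
print(model.predict_proba(["unbelievable hidden truth, share now"]))
```

Per the filing, a platform plugging into Google’s system would train its own “different prediction model” on its own data in roughly this fashion.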

The ultimate goal here would most likely be for Google to make its “misinformation detection,” i.e., its censorship, more efficient while targeting specific types of data.

Keep reading

China developing ‘brain warfare’ tech with devices that make you sleep and thought-controlled weapons

Two studies from China’s People’s Liberation Army indicate a focus on enhancing military capabilities, using advanced technology to achieve victory without traditional weapons.

The weapons in question aim to induce sleep, impair cognitive functions, reduce alertness, and affect decision-making, the report stated.

The report, “Warfare in the Cognitive Age: NeuroStrike and the PLA’s Advanced Psychological Weapons and Tactics,” was published earlier this month by The CCP Biothreats Initiative, a research group.

“The PLA is at the forefront of incorporating advanced technologies such as artificial intelligence, brain-computer interfaces and novel biological weapons into its military strategies,” the researchers stated.

“In summary, the PLA’s integration of cutting-edge technologies such as AI, [brain-computer interfaces], and biological weapons into its military arsenal brings significant psychological dimensions to warfare, extending beyond their physical effects,” according to the report.

Devices for cognitive warfare include anti-sleep glasses, designed to boost alertness. The report also describes other weapons, such as “soft-kill radio waves” that use electromagnetic energy to induce drowsiness in enemies.

There are also reports that China is working on weapons directly controlled by a soldier’s thoughts, enabling soldiers to manipulate enemies.

Keep reading

THIS STARTUP IS DEVELOPING A FUSION PROPULSION DRIVE FOR DEEP SPACE TRAVEL THAT COULD REACH MARS IN JUST TWO MONTHS

Helicity Space, a startup founded in 2018, is developing a fusion drive poised to transform space travel. With a fresh round of investment, the company is building a proof of concept for a fusion-powered propulsion system that could get from Earth to Mars in two months.

In a recent press release, the startup announced it had secured $5 million in seed funding from Airbus Ventures, TRE Ventures, Voyager Space Holdings, E2MC Space, Urania Ventures, and Gaingels.

Unlike traditional rockets that rely on chemical reactions, Helicity’s fusion drive operates on a magneto-inertial fusion method. This involves fusing two hydrogen isotopes into helium, releasing immense energy – ten million times more per unit mass than chemical fuels.
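The “ten million times” figure is consistent with textbook numbers. A back-of-envelope check (assuming a deuterium-tritium-style reaction as a representative case; Helicity’s exact fuel cycle isn’t specified here):

```python
# Rough energy-per-mass comparison: hydrogen-isotope fusion vs chemical fuel.
JOULES_PER_MEV = 1.602e-13
ATOMIC_MASS_KG = 1.66e-27

# D-T fusion releases ~17.6 MeV per reaction from ~5 atomic mass units of fuel
fusion_j_per_kg = (17.6 * JOULES_PER_MEV) / (5.0 * ATOMIC_MASS_KG)  # ~3.4e14

chemical_j_per_kg = 1.3e7  # roughly hydrogen/oxygen combustion

print(fusion_j_per_kg / chemical_j_per_kg)  # ~2.6e7: tens of millions
```

So per kilogram of propellant, fusion’s edge really is on the order of ten million.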

According to Helicity, the core technology behind its fusion drive efficiently converts electricity into plasma heating, using a unique approach to scale fusion conditions and directly produce thrust. The method, distinct from conventional magnetic or inertial (laser) fusion, employs self-organized Taylor relaxation and magnetic reconnection physics, combined with a peristaltic magnetic compression scheme.

In very simple terms, the engine holds a hot plasma in magnetic fields that are repeatedly forced together to the point where they must break apart and reconnect. It is this seesaw of magnetic forces that heats the plasma to the point where fusion occurs, forcing the nuclei so close that they overcome their electrostatic repulsion and fuse together. To simplify even further: the energy created by that fusion is aimed out of the tailpipe of the Helicity Drive, and you generate a helluva lot of thrust.

So much so that it cuts the current seven or eight-month trip to Mars down to two, or the six-year trip to Jupiter down to just one.

Keep reading

AlphV ransomware site is “seized” by the FBI. Then it’s “unseized.” And so on.

The FBI spent much of Tuesday locked in an online tug-of-war with one of the Internet’s most aggressive ransomware groups after taking control of infrastructure the group has used to generate more than $300 million in illicit payments to date.

Early Tuesday morning, the dark-web site belonging to AlphV, a ransomware group that also goes by the name BlackCat, suddenly started displaying a banner that said it had been seized by the FBI as part of a coordinated law enforcement action. Gone was all the content AlphV had posted to the site previously.

Around the same time, the Justice Department said it had disrupted AlphV’s operations by releasing a software tool that would allow roughly 500 AlphV victims to restore their systems and data. In all, Justice Department officials said, AlphV had extorted roughly $300 million from 1,000 victims.

An affidavit unsealed in a Florida federal court, meanwhile, revealed that the disruption involved FBI agents obtaining 946 private keys used to host victim communication sites. The legal document said the keys were obtained with the help of a confidential human source who had “responded to an advertisement posted to a publicly accessible online forum soliciting applicants for Blackcat affiliate positions.”
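Those private keys matter because, on Tor, control of a site is control of its key pair: a v3 onion address is derived directly from the service’s ed25519 public key, so whoever holds the matching private key can publish (or re-publish) content at that address, which is how a seizure banner can be swapped back and forth. A sketch of the address derivation (the random key below is a stand-in, not any real site’s key):

```python
import base64, hashlib, os

def onion_v3_address(ed25519_pubkey: bytes) -> str:
    """Derive a Tor v3 .onion address from a 32-byte ed25519 public key."""
    version = b"\x03"
    checksum = hashlib.sha3_256(
        b".onion checksum" + ed25519_pubkey + version
    ).digest()[:2]
    return base64.b32encode(
        ed25519_pubkey + checksum + version
    ).decode().lower() + ".onion"

print(onion_v3_address(os.urandom(32)))  # 56-character .onion address
```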

“In disrupting the BlackCat ransomware group, the Justice Department has once again hacked the hackers,” Deputy Attorney General Lisa O. Monaco said in Tuesday’s announcement. “With a decryption tool provided by the FBI to hundreds of ransomware victims worldwide, businesses and schools were able to reopen, and health care and emergency services were able to come back online. We will continue to prioritize disruptions and place victims at the center of our strategy to dismantle the ecosystem fueling cybercrime.”

Within hours, the FBI seizure notice displayed on the AlphV dark-web site was gone. In its place was a new notice proclaiming: “This website has been unseized.” The new notice, written by AlphV officials, downplayed the significance of the FBI’s action. While not disputing that the decryption tool worked for 400 victims, AlphV officials said that the disruption would prevent data belonging to another 3,000 victims from being decrypted.

“Now because of them, more than 3,000 companies will never receive their keys,” the notice read.

Keep reading

MACHINE LEARNING BREAKTHROUGH CREATES FIRST EVER AUTOMATED AI SCIENTIST

Carnegie Mellon University researchers have pioneered an artificially intelligent system, Coscientist, that can autonomously conduct scientific research and experimentation. Described in the journal Nature, the system, developed by Assistant Professor Gabe Gomes and doctoral students Daniil Boiko and Robert MacKnight, is the first to design, plan, and execute a chemistry experiment autonomously.

Utilizing large language models (LLMs) like OpenAI’s GPT-4 and Anthropic’s Claude, Coscientist demonstrates an innovative approach to conducting research through a human-machine partnership.

Coscientist’s design enables it to perform various tasks, from planning chemical syntheses using public data to controlling liquid handling instruments and solving optimization problems by analyzing previously collected data. Its architecture consists of multiple modules, including web and documentation search, code execution, and experiment automation, coordinated by a central module called ‘Planner,’ a GPT-4 chat completion instance. This structure allows Coscientist to operate semi-autonomously, integrating multiple data sources and hardware modules for complex scientific tasks.
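In spirit, the Planner is a loop: the GPT-4 instance decides which module to invoke next, and the module’s output is fed back into the conversation. A toy sketch of that loop (the command names, prompt, and module stubs below are illustrative assumptions, not the paper’s exact interface):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def web_search(query: str) -> str:     # stand-in for the web-search module
    return f"search results for: {query}"

def run_python(code: str) -> str:      # stand-in for the code-execution module
    return "execution output (sandboxed in a real system)"

def run_experiment(spec: str) -> str:  # stand-in for hardware automation
    return "experiment queued on liquid handler"

MODULES = {"SEARCH": web_search, "PYTHON": run_python, "EXPERIMENT": run_experiment}

def planner_step(history: list) -> list:
    """Ask the Planner for its next command and route it to a module."""
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system",
                   "content": "You plan chemistry experiments. Respond as "
                              "COMMAND: payload, where COMMAND is SEARCH, "
                              "PYTHON, or EXPERIMENT."}] + history,
    ).choices[0].message.content
    command, _, payload = reply.partition(":")
    result = MODULES.get(command.strip(), lambda p: "unknown command")(payload.strip())
    # feed the module's output back so the Planner can plan the next step
    return history + [{"role": "assistant", "content": reply},
                      {"role": "user", "content": result}]

history = [{"role": "user", "content": "Plan a synthesis of aspirin."}]
history = planner_step(history)
```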

“We anticipate that intelligent agent systems for autonomous scientific experimentation will bring tremendous discoveries, unforeseen therapies, and new materials,” the research team wrote in the paper. “While we cannot predict what those discoveries will be, we hope to see a new way of conducting research given by the synergetic partnership between humans and machines.”

The system’s capabilities were tested across different tasks, demonstrating its ability to precisely plan and execute experiments. For instance, Coscientist outperformed other models like GPT-3.5 and Falcon 40B in synthesizing compounds, particularly complex ones like ibuprofen and nitroaniline. This highlighted the importance of using advanced LLMs for accurate and efficient experiment planning.

A key aspect of Coscientist is its ability to understand and utilize technical documentation, which has always been a challenge in integrating LLMs with laboratory automation. By interpreting technical documentation, Coscientist enhances its performance in automating experiments. This capability was extended to a more diverse robotic ecosystem, such as the Emerald Cloud Lab (ECL), demonstrating Coscientist’s adaptability and potential for broad scientific application.

Keep reading