OpenTable is spying on you — and ratting out your bad habits, like showing up late or canceling on restaurants

What happens at the dining table no longer stays at the dining table.

If the city’s servers suddenly seem to know your go-to drink order, or that you always order extra croutons on your salad, you’re not going crazy.

Reservation platform OpenTable is spying on its users and compiling personal information on guests, both good and bad, to share with restaurants, from wine preferences to whether they cancel same-day reservations.

This allows eateries to tailor service to your preferences, save preferred seating or — if your AI notes reveal poor etiquette — cancel your reservation altogether, sources tell The Post.

“It’s not just spending habits or if they like Coca-Cola or bottled water. Now, we’re getting a taste of what a diner’s behavior at a restaurant is like: If they’re a late canceler, if they leave reviews a lot,” Shawn Hunter, a general manager for Sojourn Social on the Upper East Side, told The Post of the feature he first noticed two weeks ago.

Keep reading

China plans to block rare earth shipments to US military contractors: report

China is reportedly crafting a plan to block the US military from getting shipments of rare earth magnets – even as it eases restrictions on shipments to US companies making electronics and other consumer goods.

Beijing has repeatedly used its near-monopoly over rare earth metals – crucial to make everything from iPhones to military hardware like F-35 fighter jets and drones – in tense tariff talks with the Trump administration.

Beijing is planning a “validated end-user” system that fast-tracks shipments for approved civilian firms in the US, the Wall Street Journal reported, citing sources familiar with the plan. It would allow China’s President Xi Jinping to keep his promise to President Trump about easing exports while cutting out military contractors.

If it is enacted, the plan could cause ongoing headaches for US companies that make “dual use” products or have both civilian and military clients, such as certain automakers and aerospace companies, according to the report.

The White House did not immediately return a request for comment.

Keep reading

Is Saudi Arabia The Most Technocratic Nation On Earth?

Behold, the “Sky Stadium” announced by the Kingdom of Saudi Arabia as the latest jazzy addition to their utopian “smart city” project known as The Line or NEOM — an $8-trillion, 105-mile-long megastructure composed of two 1,640-foot-high mirrored slabs that enclose a creamy nougat center of jungly foliage and water features integrated with apartments, offices, schools, and (of course) shopping. It’s completely insane, you understand. The Line was first featured on this site in August 2022. Three years on, the project is buckling under the weight of its psychotic grandiosity.

The 46,000-seat Sky Stadium will be perched 1,150 feet up and is slated to be completed in time for the 2034 FIFA football (soccer) World Cup. The initial AI-generated renderings at the top of the page show it suspended on a skyscraper above a sprawling city, but it is actually designed to be “nested” somewhere between the two slabs of The Line.

First announced in 2017 as part of Crown Prince Mohammed bin Salman’s Vision 2030 diversification push, The Line was envisioned as a futuristic “cognitive city” with vertical urbanism, AI integration, 100-percent renewable energy (solar power), and 95-percent land preservation of its barren surroundings in the Tabuk province, “for nature.”

Dunno about you, but I’d be a little nervous about watching a soccer game 1,150 feet above the desert floor. Sounds like a super-gigantic version of one of those “sky bridge” failures of the 1990s, where a mere hundred drunken people swilling margaritas collapse a hanging architectural folly in a shopping mall. We’ll stand by for the halftime show there.

Keep reading

Satellite Communications Breakthrough Could Pave the Way Toward ‘Quantum Internet’

Researchers at the University of Technology Sydney (UTS) have developed a new method for sending once-impossible Earth-to-space quantum light transmissions, which could enable ultra-secure satellite communications in the future.

The technology produces a beam of entangled light particles, a feat previously achieved only in the space-to-ground direction. The UTS team revealed their advances in a paper recently published in Physical Review Research.

Quantum Communications

China has been operating space-based quantum communications for almost a decade, beginning with the Micius satellite in 2016. Subsequently, the Jinan01 satellite, launched this year, established a 12,900-kilometer quantum link connecting China to South Africa.

“Current quantum satellites create entangled pairs in space and then send each half of the pair down to two places on Earth – called a ‘downlink,’” said co-author Professor Alexander Solntsev. “It’s mostly used for cryptography, where only a few photons (particles of light) are needed to generate a secret key.”

Scientists previously believed that moving in the opposite direction would be impossible, as signal loss, interference, and scattering would degrade the uplink. Despite these issues, ground station transmitters have many advantages over satellite-based ones: they have easier access to power, enabling stronger signals, and they are far easier to maintain.

The researchers believe uplink technology will enable satellite-connected quantum computer networks, which pushed them to overcome the challenges involved. Their first step was to create a testable concept that addressed the interference issues, which they did.

“The idea is to fire two single particles of light from separate ground stations to a satellite orbiting 500 km above Earth, travelling at about 20,000 km per hour, so that they meet so perfectly as to undergo quantum interference,” said co-author Professor Simon Devitt. “Is this even possible?”

Keep reading

Hollywood Producer Buys Israeli NSO Spyware Maker, Hires David Friedman to Sell Hacking Tools to U.S.

Hollywood producer Robert Simonds has purchased the Israeli spyware maker NSO Group to bring it under “American” control and hired Trump’s former ambassador to Israel, David Friedman, to lobby the president to remove sanctions on the firm so it can sell its hacking tools to US law enforcement.

Though the company was sold to a consortium of alleged “Americans” led by Simonds, the NSO Group “said Sunday that it would continue to operate from Israel under the full regulatory authority of the Defense Ministry, as it expands its global footprint and seeks to resume operations in the US,” the Times of Israel reports.

From The Wall Street Journal, “Israeli Spyware Maker NSO Gets New Owners, Leadership and Seeks to Mend Reputation”:

TEL AVIV—NSO Group, the Israeli company behind Pegasus spyware, says a group of investors led by Hollywood producer Robert Simonds has acquired a controlling stake in the firm, which has named a former Trump official to lead an effort to restore its battered reputation.

The company, which has faced lawsuits and U.S. government sanctions since revelations that its technology was used to spy on political dissidents, human-rights advocates, journalists and American officials, declined to disclose the purchase price.

NSO’s new executive chairman, David Friedman, a former U.S. ambassador to Israel and onetime bankruptcy lawyer for President Trump, said he wants to use his ties to the Trump administration to help rebuild the company’s spyware business in the U.S.

“If the administration, as I expect they’ll be, is receptive to considering any opportunity that might keep Americans safer, it will consider us,” said Friedman, who splits his time between Florida and Israel.

This is naked influence peddling.

Keep reading

OpenAI asked Trump administration to expand Chips Act tax credit to cover data centers

A recent letter from OpenAI reveals more details about how the company hopes the federal government will support its ambitious plans for data center construction.

The letter — from OpenAI’s chief global affairs officer Chris Lehane and addressed to the White House’s director of science and technology policy Michael Kratsios — argued that the government should consider expanding the Advanced Manufacturing Investment Credit (AMIC) beyond semiconductor fabrication to cover electrical grid components, AI servers, and AI data centers.

The AMIC is a 35% tax credit that was included in the Biden administration’s Chips Act.

“Broadening coverage of the AMIC will lower the effective cost of capital, de-risk early investment, and unlock private capital to help alleviate bottlenecks and accelerate the AI build in the US,” Lehane wrote.
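To make the mechanics concrete, here is a minimal sketch of how an investment tax credit like the AMIC lowers the net cost of a qualifying project. The 35% rate comes from the article; the $10B project figure and the function itself are purely illustrative, not anything from OpenAI's letter.

```python
def effective_capex(capex: float, credit_rate: float = 0.35) -> float:
    """Net cost of a qualifying investment after an investment tax credit.

    credit_rate=0.35 matches the AMIC rate cited in the letter; the
    credit offsets that fraction of the upfront capital expenditure.
    """
    return capex * (1 - credit_rate)

# A hypothetical $10B data-center buildout, if it qualified for the AMIC,
# would carry a net cost of $6.5B after the credit:
print(f"${effective_capex(10_000_000_000) / 1e9:.1f}B")
```

This is the sense in which broadening AMIC coverage "lowers the effective cost of capital": the same project requires roughly a third less net investment, which is what Lehane argues would de-risk early buildouts.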

OpenAI’s letter also called for the government to accelerate the permitting and environmental review process for these projects, and to create a strategic reserve of raw materials — such as copper, aluminum, and processed rare earth minerals — needed to build AI infrastructure.

The company first published its letter on October 27, but it didn’t get much press attention until this week, when comments by OpenAI executives prompted broader discussion about what the company wants from the Trump administration.

At a Wall Street Journal event on Wednesday, CFO Sarah Friar said the government should “backstop” OpenAI’s infrastructure loans, though she later posted on LinkedIn that she misspoke: “OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word ‘backstop’ and it muddied the point.”

Keep reading

Unbanked In A Connected World

Financial exclusion remains high in many parts of the world. In several countries, more than two out of three adults are unbanked, yet the majority own a mobile phone. This contrast between connectivity and financial access highlights both the persistent gaps in global inclusion and the massive opportunity to close them.

Created in partnership with Plasma, this graphic, via Visual Capitalist’s Jenna Ross, shows how ownership of financial accounts and mobile phones compares across countries. It’s part of our Money 2.0 series, where we highlight how finance is evolving into its next era.

The Unbanked Gap

In low- and middle-income economies, 84% of adults own a mobile phone, while 75% of people have financial accounts. This gap is much wider in some countries, especially in Africa and the Middle East.

For the most unbanked countries worldwide, here are the percentages of adults who own a financial account and those who own a mobile phone.

Keep reading

US Army Prepares Million Drone Acquisition To Secure Domain Dominance On Modern Battlefield

Nearly four months after U.S. Defense Secretary Pete Hegseth announced sweeping reforms aimed at achieving “drone domain dominance” by 2027, including a Pentagon-wide procurement overhaul led by DOGE, Reuters reports that the U.S. Army is preparing to acquire at least one million drones over the next few years, marking one of the largest drone procurement cycles in the military service’s history.

Learning from the modern battlefield in Ukraine, the Army plans a massive ramp-up in drones: purchasing at least a million drones over the next 2-3 years, with potential purchases of half a million or more per year thereafter.

This is a significant jump from today’s procurement cycle of 50,000 drones per year, and comes as Russia and China have ramped up production of their own drones.

U.S. Army Secretary Daniel Driscoll told Reuters that this new drone acquisition plan is a “big lift. But it is a lift we’re very capable of doing.” 

Here’s more from the report:

He spoke by phone during a visit to Picatinny Arsenal, where he described learning about experimentation with “net rounds,” defenses that capture a drone in nets, as well as new explosives and electromagnetic tools synched into weapon systems.

Driscoll and Picatinny’s top commander, Major General John Reim, spoke to Reuters about how the United States was taking lessons from Russia’s war in Ukraine, which has been characterized by drone deployments on an unprecedented scale.

Tiny, inexpensive drones have proven to be one of the most potent weapons in the Russia-Ukraine war, where conventional warplanes are relatively rare because of a dense concentration of anti-aircraft systems near front lines.

Ukraine and Russia each produce roughly 4 million drones a year, but China is probably able to produce more than double that number, Driscoll said.

Driscoll said his priority is getting the United States into a position where it can produce enough drones for any future war, stimulating domestic production of everything from brushless motors and sensors to batteries and circuit boards.

. . . 

“We expect to purchase at least a million drones within the next two to three years,” Driscoll said.

President Trump’s June executive order to “unleash American drone dominance” calls for scaling up domestic production. However, the challenge lies in the fact that supply chains for critical components, such as brushless motors, sensors, batteries, and chips, remain concentrated in China and other Southeast Asian countries.

“Drones are the future of warfare and America will come from behind to lead the way,” Sequoia partner Shaun Maguire stated over the summer on X.

Keep reading

How Americans Want AI To Support Them

Three years after the release of ChatGPT in November 2022, there’s little remaining doubt that artificial intelligence (AI) will change our lives in many ways.

In some ways, it already has.

For example, people are searching the web differently, often relying on AI summaries instead of scrolling through and clicking on search results. That is, if they even use a search engine anymore and don’t just ask a large language model like ChatGPT instead.

The potential for AI tools to make our everyday lives a little easier here and there is virtually limitless, but what do people actually want AI to help them with?

Statista’s Felix Richter reports that, according to a recent survey by Statista Consumer Insights, 3 in 10 Americans want AI to act as a personal assistant to them, something it is already capable of doing.

Keep reading

Researchers: AI Safety Tests May Be ‘Irrelevant or Even Misleading’ Due to Weaknesses

Experts have discovered weaknesses in hundreds of benchmarks used to evaluate the safety and effectiveness of AI models being released into the world, according to a recent study.

The Guardian reports that a team of computer scientists from the British government’s AI Security Institute and experts from universities such as Stanford, Berkeley, and Oxford have analyzed more than 440 benchmarks that serve as a crucial safety net for new AI models. The study, led by Andrew Bean, a researcher at the Oxford Internet Institute, found that nearly all the benchmarks examined had weaknesses in at least one area, potentially undermining the validity of the resulting claims.

The findings come amidst growing concerns over the safety and effectiveness of AI models being rapidly released by competing technology companies. In the absence of nationwide AI regulation in the UK and US, these benchmarks play a vital role in assessing whether new AIs are safe, align with human interests, and achieve their claimed capabilities in reasoning, mathematics, and coding.

However, the study revealed that the resulting scores from these benchmarks might be “irrelevant or even misleading.” The researchers discovered that only a small minority of the benchmarks used uncertainty estimates or statistical tests to demonstrate the likelihood of accuracy. Furthermore, in cases where benchmarks aimed to evaluate an AI’s characteristics, such as its “harmlessness,” the definition of the concept being examined was often contested or ill-defined, reducing the benchmark’s usefulness.
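To illustrate what the researchers mean by an uncertainty estimate, here is a minimal sketch (mine, not from the study) of the kind of confidence interval a benchmark score could report, using a standard normal approximation for a proportion. The 180-out-of-200 example numbers are hypothetical.

```python
import math

def accuracy_with_ci(correct: int, total: int, z: float = 1.96):
    """Benchmark accuracy with a 95% normal-approximation confidence interval.

    The study's complaint is that most benchmarks report only the point
    estimate (the first value) and omit any interval or statistical test.
    """
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# A model scoring 180/200 on a hypothetical benchmark:
score, low, high = accuracy_with_ci(180, 200)
print(f"accuracy {score:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

The point of reporting the interval is that two models whose scores differ by less than its width may not be meaningfully different, which is exactly the kind of claim the researchers say these benchmarks cannot currently support.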

The investigation into these tests has been prompted by recent incidents involving AI models contributing to various harms, ranging from character defamation to suicide. Google recently withdrew one of its latest AIs, Gemma, after it fabricated unfounded allegations of sexual assault against Sen. Marsha Blackburn (R-TN), including fake links to news stories.

In another incident, Character.ai, a popular chatbot startup, banned teenagers from engaging in open-ended conversations with its AI chatbots following a series of controversies. These included a 14-year-old in Florida who took his own life after becoming obsessed with an AI-powered chatbot that his mother claimed had manipulated him, and a US lawsuit from the family of a teenager who claimed a chatbot manipulated him to self-harm and encouraged him to murder his parents.

Keep reading