Several researchers have pointed to a drug deal that took place in 1971 or 1972 as the first online transaction made on the internet. As the legend goes, students with Arpanet accounts at Stanford University’s Artificial Intelligence Laboratory used the network to sell cannabis to fellow students at the Massachusetts Institute of Technology (MIT).
The historic event was detailed in two books: John Markoff’s “What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry,” released in 2005, and Jamie Bartlett’s more recent “The Dark Net.”
In “What the Dormouse Said,” Markoff writes: “In 1971 or 1972, Stanford students using Arpanet accounts at Stanford University’s Artificial Intelligence Laboratory engaged in a commercial transaction with their counterparts at Massachusetts Institute of Technology. Before Amazon, before eBay, the seminal act of e-commerce was a drug deal. The students used the network to quietly arrange the sale of an undetermined amount of marijuana.”
Bartlett gives a nearly identical account in “The Dark Net,” which examines the online drug marketplaces that have made headlines in recent years.
The Silk Road, which launched in 2011, was the first truly anonymous online marketplace, and it quickly became a target for politicians and law enforcement because of the large volume of drugs that were being sold through the site. On the Silk Road, drug users and vendors were able to trade anonymously using Bitcoin, making it one of the first major commerce platforms to adopt the cryptocurrency. The website’s alleged creator, Ross Ulbricht, is currently serving a double life sentence with no possibility of parole for operating the online marketplace.
One important point that was largely overlooked by the media during the Ulbricht trial is that the Silk Road arguably made the world a safer place by undermining prohibition. Even though drugs are illegal, large numbers of people still use them regularly, and prohibition often pushes those users into dangerous situations.
On a searingly hot afternoon at a campus-like new science park in Beer Sheva, southern Israel, I watched as a group of bright, geeky teenagers presented their graduation projects. Parents and uniformed army personnel milled around a windowless room packed with tables holding laptops, phones and other gadgets. There was excited chatter and a pungent smell of adolescent sweat.
This was a recent graduation ceremony for Magshimim (which roughly translates as “fulfilment”), the three-year after-school programme for 16- to 18-year-old students with exceptional computer coding and hacking skills. Magshimim serves as a feeder system for potential recruits to Unit 8200, the Israeli military’s legendary high-tech spy agency, considered by intelligence analysts to be one of the most formidable of its kind in the world.
Unit 8200, or shmone matayim as it’s called in Hebrew, is the equivalent of America’s National Security Agency and the largest single military unit in the Israel Defence Forces. It is also an elite institution whose graduates, after leaving service, can parlay their cutting-edge snooping and hacking skills into jobs in Israel, Silicon Valley or Boston’s high-tech corridor. The authors of Start-up Nation, the seminal 2009 book about Israel’s start-up culture, described 8200 and the Israeli military’s other elite units as “the nation’s equivalent of Harvard, Princeton and Yale”.
From self-driving cars to digital assistants, artificial intelligence (AI) is fast becoming an integral technology in our lives today. But the same technology that can make our day-to-day life easier is also being incorporated into weapons for use in combat situations.
Weaponised AI features heavily in the security strategies of the US, China and Russia, and some existing weapons systems already include autonomous capabilities based on AI. Developing weaponised AI further means machines could potentially make decisions to harm and kill people based on their programming, without human intervention.
Countries that back the use of AI weapons claim it allows them to respond to emerging threats at greater than human speed. They also say it reduces the risk to military personnel and increases the ability to hit targets with greater precision. But outsourcing use-of-force decisions to machines violates human dignity, and it is incompatible with international law, which requires human judgement in context.
Indeed, the role that humans should play in use-of-force decisions has become a growing area of focus in many United Nations (UN) meetings. At a recent UN meeting, states agreed that it is unacceptable on ethical and legal grounds to delegate use-of-force decisions to machines “without any human control whatsoever”.
But while this may sound like good news, major differences remain in how states define “human control”.
If successful, a whole new universe will be revealed, rewriting not only the physics books but the philosophy books too.
It is even possible that gravity from our own universe may “leak” into this parallel universe, researchers at the LHC say. The experiment is sure to inflame alarmist critics of the LHC, many of whom initially warned that the high-energy particle collider would spell the end of our universe by creating a black hole of its own. But so far Geneva remains intact and securely outside the event horizon.
SpaceX and the Pentagon just signed a contract to jointly develop a new rocket that can launch into space and deliver up to 80 tons of cargo and weaponry anywhere in the world — in just one hour.
Tests on the rocket are expected to begin as early as next year, Business Insider reports. It’s expected to shuttle weapons around the world 15 times faster than existing aircraft, like the US C-17 Globemaster.
“Think about moving the equivalent of a C-17 payload anywhere on the globe in less than an hour,” General Stephen Lyons, head of US Transportation Command, said at a conference on Wednesday.
An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm — transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.
The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize “artificial intelligence and network analysis to map discussion of the president’s claims on social media,” and then attempt to “intervene” by “identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president.”
Social media guru Curtis Hougland is heading up Defeat Disinfo, and he said he received the funding from DARPA when his work was “part of an effort to combat extremism overseas.”
A top government watchdog group obtained 165 pages of records from the Food and Drug Administration (FDA) revealing that the FDA under the Obama administration paid $96,370 for “fresh and never frozen” tissue from first- and second-trimester aborted fetuses to use in creating “humanized mice” for ongoing research.
The purchases included eight contracts signed “between 2012 and 2018” with the non-profit group Advanced Bioscience Resources (ABR) to acquire the baby body parts.
ABR was the subject of numerous past criminal referrals issued by House and Senate committees, stated the watchdog. Lawmakers are also investigating whether Planned Parenthood “or any other entity was illegally profiting from the handling of fetal tissue from aborted babies,” the press release stated.
A new technology being used in shops in Thailand and other countries to enforce coronavirus restrictions scans customers’ faces to check whether they are wearing a mask, and refuses entry to anyone who isn’t.
“My local shops in Thailand. In 2 seconds scans my temperature and to see if wearing mask. Doors don’t open if not. 3 cases in 100+ days here. Removes awkward mask arguments for staff as well,” tweeted Niall Harbison.
The system resembles something you’d see as part of airport security. The customer walks up to a screen which displays the words ‘Face Detector’. The system then scans to check if the customer is wearing a mask before allowing them to pass through a barrier.
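The gate described above reduces to a simple check: scan, then open the barrier only if both conditions pass. A minimal sketch of that logic is below; the function name and the 37.5 °C fever threshold are assumptions for illustration, not details of the actual Thai system, whose camera and sensor interfaces are not public.

```python
FEVER_THRESHOLD_C = 37.5  # assumed cutoff; the real device's threshold is unknown

def should_open_door(mask_detected: bool, temperature_c: float) -> bool:
    """Open the barrier only if a mask is detected and no fever is read.

    `mask_detected` would come from the camera's face/mask classifier and
    `temperature_c` from the thermal sensor in a real deployment.
    """
    return mask_detected and temperature_c < FEVER_THRESHOLD_C
```

In this sketch the door stays closed if either check fails, matching the behaviour Harbison describes: no mask, no entry.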