Meta-Facebook Manager of Community Development Staffer Caught In Hotel Child Sex Sting Operation

A Manager of Community Development at Meta/Facebook was busted in a Chris Hansen-style pedophile sting operation by independent journalists.

Andy Ngo reported,

Exclusive: High-level Facebook staffer allegedly caught in amateur child sex sting operation

A live stream video posted on YouTube on Feb. 16 by “Predator Catchers Indianapolis” purports to show Meta/Facebook Manager of Community Development Jeren Andrew Miles, of Palm Springs, Calif., being caught in a child sex sting in Columbus, Ohio.

Miles, 35, allegedly exchanged sexually explicit texts with a person who said they were a 13-year-old boy. He allegedly made plans for the boy to meet him at Le Meridien Columbus hotel, which is how and where the “Predator Catchers” group interviewed him.

Miles serves on the board of directors for the LGBTQ+ group Equality California. He has since deleted his LinkedIn, Twitter, Instagram, and Facebook accounts. Miles previously worked as the Director of Community Affairs for Lyft, according to an archived copy of his LinkedIn profile.

Keep reading

DeepMind Has Trained an AI to Control Nuclear Fusion

The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.

That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.

Keep reading

YouTube CEO wants governments to pass laws to “have more control over online speech”

Susan Wojcicki, the CEO of YouTube, has acknowledged that the platform’s policy of censoring legal content that it deems to be “harmful” is controversial and urged governments to step in and pass stronger speech laws.

Wojcicki made the comments in an interview with the Hamburg-based independent broadcaster TIDETVhamburg, where she was asked how the platform navigates the “minefield” of complying with national laws while also keeping advertisers happy and users interested.

“We work around the globe, and you’re right, certainly there are many different laws and many different jurisdictions, and we…enforce the laws of the various jurisdictions around speech or what’s considered safe or not safe,” Wojcicki said.

She added that there’s been little controversy when YouTube enforces these laws. Instead, the controversy arises when YouTube decides to censor speech that it deems to be “harmful” but “not illegal,” particularly in relation to COVID.

“What has been the controversial part is when there is content that would be deemed as harmful but yet is not illegal,” Wojcicki said. “An example of that, for example, would be COVID. I’m not aware of there being laws by governments saying around COVID in terms of not being able to debate the efficacy of masks or where the virus came from or the right treatment or proposal but yet there was a lot of pressure and concern about us distributing misinformation that went against what was the standard and accepted medical knowledge. And so this category of harmful but…legal has been, I think, where most of the discussion has been.”

Despite admitting that the way YouTube censors legal content is controversial, Wojcicki pushed for stronger speech laws that would accommodate this censorship.

“Our recommendation, if governments want to have more control over online speech is to pass laws to have that be very cleanly and clearly defined so that we can implement it,” Wojcicki said.

Keep reading

Google pulls ads on meteorologist tracking climate for ‘unreliable and harmful claims’

A meteorologist who worked on climate studies at NASA says satellite observations show the Earth is warming at the “bottom end” of climate models – but Google has demonetised his website.

Dr Roy Spencer said Google was “not specific about what claims” led to the decision.

“What their website tells me is that all of my whole website basically is unreliable and harmful claims,” he said.

“If people like myself don’t buy into the narrative that global warming is not only occurring but it’s going on at a catastrophic rate and we have to do something about it, if you don’t go along with that narrative, you get thrown under the bus.”

Keep reading

Here Are All The Times The Biden Administration Has Called For Tech Censorship

The Biden administration has frequently urged social media and technology companies to censor users, stressing the perceived dangers of “misinformation.”

White House press secretary Jen Psaki drew criticism after she said Tuesday that Spotify and other social media companies need to be “doing more” to stop the spread of “mis- and disinformation” on their platforms, echoing calls made in July for Facebook to remove posts containing vaccine misinformation. Psaki’s comments were in response to a question regarding backlash toward podcast host Joe Rogan, with whom Spotify has an exclusive contract, for hosting two guests who were skeptical of the COVID-19 vaccine’s safety and efficacy.

“Our hope is that all major tech platforms and all major news sources for that matter be responsible and be vigilant,” Psaki said.

Keep reading

Google and Yelp remove business reviews that criticize vaccine passport rules

Google has explained how it keeps user reviews on Google Maps free from “abuse” in a new blog post that also mentions moderating comments from people criticizing businesses for forcing masks or vaccine passports.

Meanwhile, Yelp also admitted removing tens of thousands of reviews in 2021 for violating its policies and similarly removed reviews that mentioned vaccine passports.

Sites that allow user reviews such as Yelp and Google Maps have updated their moderation policies because some businesses have been hit with bad reviews for implementing Covid-related measures.

Keep reading

YouTube’s new censorship tactic is to limit streams that are too popular

YouTube has a brand new censorship tactic that appears to be affecting small creators – and one that, on the face of it, makes no sense.

Several livestreams posted on Google’s platform last weekend by truckers protesting in Canada have had their audience limited. The reason given to viewers trying to access the videos? They were too popular.

“Video unavailable: This video is popular! Due to limited creator history, we’re limiting the number of viewers,” YouTube’s message reads, and then advises visitors to subscribe to the channel “to help this creator reach a broader audience.”

No word, though, on whether that would make the videos available to that broader audience, or if YouTube would come up with yet more “censorship gymnastics” while trying to suppress content it might not approve of.

Some incredulous Twitter users reacted by saying they were waiting for confirmation that the message was “a thing and not a meme.”

However, it definitely is a thing – and it’s not hard to see how YouTube would disapprove of the particular streams from the Ottawa protests, since they were organized by truckers opposed to Covid vaccine mandates. Reports mention that the giant platform limited viewership of at least two creators both livestreaming from the “Trucker Freedom Convoy” events.

Keep reading

Graphika: The Deep State’s Beard for Controlling the Information Age

Graphika is the toast of the town. The private social-media and tech-intelligence agency that tracks down bots and exposes foreign influence operations online is constantly quoted, referenced and profiled in the nation’s most important outlets. For example, in 2020, The New York Times published a fawning profile of the company’s head of investigations, Ben Nimmo. “He Combs the Web for Russian Bots. That Makes Him a Target,” ran its headline, the article presenting him as a crusader risking his life to keep our internet safe and free. Last year, business magazine Fast Company labeled Graphika as among the 10 most innovative companies in the world.

There is no doubt that Graphika leans into this cool and dynamic corporate image. From its beginnings in 2013, the company has expanded to employ dozens of people at its trendy Manhattan office. Describing itself as “cartographers of the internet age,” the company puts out investigation after investigation about foreign influence operations online, especially concentrating on Russian, Chinese, or Iranian attempts to manipulate social media. A layperson could certainly be blinded by its science and impressed by the complex and innovative graphs and charts. Yet when it comes to similar but far larger U.S. government programs, the intelligence and analysis agency is silent.

Keep reading

Censorship By Algorithm Does Far More Damage Than Conventional Censorship

People make a big deal any time a controversial famous person gets removed from a major social media platform, and rightly so; we cannot allow such brazen acts of censorship to become normalized. The goal is to normalize internet censorship on every front, and the powerful will push for that normalization to be expanded at every opportunity. Whether you dislike the controversial figure being deplatformed on a given day is entirely irrelevant; it’s not about them, it’s about expanding and normalizing internet censorship protocols on monopolistic government-tied speech platforms.

But far, far more consequential than overt censorship of individuals is censorship by algorithm. No individual being silenced does as much real-world damage to free expression and free thought as the way ideas and information which aren’t authorized by the powerful are being actively hidden from public view, while material which serves the interests of the powerful is the first thing they see in their search results. It ensures that public consciousness remains chained to the establishment narrative matrix.

It doesn’t matter that you have free speech if nobody ever hears you speak. Even in the most overtly totalitarian regimes on earth you can say whatever you want alone in a soundproof room.

That’s the biggest loophole the so-called free democracies of the western world have found in their quest to regulate online speech. By allowing these monopolistic megacorporations to become the sources everyone goes to for information (and even actively helping them along that path as in for example Google’s research grants from the CIA and NSA), it’s possible to tweak algorithms in such a way that dissident information exists online, but nobody ever sees it.

Keep reading

Instagram says it will cut the reach of posts that are “likely” to contain “hate speech”

Instagram is introducing more vaguely defined restrictions on its users, this time acting “proactively” to demote Feed posts and Stories that “may” contain bullying or hate speech, or that “may” encourage violence – as well as content that is “potentially upsetting.”

In a blog post, Facebook’s platform said that this means the already existing policy of reducing the reach of posts determined to contain misinformation by third-party “fact-checkers” – and all posts from accounts that are said to have shared misinformation “repeatedly” – is being expanded.

It is Instagram’s “systems” that will be tasked with deciding what “may,” is “likely” to, or “potentially” contains hate speech or represents bullying. The blog post explains that these algorithms will make the call by comparing captions: if a caption is similar to one that was already found to violate the platform’s rules, the post will be pushed down in Feeds and Stories.
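Instagram has not published the mechanics of this comparison, but the idea of demoting a post whose caption closely resembles a previously flagged one can be sketched with a simple text-similarity check. Everything here – the Jaccard measure, the whitespace tokenizer, the 0.6 threshold, and the flagged-caption list – is an illustrative assumption, not Instagram's actual system:

```python
def tokenize(caption: str) -> set:
    """Crude tokenizer: lowercase, split on whitespace."""
    return set(caption.lower().split())

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: overlap of two token sets, 0.0-1.0."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def should_demote(caption: str, flagged_captions: list, threshold: float = 0.6) -> bool:
    """Demote a post if its caption is sufficiently similar to any
    caption previously found to violate the rules."""
    tokens = tokenize(caption)
    return any(jaccard(tokens, tokenize(f)) >= threshold
               for f in flagged_captions)

flagged = ["buy followers cheap fast"]
print(should_demote("buy cheap followers fast now", flagged))  # True (4/5 token overlap)
print(should_demote("sunset at the beach", flagged))           # False (no overlap)
```

A production system would presumably use learned text embeddings rather than token overlap, but the shape of the decision – new caption in, similarity against a corpus of violating captions, demote above a threshold – is the same.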

Instagram also said that the new policy, which smacks of shadow-banning, affects individual posts rather than accounts themselves, and that posts Instagram actually believes break its rules, rather than merely suspects of doing so, will be removed, as before.

Keep reading