Inside the very sinister world of Lil Tay: Influencer and rapper faced fury over vulgar N-word videos at age NINE – before mysteriously VANISHING amid claims she’d been ABUSED by her dad and ‘exploited’ by brother Jason Tian

14-year-old influencer Lil Tay was involved in a bizarre fake death scandal this week, a wild twist in an already tragic story filled with rumors of exploitation and allegations of abuse.

Lil Tay, whose real name is Tay Tian, skyrocketed into the spotlight when she was just nine years old, after she began posting a series of vulgar videos online that showed her using the N-word and getting into fights with strangers. 

She was hailed as one of the internet’s biggest rising stars in 2017, and quickly racked up millions of followers – but her bizarre clips left some people on the web concerned for her wellbeing. 

Soon, rumors began to swirl that her family was forcing her to appear in the videos. Those rumors were only fueled when footage seemingly showing her then-16-year-old brother, Jason Tian, coaching her on what to say made its way around the web. 

Lil Tay sparked more concern in July 2018, when she shared a chilling message asking for help to her Instagram Stories, only to suddenly erase all of her videos hours later.

Four months later, an anonymous person started sharing harrowing posts to her account in which they claimed that her ‘abusive’ dad, Chris Hope, was trying to take her away from her mother. 

Chris denied the allegations, claiming that he was actually protecting her. He insisted that he was trying to ‘stop things that he felt were dangerous to her physical and mental health and to her future’ and separate her from people who were motivated by ‘the possibility of making money off of her.’

Then, three years later, her brother issued a desperate plea for help through a GoFundMe page, in which he claimed that Lil Tay was ‘fighting for her life, future and freedom’ after being ‘physically and mentally’ abused by her ‘absentee’ father, who wanted to ‘gain control’ of her and her earnings.

He alleged that Chris had stolen millions of dollars from his sister, took control of her career, forced her into silence, and was now trying to gain full custody over her.

Then, on Wednesday, someone shared a post to Lil Tay’s Instagram claiming that both she and Jason had passed away, explaining that the ‘circumstances surrounding’ their deaths were ‘still under investigation.’

Hours later, however, Lil Tay told TMZ that she was still alive and that her account had been ‘hacked.’ 

As the young star is shrouded in controversy once again, FEMAIL has rounded up everything we know about her: from her scandalous rise to fame, to why she suddenly disappeared from the spotlight five years ago, to what she has said about her father, and what we know about her recent fake death.

Keep reading

Disinformation And Censorship, 1984–2023

Orwell, again. 1984 seems written for the Biden era. Underlying it all is the concept of disinformation, the root of propaganda and mind control. So it is in 2023. Just ask FBI Director Chris Wray. Or Facebook.

George Orwell’s novel explores the concept of disinformation and its role in controlling and manipulating society. Orwell presents a dystopian future where a totalitarian regime, led by the Party and its figurehead Big Brother, exerts complete control over its citizens’ lives, including their thinking. The Party employs a variety of techniques to disseminate disinformation and maintain its power. One of the most prominent examples is the concept of “Newspeak,” a language designed to restrict and manipulate thought by reducing the range of expressible ideas. Newspeak aims to replace words and concepts that could challenge or criticize the Party’s ideology, effectively controlling the way people think and communicate (in our own time and place, think of “unhoused,” “misspoke,” LGBTQIAXYZ+, “nationalist,” “terrorist”).

Orwell also introduces the concept of doublethink, which refers to the ability to hold two contradictory beliefs simultaneously and to accept them both as true. This psychological manipulation technique allows the Party to control the minds of its citizens and make them believe in false information or embrace contradictory ideas without questioning (think mandating masks that do not prevent disease transmission). The Party in 1984 alters historical records and disseminates false information through the Ministry of Truth. This manipulation of historical events and facts aims to control the collective memory of the society in a post-truth era, ensuring that the Party’s version of reality remains unquestioned (think war in Ukraine, Iraq, El Salvador, Vietnam, all to protect our freedom at home).

Through these portrayals, Orwell highlights the dangers of disinformation and its potential to distort truth, manipulate public opinion, and maintain oppressive systems of power. The novel serves as a warning about the importance of critical thinking, independent thought, and the preservation of objective truth in the face of disinformation and propaganda.

Disinformation is bad. But replacing disinformation with censorship, or with other disinformation, is worse. 

1984 closed down the marketplace of ideas. So it is in 2023.

Keep reading

Democrat State Attorneys General File Brief In Support of Biden Censorship Power

In a move that underscores the unceasing tension between free speech and the control of information online, 20 Democratic state attorneys general have filed a brief in federal court seeking to restore their ability to urge social media companies to censor user content.

Headlined by New York Attorney General Letitia James, the collective is adamant that federal court decisions are hindering their capability to prevent the circulation of misleading information.

On July 4, US District Judge Terry Doughty issued a directive that greatly restrains government officials’ influence over social media moderation, finding that the evidence presented showed possible First Amendment violations.

Stemming from a lawsuit filed in May 2022 by the Republican attorneys general of Louisiana and Missouri, the ruling found that the Biden administration had unjustly pressured social media platforms into suppressing posts perceived as potential triggers of vaccine hesitancy during the COVID-19 crisis or as destabilizers of electoral processes.

This pursuit of moderation by government officials, the suit asserted, unjustly infringed upon the First Amendment right to free speech. The assertions were aimed in particular at tech conglomerates like Meta’s Facebook and Google’s YouTube, which are accused of beginning to limit the spread of information deemed misleading around 2019.

The order is currently suspended pending an appeal by the Biden administration. Should the 5th Circuit reinstate it, it would bar government departments, including the Department of Health and Human Services and the Federal Bureau of Investigation, from communicating with social media companies about removing or suppressing content protected as free speech under the First Amendment.

Keep reading

General, West Point Professor Ran Shadow Investigation to Hunt Down and Silence Military Whistleblower for Mean Tweets

An Army three-star general and a West Point associate professor used government resources in an unofficial investigation to hunt down and punish an anonymous active-duty whistleblower who criticized Army leaders and the Biden administration on social media, according to private emails and text messages obtained exclusively by Breitbart News.

Army Training and Doctrine Command Deputy Commander Lt. Gen. Maria Gervais and Army Maj. Jessica Dawson — who is also an “information warfare research scientist” at the Army Cyber Institute — used their official authority and access to government resources to track down the whistleblower and get him identified publicly and punished by his chain of command.

Despite the lack of evidence, they repeatedly accused the whistleblower of being a “counterintelligence” and “insider” threat, in a seeming effort to trigger action by the Army Criminal Investigation Division (CID) — an independent federal law enforcement agency with expansive powers designed to investigate serious felonies.

Pat Wier, a civilian defense attorney and Navy reservist, said a CID investigation would require an assumption or designation of a serious threat, and called Gervais and Dawson’s trumped-up accusations over the exercise of free speech rights “wrongful.”

“His alleged actions did not rise to the level of a serious crime, or any crime at all,” he said.

Rather, it appeared to be an attempt by rogue military officials to use the levers of government to punish political dissent.

Keep reading

This Disinformation Is Just for You

It’s now well understood that generative AI will increase the spread of disinformation on the internet. From deepfakes to fake news articles to bots, AI will generate not only more disinformation, but more convincing disinformation. But what people are only starting to understand is how disinformation will become more targeted and better able to engage with people and sway their opinions.

When Russia tried to influence the 2016 US presidential election via the now disbanded Internet Research Agency, the operation was run by humans who often had little cultural fluency or even fluency in the English language and so were not always able to relate to the groups they were targeting. With generative AI tools, those waging disinformation campaigns will be able to finely tune their approach by profiling individuals and groups. These operatives can produce content that seems legitimate and relatable to the people on the other end and even target individuals with personalized disinformation based on data they’ve collected. Generative AI will also make it much easier to produce disinformation and will thus increase the amount of disinformation that’s freely flowing on the internet, experts say.

“Generative AI lowers the financial barrier for creating content that’s tailored to certain audiences,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington. “You can tailor it to audiences and make sure the narrative hits on the values and beliefs of those audiences, as well as the strategic part of the narrative.”

Rather than producing just a handful of articles a day, Starbird adds, “You can actually write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.”

Considering how much content people post to social media and other platforms, it’s very easy to collect data to build a disinformation campaign. Once operatives are able to profile different groups of people throughout a country, they can teach the generative AI system they’re using to create content that manipulates those targets in highly sophisticated ways.

“You’re going to see that capacity to fine-tune. You’re going to see that precision increase. You’re going to see the relevancy increase,” says Renée DiResta, the technical research manager at the Stanford Internet Observatory.

Keep reading

Senators Introduce Bipartisan Bill to Regulate Online Speech

Senators Elizabeth Warren (D-MA) and Lindsey Graham (R-SC) have introduced a bill to create a new federal government commission overseeing online communication. The legislation is presented as consumer protection but grants the government new authority to police speech on the internet. 

“For too long, giant tech companies have exploited consumers’ data, invaded Americans’ privacy, threatened our national security, and stomped out competition in our economy,” said Warren. “This bipartisan bill would create a new tech regulator and makes clear that reining in Big Tech platforms is a top priority on both sides of the aisle.”

“For years, I have been trying to find ways to empower consumers against Big Tech,” Graham claimed. “I have heard too many stories from families who feel helpless in the face of Big Tech. … The creation of a regulatory commission to oversee Big Tech is the first step in a long journey to protect American consumers from the massive power these companies currently wield.”

The bill will establish a Digital Consumer Protection Commission that will designate some websites as “dominant platforms.” It appears those sites will be in the crosshairs of the new commission, as the legislation targets companies that restructure “to intentionally avoid having the platform meet the qualifications for designation as a dominant platform.”

The “dominant platforms” will be required to inform the government of their content moderation policies. The bill will require designated companies to “make publicly available, through clear and conspicuous disclosure, the dominant platform’s terms of service, which shall include the criteria the operator employs in content moderation practices.”

Keep reading

New Files Show Biden Admin Forced Facebook To Censor “True Information” On Vaccine Side Effects

Chairman of the House Judiciary Committee Jim Jordan released a second batch of files Friday showing that the Biden administration was forcing Facebook to censor content relating to COVID vaccines, including what employees there described as “true information”.

Jordan noted that Facebook knew it was removing “humorous or satirical content that suggests the vaccine isn’t safe,” as well as “true information about the side effects.” 

Jordan also noted that Facebook employees were annoyed at being made to take down content they knew was accurate, describing the administration’s definition of misinformation as “completely unclear” and noting that the White House was using “untested assumptions” to demand censorship.

“It also just seems like when the vaccination campaign isn’t going as hoped, it’s convenient for them to blame us,” one employee noted.

Another agreed, responding “This seems like a political battle that’s not fully grounded in facts, and it’s frustrating.”

Keep reading

Facebook Felt “Pressure” From “Outraged” Biden White House To Remove Posts

Calling somebody a member of the “Disinformation Dozen” – there’s a nasty piece of propaganda work, trying to discredit political opponents in a time of crisis.

Worse still, it wasn’t happening out in the open, on actual social networks, but behind the curtains of government/Big Tech collusion.

And now – with the next US election already looming large – it’s probably prudent, if your party or political affiliation was on the wrong end of the stick last time, to start fighting against a repeat of this “early, and often.”

The term “Disinformation Dozen” refers to the theory (a conspiracy theory, to all intents and purposes) that there were only 12 people in the world (… right?) behind Covid vaccine criticism back in 2021.

And what a difference, strictly scientifically speaking, two years make…

But now, it’s time to face the “political piper,” as it were. Documents have emerged showing that Facebook at the time played along with the “Disinformation Dozen” game, covering not only original posts but also memes, and went to the no doubt significant trouble of changing its entire policy to suit the narrative, under pressure from Biden’s Surgeon General.

But as always, eventually, what does a Big Tech giant do? Appease the “permanent power center” supported or not by some current White House.

And so, we are now learning that Meta wasn’t thrilled about it – but “felt” it had to play along and acquiesce to government pressure.

Keep reading

Sexy AI-generated influencers are silently swarming social media with fake names and backstories with one goal: Con desperate men

With the vast amount of filters and photo editing apps at users’ disposal, it can be hard to tell what’s real and fake on social media these days.

But a DailyMail.com probe has found a budding world of scantily-clad AI influencers who are conning desperate men out of money.

Earlier this week, a 19-year-old blonde bombshell known as Milla Sofia made headlines when it emerged she was artificially generated.

Since then, we’ve uncovered dozens of digital influencers on Instagram, TikTok and Twitter who often have fake names and elaborate backstories, jobs and interests. AI-generated photos show them on fake vacations. 

These rising stars – who combined have hundreds of thousands of followers – are receiving admiration and cash from real men. 

Andrea is a brown-haired beauty with more than 30,000 followers on Twitter/X, where users comment on her lewd photos. She includes her PayPal account details in her bio and offers nude images to subscribers on Patreon.

Andrea’s Patreon offers plan options to chat with her, and for $300 a month, she will ‘basically be your online girlfriend.’ 

The human creators of these AI influencers remain unknown faces on the web, pushing out content depicting a life they had likely only dreamed of. 

Milla Sofia is an aspiring fashion model with a portfolio of portraits showing her tanning in Bora Bora, taking in the views in Greece and working in a corporate office.

She has made headlines this week after Futurism uncovered her existence online. 

Some AI-generated personas, like Miquela Sousa, have been revealed to be marketing stunts that turned into gold mines.

Miquela starred in advertising campaigns for major brands such as Prada and Balenciaga, was interviewed by Vogue, and was named one of Time magazine’s 25 most influential people on the internet.

The forever 19-year-old is the brainchild of Trevor McFedries and Sara DeCou, founders of the fashion brand BRUD, who unleashed the AI teen in 2019.

At first, her creators liked to tease her followers and capitalize on the confusion. ‘Is she human?’ asked one perturbed Instagram user. ‘Why do you look like a doll?’ demanded another.

‘She’s some kind of cute mannequin,’ someone claimed. ‘It’s clearly a robot,’ stated another.

Eventually, fans were told the truth, or part of it at least. ‘I’m not a human being,’ Miquela confessed on her page. She said her ‘hands’ were shaking as she ‘wrote’ the post. 

Keep reading

Stanford researchers find Mastodon has a massive child abuse material problem

Mastodon, the decentralized network viewed as a viable alternative to Twitter, is rife with child sexual abuse material (CSAM), according to a new study from Stanford’s Internet Observatory (via The Washington Post). In just two days, researchers found 112 instances of known CSAM across 325,000 posts on the platform — with the first instance showing up after just five minutes of searching.

To conduct its research, the Internet Observatory scanned the 25 most popular Mastodon instances for CSAM. Researchers also employed Google’s SafeSearch API to identify explicit images, along with PhotoDNA, a tool that helps find flagged CSAM. During its search, the team found 554 pieces of content that matched hashtags or keywords often used by child sexual abuse groups online, all of which were identified as explicit in the “highest confidence” by Google SafeSearch.
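The PhotoDNA step the researchers describe is, at its core, matching images against a database of hashes of previously flagged content. A minimal Python sketch of that general idea follows; note that PhotoDNA itself is proprietary and uses a perceptual hash that survives resizing and re-encoding, whereas the cryptographic SHA-256 used here only matches byte-identical files, and the hash list is purely illustrative:

```python
import hashlib

# Hypothetical database of hex digests of previously flagged images.
# The entry below is simply the well-known SHA-256 digest of b"test".
KNOWN_FLAGGED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(data: bytes) -> str:
    """Return a hex digest identifying the image bytes."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Check an image's hash against the known-flagged database."""
    return hash_image(data) in KNOWN_FLAGGED_HASHES

print(is_flagged(b"test"))         # True: digest is in the database
print(is_flagged(b"other image"))  # False: no match
```

Because matching happens against hashes rather than the images themselves, a platform never needs to store or redistribute the flagged material; the trade-off, as noted, is that real deployments need perceptual rather than cryptographic hashing to catch re-encoded copies.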

There were also 713 uses of the top 20 CSAM-related hashtags across the Fediverse on posts that contained media, as well as 1,217 text-only posts that pointed to “off-site CSAM trading or grooming of minors.” The study notes that the open posting of CSAM is “disturbingly prevalent.”

One example referenced the extended mastodon.xyz server outage we noted earlier this month, an incident caused by CSAM posted to Mastodon. In a post about the incident, the sole maintainer of the server said they were alerted to the content, but noted that moderation is done in their spare time and can take up to a few days. This isn’t a giant operation like Meta, with a worldwide team of contractors; it’s just one person.

While they said they took action against the content in question, the host of the mastodon.xyz domain suspended it anyway, making the server inaccessible to users until the maintainer was able to reach someone to restore its listing. After the issue was resolved, mastodon.xyz’s administrator says the registrar added the domain to a “false positive” list to prevent future takedowns. However, as the researchers point out, “what caused the action was not a false positive.”

“We got more PhotoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,” David Thiel, one of the report’s researchers, said in a statement to The Washington Post. “A lot of it is just a result of what seems to be a lack of tooling that centralized social media platforms use to address child safety concerns.”

Keep reading