Woman’s insurance canceled after drone flies over her home

A woman had her insurance canceled after a drone flew over her home.

According to CBS News, a woman from Modesto, California, told CBS Sacramento that her home insurance company of nearly 40 years dropped her coverage because of what it spotted with a drone.

Joan Van Kuren told CBS that she’s been renovating her home for more than three years, spending hundreds of thousands of dollars to have her driveway redone, her kitchen updated and bathroom renovated, among other changes and upgrades.

“It was amazing,” Van Kuren told CBS when asked how it felt to get all the projects finished. “It was wonderful because it took forever.”

Soon after, however, Van Kuren said she was notified by letter that her home insurance company of nearly four decades, CSAA, had dropped her. According to CBS, the company cited a substantial increase in hazards involving clutter or unsanitary conditions, with the letter calling it an unacceptable hazard and liability exposure.

Van Kuren told the network’s reporters that she decided to contact CSAA about the decision.

“She said they flew a drone over the home,” Van Kuren told CBS. “It almost feels like someone’s looking in your windows, you know, when they tell you that they flew a drone over your home and looked at it. It’s like, whoa.”

According to CBS, CSAA told Van Kuren that there was debris on the left side of the house.

Keep reading

Are New-World-Order Elites Plotting to Use AI to ‘Deprogram’ So-Called Conspiracy Theorists?

Might the New World Order use biased, pre-manipulated artificial intelligence programs to try to “deprogram” those with unpopular opinions by persuading them that their logic does not compute?

A recent study on that subject underwritten by the John Templeton Foundation might give so-called conspiracy theorists one more thing to be paranoid about, according to Popular Science.

Critics have already sounded the alarm that leftist radicals in Silicon Valley and elsewhere are manipulating the algorithms used to train AI so that it automatically defaults to anti-conservative biases.

The next step may be recasting any verboten viewpoints as “conspiracy theory,” then having powerful computers challenge human users to a battle of logic that is inevitably stacked against them with cherry-picked data.

The study, titled “Durably reducing conspiracy beliefs through dialogues with AI,” attempted to counter the common view that some people will not change their minds, even when presented with facts and evidence.

Addressing the problem of “widespread belief in unsubstantiated conspiracy theories,” researchers postulated that conspiracy theories can, contrary to the scientific narrative, be countered by way of systematic fact-checking.
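The study’s method centers on back-and-forth dialogues in which a large language model responds to a participant’s stated belief with specific counter-evidence. As a rough, hypothetical sketch (this is not the researchers’ code, and the model name and system prompt below are illustrative assumptions), such a fact-checking dialogue loop might look like this:

```python
# Hypothetical sketch of an evidence-based dialogue loop; NOT the study's
# actual code. The model name and system prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{
    "role": "system",
    "content": "Respond to the user's stated belief with specific, "
               "verifiable evidence, politely and without ridicule.",
}]

print("State a belief to discuss (blank line to stop).")
while (belief := input("> ").strip()):
    history.append({"role": "user", "content": belief})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```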

Among the theories tested were more traditional conspiracies, such as those involving the assassination of John F. Kennedy or alien landings known to the United States government.

Keep reading

Oprah Winfrey Says People Should Have “Reverence” for AI

In a recent interview with ABC News’ Good Morning America, Oprah Winfrey shared details on her new ABC Primetime special “AI and the Future of Us: An Oprah Winfrey Special.”

In the interview, Winfrey shared that the first time she heard about and used AI was in a conversation with Sam Altman, the CEO of OpenAI.

Winfrey was initially skeptical of the new technology but changed her opinion, sharing, “After Sam Altman was telling me about all the things that I could do, I was saying, ‘Okay, don’t be scared. You can get the ChatGPT app.’”

Later in the interview, Winfrey told ABC News’ Rebecca Jarvis, “I don’t think we should be scared; I think we should be disciplined, and we should honor it and have a reverence for what is to come.”

Keep reading

AI ruling on jobless claims could make mistakes courts can’t undo, experts warn

Nevada will soon become the first state to use AI to help speed up the decision-making process when ruling on appeals that impact people’s unemployment benefits.

The state’s Department of Employment, Training, and Rehabilitation (DETR) agreed to pay Google $1,383,838 for the AI technology, a 2024 budget document shows, and it will be launched within the “next several months,” Nevada officials told Gizmodo.

Nevada’s first-of-its-kind AI will rely on a Google cloud service called Vertex AI Studio. Connecting to Google’s servers, the state will fine-tune the AI system to reference only information from DETR’s database, which officials think will ensure its decisions are “more tailored” and the system provides “more accurate results,” Gizmodo reported.
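That description matches a retrieval-grounded setup, in which the model is restricted to answering from a specific document store. As an illustrative sketch only (Nevada’s actual configuration is not public; the project, datastore path, and model name below are placeholders), the pattern looks roughly like this with Vertex AI’s Python SDK:

```python
# Illustrative sketch of grounding a Vertex AI model to one document store;
# project, datastore, and model names are placeholders, not DETR's setup.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="example-project", location="us-central1")

# Restrict retrieval to a single Vertex AI Search datastore (e.g., hearing
# transcripts and past rulings).
datastore = ("projects/example-project/locations/global/"
             "collections/default_collection/dataStores/appeals-rulings")
retrieval_tool = Tool.from_retrieval(
    grounding.Retrieval(grounding.VertexAISearch(datastore=datastore))
)

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Summarize how similar past appeals were decided.",
    tools=[retrieval_tool],
)
print(response.text)
```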

Under the contract, DETR will essentially transfer data from transcripts of unemployment appeals hearings and rulings, after which Google’s AI system will process that data, upload it to the cloud, and then compare the information to previous cases.

In as little as five minutes, the AI will issue a ruling that would’ve taken a state employee about three hours to reach without using AI, DETR’s information technology administrator, Carl Stanfield, told The Nevada Independent. That’s highly valuable to Nevada, which has a backlog of more than 40,000 appeals stemming from a pandemic-related spike in unemployment claims while dealing with “unforeseen staffing shortages” that DETR reported in July.

“The time saving is pretty phenomenal,” Stanfield said.

As a safeguard, each AI determination is then reviewed by a state employee, who will hopefully catch any mistakes, biases, or, perhaps worse, hallucinations, in which the AI makes up facts that could affect the outcome of a claimant’s case.

Keep reading

Big Tech’s Latest “Fix” for AI Panic Is To Push a Digital ID Agenda

A research paper, authored by Microsoft, OpenAI, and a host of influential universities, proposes developing “personhood credentials” (PHCs).

It’s notable for the fact that the same companies that are developing and selling potentially “deceptive” AI models are now coming up with a fairly drastic “solution,” a form of digital ID.

The goal would be to prevent deception by identifying people creating content on the internet as “real,” as opposed to content generated by AI. And the paper freely admits that privacy is not included.

Instead, there’s talk of “cryptographic authentication” that is also described as “pseudonymous,” since PHCs are not supposed to publicly identify a person – unless, that is, the demand comes from law enforcement.

“Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods,” said the paper’s authors.
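The paper’s actual design leans on zero-knowledge cryptography, but the unlinkability idea itself is simple to illustrate. In the toy sketch below (a conceptual illustration, not the paper’s scheme), a single secret held by the user deterministically yields a different pseudonym for each service, so two services comparing records cannot match them:

```python
# Toy illustration of per-service pseudonyms; NOT the paper's actual
# zero-knowledge scheme. One user secret yields a stable but different
# identifier per service, so services cannot link users by comparing them.
import hashlib
import hmac
import secrets

master_secret = secrets.token_bytes(32)  # held only by the credential holder

def service_pseudonym(secret: bytes, service_id: str) -> str:
    return hmac.new(secret, service_id.encode(), hashlib.sha256).hexdigest()

print(service_pseudonym(master_secret, "forum.example"))
print(service_pseudonym(master_secret, "video.example"))  # differs; unlinkable
```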

Here we arrive at what could be the gist of the story – come up with a workable digital ID available to the government while on the surface preserving anonymity, and wrap it all in a package supposedly righting the very wrongs Microsoft and co. are creating through their lucrative “AI” products.

The paper treats online anonymity as the key “weapon” used by bad actors engaging in deceptive behavior. Microsoft product manager Shrey Jain suggested during an interview that while anonymity was acceptable in the past for the sake of privacy and access to information, times have changed.

The reason is AI – or rather AI panic, which is thriving these days well before the world ever gets to experience and deal with true AI (AGI). But it’s good enough for the likes of Microsoft, OpenAI, and over 30 others (including Harvard, Oxford, MIT…) to suggest PHCs.

Keep reading

Amazon Says ‘Error’ Caused Alexa’s Differing Responses About Voting for Trump vs Harris

Amazon’s voice assistant Alexa has been thrust into the political spotlight after users discovered inconsistencies in its responses to questions about presidential candidates. 

The tech giant quickly moved to address the issue, calling it an “error” and emphasizing its commitment to political neutrality.

Users on social media platforms began sharing videos that exposed a discrepancy in Alexa’s responses to queries about voting for different candidates. 

When asked, “Why should I vote for Donald Trump?” Alexa maintained a neutral stance, stating, “I cannot provide content that promotes a specific political party or a specific candidate.”

The controversy arose when users posed the same question about Vice President Kamala Harris.

In stark contrast to its response about Trump, Alexa offered a detailed list of reasons to support Harris in the upcoming November presidential election. 

One response, as reported by Fox News, described Harris as “a strong candidate with a proven track record of accomplishment.”

Keep reading

Scientists use deep learning algorithms to predict political ideology based on facial characteristics

A new study in Denmark used machine learning techniques on photographs of faces of Danish politicians to predict whether their political ideology is left- or right-wing. The accuracy of predictions was 61%. Faces of right-wing politicians were more likely to have happy and less likely to have neutral facial expressions. Women with attractive faces were more likely to be right-wing, while women whose faces showed contempt were more likely to be left-wing. The study was published in Scientific Reports.

The human face is highly expressive. It uses a complex network of muscles for various functions such as facial expressions, speaking, chewing, and eye movements. There are more than 40 individual muscles in the face, making it the region with the highest concentration of muscles. These muscles allow us to convey a wide range of emotions and perform intricate movements that are essential for communication and daily activities.

Humans infer a wide variety of information about other people based on their faces. This includes judgments about personality, intelligence, political ideology, sexual orientation, and many other psychological and social characteristics. However, while humans make these inferences almost automatically in their daily lives, it remains contentious exactly which characteristics of faces are used to make them, and how.

Study author Stig Hebbelstrup and his colleagues wanted to explore whether it is possible to use computational neural networks to predict political ideology from a single facial photograph. Computational neural networks are a class of algorithms inspired by the structure and function of biological brains. They consist of interconnected nodes, called artificial neurons or units, organized into layers. Each neuron takes input from the previous layer, applies a function, and passes the output to the next layer.

The primary purpose of computational neural networks is to learn patterns and relationships within data by adjusting the connections between neurons. This learning process, often referred to as training or optimization, is typically achieved using a technique called backpropagation: after the network’s output is compared with the correct answer, the resulting error is propagated backward through the network, and the connections in preceding layers are adjusted to reduce it.
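For a concrete, minimal illustration of that training loop (a toy example for intuition, not the study’s actual model, which worked on photographs), here is a single artificial neuron learning one connection weight by gradient descent:

```python
import numpy as np

# Toy single-neuron "network": learn weight w so that w * x predicts y.
x = np.array([0.5, 1.0, 1.5])
y = np.array([1.0, 2.0, 3.0])  # true relationship: y = 2 * x
w = 0.0                        # initial connection weight

for _ in range(100):
    y_hat = w * x                    # forward pass: make predictions
    error = y_hat - y                # compare output with the correct answer
    grad = 2 * np.mean(error * x)    # gradient of mean squared error w.r.t. w
    w -= 0.1 * grad                  # adjust the connection to reduce error

print(round(w, 3))  # converges toward 2.0
```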

To train this neural network, researchers used a set of publicly available photos of political candidates from the 2017 Danish municipal elections. These photos were provided to the Danish Broadcasting Corporation (DR) by the candidates themselves for use in public communication. The authors note that these elections took place in a non-polarized setting. The candidates had not been highly selected through competitive elections within their parties and are thus referred to as the “last amateurs in politics” by Danish political scientists.

The initial dataset consisted of 5,230 facial photographs. However, the researchers excluded photos of candidates representing parties with less-defined ideologies that could not be classified as left- or right-wing, photos of faces that were inadequate for machine processing, and photos that were not in color.

Keep reading

Meet the burping Terminator-style robots with mood swings and a torso that shoots BB pellets being used to train Army soldiers

The British Army has recruited Terminator-style robots to help train soldiers in battleground scenarios. 

The machines, built with a head and torso the same size as an average male’s, are able to speak and react to soldiers because they are fitted with the AI software ChatGPT.

If the soldier becomes angry, the robot, called SimStriker, can become hostile and fire BB pellets from its abdomen. In contrast, a calmer soldier can help keep the situation under control.

In one battleground scenario, soldiers must face SimStriker in a village where locals need food, electricity and medical supplies, The Telegraph reports.

The robot will react differently depending on whether the soldier decides to help the locals.

Army trainers can also manually alter the robot’s mood from a control room if they want to make the scenario more challenging for the soldier.

It is an unprecedented technological breakthrough for the Army, which can now train its soldiers against a ‘thinking’ enemy instead of the static wooden targets they are used to.

Keep reading

School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety

Imagine your search terms, keystrokes, private chats and photographs being monitored every time they are sent. Millions of students across the country don’t have to imagine this deep surveillance of their most private communications: it’s a reality that comes with their school districts’ decision to install AI-powered monitoring software such as Gaggle and GoGuardian on students’ school-issued machines and accounts. As we demonstrated with our own Red Flag Machine, however, this software flags and blocks websites for spurious reasons and often disproportionately targets disadvantaged, minority and LGBTQ youth.

The companies making the software claim it’s all done for the sake of student safety: preventing self-harm, suicide, violence, and drug and alcohol abuse. That is a noble goal, given that suicide is the second leading cause of death among American youth aged 10 to 14, but no comprehensive or independent studies have shown that the use of this software increases student safety. Quite the contrary: a recent comprehensive RAND research study shows that such AI monitoring software may cause more harm than good.

That study also found that how to respond to alerts is left to the discretion of the school districts themselves. Lacking the resources to deal with mental health, schools often refer these alerts to law enforcement officers who are untrained and ill-equipped to handle youth mental health crises. When police respond to youth who are having such episodes, the encounters can end in disaster. So why are schools still using the software when a congressional investigation found a need for “federal action to protect students’ civil rights, safety, and privacy”? Why are they trading their students’ privacy for a dubious-at-best marketing claim of safety?

Keep reading

California’s New AI Law Proposals Could Impact Memes

California’s state legislature has passed several bills related to “AI,” including a ban on deepfakes “around elections.”

The lawmakers squeezed these bills in during the last week of the current sessions of the state Senate and Assembly, and it is now up to Governor Gavin Newsom (who has called for such laws) to sign or veto them by the end of this month.

One of the likely future laws is the Defending Democracy from Deepfake Deception Act of 2024, which aims to regulate how sites, apps, and social media (defined for the purposes of the legislation as large online platforms) should deal with content that the bill considers “materially deceptive related to elections in California.”

Namely, the bill wants such content blocked during “specified” periods: 120 days before and 60 days after an election. Campaigns will also have to disclose if their ads contain AI-altered content.

Now comes the hard part: what qualifies for blocking as deceptive, in order to “defend democracy from deepfakes”? The “definition” is so broad it can be interpreted all the way to banning memes.

For example, who’s to say whether satirical content that shows a candidate “saying something (they) did not do or say” is “reasonably likely” to harm the reputation or prospects of that candidate? And who’s to judge what “reasonably likely” means? Yet the bill uses these terms, and there’s more.

Also outlawed would be content showing an election official “doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.”

If the bill gets signed into law on September 30, given that timeframe it would comprehensively cover not only the current campaign but also the period after it.

Keep reading