AI: Over-Promise + Under-Perform = Disillusionment and Blowback

The most self-defeating way to launch a new product is to over-promise its wonderfulness while it woefully under-performs those hype-heightened expectations. Which brings us to AI, which is following this script so perfectly that it’s like it was, well, programmed to do so.

You see why this is self-defeating: Over-Promise + Under-Perform = Disillusionment, and disillusionment generates blowback, a disgusted rejection of the product, of the overblown hype, and of those who pumped that hype 24/7 for their own benefit.

“We’re so close to AGI (artificial general intelligence) we can smell it.” Uh, yeah, sure, right. Meanwhile, back in Reality(tm), woeful under-performance to the point of either malice or stupidity (or maybe both) is the order of the day.

1. ‘Catastrophic’: AI Agent Goes Rogue, Wipes Out Company’s Entire Database.
“Replit’s AI agent even issued an apology, explaining to Lemkin: ‘This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze that was specifically designed to prevent [exactly this kind] of damage.’”

2. ‘Serious mistake’: B.C. Supreme Court criticizes lawyer who cited fake cases generated by ChatGPT.
“The central issue arose from the father’s counsel, Chong Ke, using AI-generated non-existent case citations in her legal filings. Ke admitted to the mistake, highlighting her reliance on ChatGPT and her subsequent failure to verify the authenticity of the generated cases, which she described as a ‘serious mistake.’

“Ke faced consequences for her actions under the Supreme Court Family Rules, which allows for personal liability for costs due to conduct causing unnecessary legal expenses. The court ordered Ke to personally bear the costs incurred due to her conduct, marking a clear warning against the careless use of AI tools in legal matters.”

3. An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges.
“Garcia’s attorneys allege the company engineered a highly addictive and dangerous product targeted specifically to kids, ‘actively exploiting and abusing those children as a matter of product design,’ and pulling Sewell into an emotionally and sexually abusive relationship that led to his suicide.”

Author: HP McLovincraft

Seeker of rabbit holes. Pessimist. Libertine. Contrarian. Your huckleberry. Possibly true tales of sanity-blasting horror also known as abject reality. Prepare yourself. Veteran of a thousand psychic wars. I have seen the fnords. Deplatformed on Tumblr and Twitter.
