Can AI be Aligned with Human Values?

The “alignment” problem is much discussed in Silicon Valley. Computer engineers worry that, when AI becomes conscious and is put in control of all logistics infrastructure and governance, it might not always share or understand our values; that is, it might not be aligned with us. And it might start to control things in ways that give itself more power and reduce our numbers.

(Just like our oligarchs are doing to us now.)

No one in the Silicon Valley cult who is discussing this situation ever stops to ask: what are our human values? They must think the answer to that part of the problem is self-evident. The Tech Oligarchs have been censoring online behavior they don’t like and promoting online behavior they do like ever since social media rolled out. Human Values = Community Standards. (Don’t ask for the specifics.)

Having already figured out how to distinguish and codify good and evil online, computer engineers are now busy working on how to make sure the AI models they are creating do not depart from their instructions.

Unluckily for them, Generative AI is a bit wonky. It is, at bottom, a probabilistic text generator: it outputs text that is statistically correlated with its training data and the prompt, one likely next word at a time. Sometimes it outputs text that surprises the engineers.
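To see why the output can surprise its own makers, here is a minimal toy sketch of the probabilistic idea, a tiny bigram model that picks each next word by sampling rather than by deterministic lookup. (This is an illustration of the general principle, not how any production model is actually built; the corpus and function names are invented for the example.)

```python
import random

# Toy "training corpus" (invented for illustration).
corpus = ("the model outputs text the model predicts text "
          "the engineers read text").split()

# Record which words follow which: a crude statistical model of the corpus.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, length, seed=None):
    """Generate text by repeatedly SAMPLING a likely next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:
            break
        word = rng.choice(choices)  # probabilistic, not a fixed answer
        out.append(word)
    return " ".join(out)

print(generate("the", 5, seed=1))
```

Run it with different seeds and the same prompt yields different continuations, which is the small-scale version of the engineers’ problem: the system is built to be fluent, not to be predictable.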

What the engineers think about this will surprise you.


Author: HP McLovincraft

Seeker of rabbit holes. Pessimist. Libertine. Contrarian. Your huckleberry. Possibly true tales of sanity-blasting horror also known as abject reality. Prepare yourself. Veteran of a thousand psychic wars. I have seen the fnords. Deplatformed on Tumblr and Twitter.
