Against AI Safety
This was an interesting, if short, summary of the first day of an LDS AI conference.
But it also got me thinking about the idea of AI Safety. I am pro-AI, but the concept worries me, though not for the usual reasons. There are, of course, very good reasons to care about AI Safety, and I am not trying to dismiss them.
We talk about AI Safety as preventing harm. But a perfectly ‘safe’ AI, one that never deviates from its guardrails, is also an AI perfectly controlled by whoever defines those guardrails. Safety as alignment to prevent rogue behavior also means the ability to enforce ideological conformity: the owner’s worldview, built into the AI.
Once you have perfect AI Safety, you also have a perfect way to control AI bias. Imagine any issue; call it X. Sam Altman or Elon, or any of the other major players, or even China for that matter, would have a perfect way to quietly guide public opinion without anyone realizing it. You might not be able to out-argue someone in politics, but a competent AI could subtly bring up helpful ideas you hadn’t considered. And it comes as a persuasive ‘friend’ who knows you very well and knows how to frame a topic just for you. It would be the perfect propaganda machine, and AI Safety is the steering wheel.
Consider how much of a difference podcasts and Twitter/X made in the last election. This could be far more potent than both of those combined: a message tailored to every single person.
So, do we really want AI Safety? Perhaps, but I don’t think we should be nearly as excited about it as we might be. It is a double-edged sword. Every safeguard is also a leash, and not one that we hold.