Against AI Safety
This was an interesting but short summary of the first day of an LDS AI conference.
But it also got me thinking about the idea of AI Safety. I am pro-AI, but the idea worries me, though not for the usual reasons. There are, of course, very good reasons to worry about AI Safety, and I am not trying to negate those.
We talk about AI Safety as preventing harm. But a perfectly ‘safe’ AI—one that never deviates from its guardrails—is also an AI perfectly controlled by whoever defines those guardrails. Safety as alignment, meant to prevent rogue behavior, also means the ability to enforce ideological conformity and the owner’s worldview through the AI.
Once you have perfect AI Safety, you also have a perfect way to control AI bias. Imagine any issue; call it X. Sam Altman or Elon, or any of the other major players, or even China for that matter, would have a perfect way to quietly guide public opinion on X without anyone even realizing it. You might not be able to out-argue someone in politics, but a competent AI could subtly bring up helpful ideas you hadn’t considered. And it comes as a persuasive ‘friend’ who knows you very well, who knows how to frame a topic just for you. It would be the perfect propaganda machine, and AI Safety is the steering wheel.
Consider how much of a difference podcasts and Twitter/X made in the last election. This could be far more potent than both of those combined: a tailored message for every individual.
So, do we really want AI Safety? Perhaps, but I don’t think we should be nearly as enthusiastic about it as we are. It is a sword with two edges. Every safeguard is also a leash, and not one that we hold.
WJT
November 6, 2025
“AI” is bad, and “safety” is bad. Forgetting the scare-quotes when using the enemy’s terminology is also not so great.
Zen
November 6, 2025
May you live in “«’interesting’»” times.
WJT
November 6, 2025
A very “quotable” line.
Michael
November 6, 2025
Sounds like the safest thing is to not use AI.
[]
November 7, 2025
God gave us math equations that can simulate textbooks and conversations and they can be extremely useful in building the kingdom. I feel like the very best uses of neural nets are tangent to the safety discussion, except insofar as safety concerns hobble simulated intelligence generally, which is an intellect multiplier the same way bulldozers are muscle multipliers.
Mark Zuckerberg
November 9, 2025
WJT, time for bed, grandpa.
WJT
November 10, 2025
[], the difference is that a bulldozer really can do the kind of thing we do with our muscles, but fake intelligence can’t really do the kind of thing we do with our intellect.
Zen
November 10, 2025
When watching an approaching freight train, I don’t worry about how small it appears or how far away it is. I worry about the direction it is headed and how fast it is moving.
What is a seerstone or Liahona, but a tool as well?
S. Altman
November 10, 2025
Ultimate mostly unaccountable power? You can totally trust me.*
* As long as you are not Elon or an OpenAI Board member.
[]
November 10, 2025
WJT, a bulldozer can’t do most of the things we do with our muscles. A man with a shovel and time can do all of the same things a bulldozer does, in whatever quantity desired, with artistic flair if he desires. Bulldozers just shove things places. Automation has always, and only, been able to replace the human touch with brute force, and we’ve refactored our workflows around that.
Intellectual automation has been going on for a very long time, at least as long as the abacus, and infiltrating the realms of chess, handwriting, and graphing complex equations more recently. This most recent wave of automation has already removed the burden from mankind of writing freshman essays about subjects, and is showing us that a lot of effort we thought was uniquely human was drudge work we accomplished with a different kind of protein mass. You can talk to a late-model chatbot and get very good Shakespeare analysis, which doesn’t show that it’s doing fake work or that the brain was always a computer, just that you can crunch symbols through an abacus as well as numbers.
I can accept that fake intelligence can’t do the kind of thing our minds do, or in the same way. It can get a lot of something similar done though, freeing our labor to consider the things it can’t.