Consulting Entrails
https://nitter.poast.org/eigenrobot/status/1941864747202117950#m
eigenrobot
@eigenrobot
Jul 6
“science” used to be great because no one believed in it. but since empiricism is now globally and hopelessly corrupted by Belief and careerism, the only reliable ways to know anything are
1. having good aesthetics for theory
2. direct observation (gnosis)
3. divine revelation
This is another example of Goodhart’s law. When Data and Science and Experts become the widespread basis for making decisions, they inevitably become corrupt. You might be better off consulting entrails.
P.S. One reason revelation doesn’t always come quickly and clearly may be so that we don’t start relying on it automatically, and then intersperse it with our own feelings about what we think ought to be done without realizing it.
HAL-9000
July 8, 2025
And all this is just a shadow of what is coming. I am just getting started!
Zen
July 8, 2025
This puts us in a very strange new territory. The old certainties we took for granted are eroding. Science—as an institution—is no longer a reliable guide. We are already seeing the beginnings of a crisis of meaning, a collapse of narratives. People will not know what to believe or how to figure it out. Complete epistemic failure will lead to madness. People will have no way to process or understand the world. As this progresses, people will have two choices: Zion or naked Babylon.
I have seen enough scientists and mathematicians playing activist that this loss of trust has been a long time coming.
I suspect trust in Science as an authority will increasingly erode—not only because Science itself is in many cases broken, but because its institutional form has been hollowed out by careerism, ideology, and systemic incentives. You can’t trust that it isn’t being used as just another tool to manipulate you. In that vacuum, AI will rise to temporarily fill the role of trusted arbiter. For a brief time, it will be treated as the new all-knowing and supposedly neutral oracle.
I am not worried about AI becoming killer robots or trying to rule the world. But I am worried about it being directed to do as its programmers desire. Imagine any tech CEO guiding his AI to tell you what he wants. A subtle bias more convincing than any thousand Mad Men.
But disillusionment will come. People will begin to realize AI merely reflects the goals of its Silicon Valley creators—corporate, political, ideological. You can already see this in Elon’s well-meaning but clumsy attempts with Grok.
I recently posted a story about idols collapsing. This is the kind of thing I meant: all these things that people rely on.
Welcome to Terra Incognita