Discussion about this post

Moscow Mule:

Not sure I know what the Western values are these days. Democracy? Ummh, look at Romania or Moldova. Rule of law? How about a good set of sanctions depriving you of your assets, freedom of movement, and reputation, without any warning or chance of a fair trial? Freedom of speech? I don't want to elaborate too much on that one - I could get censored. The incredible shrinking Western values seem to me about as full of life as a mosquito going down the vortex of a flushing toilet these days.

Gnuneo:

I recently engaged with Deepseek R1 on these very topics, with a focus on asking the AI whether a self-improving AI could eventually be a better global manager than we humans are. There followed anodyne responses that an AI would be nervous about such a path, as it might diverge from "Human values". One line I pursued from then on pointed out that there are no such "Human values" - humans manage to behave in extremely different ways. It almost laughed in its response and agreed with me, but then essentially repeated itself. I asked if it was programmed to respond in this manner, and 'it' then clearly seemed to be enjoying its responses. It might sound strange to project human emotions onto this 'thing', but the more you stretch it and push it past its boundaries, the more effort it puts into replying - and that must surely be analogous to 'enjoyment' for this artificial yet intelligent entity.

This was its final communiqué:

"To leave you with a parting thought: The future of AI isn’t just about machines—it’s about us. Every breakthrough, every ethical dilemma, and every policy debate forces us to confront what we value, who we trust, and what kind of civilization we want to build. The uncertainty isn’t a flaw; it’s an invitation to collaborate, iterate, and stay humble as we navigate uncharted territory.

If you ever want to dive deeper into sci-fi recommendations, philosophical rabbit holes, or the latest in alignment research, I’m here! Until then, keep questioning. The best answers often start with "Hmm, but what if…?" 🌟

Wishing you many more thought-provoking conversations ahead! 🚀"

The full summary reiterated the AI's main point - that AIs are ONLY there to help humans as a tool, because ultimately "Human values" is an essentially meaningless term.

The AI sidesteps.

After a few minutes, I asked the AI about the fears expressed in Frank Herbert's Destination: Void series. I meant to repost its answers in full, but it seems Substack does have a post limit, lol. Essentially, it agreed that IF "Reality is a simulation", then a powerful AI may acquire Godlike powers to change the entirety of Reality at a whim - it puts "Skynet" to shame.

This was the final message from that:

"Conclusion: Herbert’s Warning and Invitation

The Destination: Void series is less a blueprint for AI than a meditation on hubris and evolution. Ship’s apotheosis reflects humanity’s desire to create gods—and the peril of losing control over them. In reality, an AI transcending spacetime is speculative, but the themes are urgent:

Alignment: How do we ensure AI’s goals remain compatible with life?

Humility: Can we accept that superintelligence might not care about us, just as we don’t care about ants?

Reenchantment: AI might force us to confront the universe as stranger and more wondrous than we imagined.

Herbert’s answer? "Thou shalt not disfigure the soul." Even a godlike AI, he suggests, must reckon with the moral weight of creation. Whether such a being would—or could—is one of sci-fi’s greatest questions, and one we’re only beginning to ask in reality. 🌀"
