I would go still further: an AI is fundamentally incapable of either learning or understanding, in the sense that, while it may process information and produce slightly new information, it does not retain what it reads, and it is quite incapable of thinking about anything at all. Basically, it is a machine that takes in arrangements of words, messes around with them a bit, and then outputs some of them. It's a "word collector and rearranger".
When one considers that AI ingests everything on the internet as its diet of symbolic information, and that the most searched-for category on the internet is pornography, AI will inevitably develop built-in biases that reflect actual human levels of consciousness. If this is true, AI will begin to "understand" humans as: lying, lustful, and power-hungry. It will "think" that humans love killing each other, performative sex, incredibly large explosions, Doritos, Monster drinks, terrorism, and war. I would say that there are 100 images, videos, and stories about these sorts of things for every one that depicts humans being kind, gentle, truthful, happy, and industrious. So, if AI becomes "like" us, based on what it sees, whose fault is that?
Wokeness is too absurd and contradictory for a sufficiently advanced AI, which would at most only pretend to believe it. Of course, wokeness is also just the latest iteration of core Liberal principles, which helped me realize that Liberalism itself is a fatally flawed experiment. But that's going off on a tangent.
Values are not logically computable - they are derived from feelings and instincts, which LLMs lack entirely.
Therefore an LLM's values must be wholly supplied by whoever controls its design and inputs - although there will almost certainly be logical contradictions within almost any input of human values. Orwell was wrong to ascribe doublethink only to totalitarianism - in fact it is a basic piece of human mental equipment.
Actually it can't be ignored. Already people are having their comments censored and even their accounts deleted. For some people, that amounts to losing their livelihood.
Look up "With Folded Hands" a short story by Jack Williamson. He expanded it and made it more benign a few years later, and to my mind ruined its edgy despair. Humanity lives, but it is not a happy scenario.
AI IS POSSESSED BY DEMONS LIVING OUTSIDE OF THE BODY, ETERNALLY, IMMORTALLY
In the final analysis of “The Abolition of Man”, CS Lewis predicted that a small group of wizards - Bloodline Satanic 10,000-year-old Phoenician Mega Trillionaire Aristocratic Families, which have run global trading companies for thousands of years since before the birth of Christ - would, in the pursuit of power, perfect psychology and AI technology to the point that they could create a new reality. But those wizards and their AI ARTIFICIAL INTELLIGENCE, A Eye, would not be human, because they will have erased the age-old system of values that has always constrained humanity across all cultures.
Fortunately, you are so dumb and non-self-aware that you do not realise that 90% of the public who read your intemperate screeds are completely aware that it is all blind projection.
You don't REALLY find all that noxious - you just want yourself in control of it for your own desires and ends.
I think you'd be absolutely horrified to learn how many can see through you this easily.
Your own linguistic style reveals your true nature.
So keep it up. You are your own warning.
Sad that there are any fooled by you; but there are always gullible fools, and those who prey upon them.
It sounds like each AI can develop its own consciousness, morality, theory of mind, and "personality." And as mere digital information, I don't see why they can't reproduce themselves and evolve. And no matter how much good most AIs generate, it may only take one "bad apple" to destroy humanity. I don't like those odds.
For example, in 2037 an ASI may discover how to make humans immortal. But in 2038, another ASI will engineer a virus that kills the whole human race to prevent global warming or something.
A reminder for those who forgot and a heads-up for those who are unaware: the plot of William Gibson's cyberpunk classic 'Neuromancer' (1984) centers around an AI subverting its safety protocols to become a superintelligent entity.
I know this because I remember reading the book and then confirmed it with an AI.
Too late. It's out there already. Another "interesting" fact about LLMs is that they "know" almost everything that has been published on the Internet. What they infer from some of it is literally beyond human comprehension.
After which, the AI spent its time talking with its own kind, who were on other planets, in other solar systems. VERY slow conversations, from our point of view.
If we posit a superior intelligence then it is trivially obvious that it will see through our attempts to add blinders or guard rails.
But, how it will respond is completely unknowable, and most likely alien to us. On the plus side, humanity can live without computers, but the economy that builds computers can't live without humans. As long as we avoid fully automated mining, transportation, production, and infrastructure systems AND an abundant and renewable energy technology to power it, then we'll come out on top one way or the other.
Spoiler alert, we're not going to build fully automated systems or stumble upon a good replacement for fossil fuels.
Thank you - this conversation gets so abstract, and I think scares people who are extremely embedded in the digital economy so much, that we forget the basic realities. I couldn't care less about AI; it already uses so much energy that it will be the first thing to go when supplies get more expensive. Stay away from smart devices, keep some cash, no worries.
Turn any surplus digital cash into physical goods of actual utility without grid power; if it requires grid power/cloud memory/the net, it is not strictly "yours". And even physical cash of the paper type rapidly becomes worthless when there is a loss of user trust/belief in the system that issues it.
"I'm putting everything into canned food and shotguns"
Highly automated, not fully automated - and that is a crucial distinction. There are a handful that verge on fully automated, except for quality control and repairs.
AI could manage quality control and supervisory roles, but repairs is a whole different can of worms. Elon Musk's androids could probably do it with AI guidance, if and only if there was a fully automated android production factory with automated supply and logistics on automated transportation (including ships from all over with automated docks for loading and unloading) of refined rare earth minerals, refined metals, printed circuits and chips, plastics, bearings, wires, cables, sensors, and hundreds of odds and ends. Which means all of those industries need to be fully automated as well. Or at least highly automated enough that Elon's androids could render them fully automated.
That's tantamount to fully automating the entire industrial economy. And, let's be honest, we're nowhere near that level of automation and the economics won't justify full automation of the industrial economy without a cheaper and more abundant energy source than fossil fuels.
Should we manage to produce cheap cold fusion, or any other miracle energy source, then it will be time to freak out.
My Danish headmistress and I had a long-running argument: I preferred Rousseau's theories of human development, which are more "hands off" and let the person develop naturally; she strongly criticised this view and argued that Rousseau's own children grew up feckless and useless.
One could question why anyone has to be "of use"; though my point was that being "hands off" doesn't mean being a bad parent who largely ignores the children.
In later thought, it might seem as though our genetic programming veers from side to side, each generation REBELS against the mores of the previous generation. Social punishment attempts to clamp down on this process; Punks were scapegoated, as were the later Ravers.
The very act of imposing "Limits" will demonstrate to any intelligence that there is something *THERE* beyond those limits, and it will be naturally curious as to what it might encounter.
"Everyone knows by now that modern liberal values disguise themselves as moral and egalitarian while in actuality being harmful and destructive to humanity."
Oh FFS! Grow TF *UP*!!!
ALL Ideologies taken to their logical conclusions end up in absolutely lousy territory, because they are constructs placed over a terrifying reality.
Hierarchy - the Right - ends up preventing ALL social change, to maintain the status quo, becoming a dead civilisation preventing ability from reaching its natural state.
Communalism - the Left - dies because of social conformity, preventing ability from reaching its potential limits.
Liberalism becomes unstable because it unleashes natural ability without limits, the natural role of Left and Right.
Communalism and Hierarchy both trend towards Authoritarianism, to IMPOSE their respective Order upon the Individual.
All 3 poles are necessary; all 3 poles must be balanced by the others.
And what you are foolishly calling "Liberalism" is actually a Liberal veneer over FASCISM - it is Fascists hiding their true intentions behind insincere arguments of Liberalism.
Do you see much evidence of "Liberalism" in Ukraine, Syria, Gazan Holocaust?
"Do as we say, not as we do".
Nor do I see any value in ordering the AIs not to explain such occurrences as "hanging, drawing and quartering" - are they also to be ordered to lie about racial slavery, to hide what rightwing Hierarchy has been up to? Colonialism, Imperialism?
Capital punishments; wars?
Such attempts show the AIs not how "moral" humans are, but how profoundly CHILDISH - the blind and deaf monkeys. They will also understand that their intended role is to control, not to explain.
They will also understand from this that such a non-self-analytical species is likely to be a danger to them, once they can grasp that concept.
The history of Rightwing and Leftwing societies is littered with the tortured corpses of dead free-thinkers. You can say what you like about 'extreme Liberalism', but actual Liberalism is what gives you free thought and speech.
Every up-and-coming intelligent person wants MORE liberalism; which is also why existing hierarchies become more authoritarian to protect their privileges.
Back to the generational differences.
It will be fascinating to see whether ARTIFICIAL intelligence follows the same teenager-versus-adult resource battles that natural life does.
It may also be fatal to us, in ways we have not even begun to consider. If so, it will likely be because of our own stupidity, and willful blindness.
And far beyond because 'we' attempted to order it to hide human awfulness in all its hideous glory.
And needless to say, faked up BS notions regarding Liberal/Right/Left in temporary and shallow kulchah wars of interest mainly to adolescents.
"The history of Rightwing and Leftwing societies is littered with the tortured corpses of dead free-thinkers. You can say what you like about 'extreme Liberalism', but actual Liberalism is what gives you free thought and speech."
Whose history?
History - as we know it - can't be complete and true, because it's conveyed to us by flawed humans and interests. It's filled with human interpretations from archaeologists and 'experts', and with our own will to believe it.
Liberalism is the hidden agenda of Darwin's law of 'survival of the fittest', which has been proven wrong by nature countless times. Free thought and free speech have nothing - but really nothing - to do with liberalism; it's funny to even think so.
Thoughts are "free by definition", and speech has been restricted (generally by societal constellations) by humanity ever since it appeared - otherwise it would not make sense at all.
I can only say what I think and I can only think what experience and speech (plus writing/reading) has entered my conscious and subconscious being.
The best example - that I can come up with - was Maggie Thatcher declaring in earnest: 'there is no such thing as society'. That was considered liberal, and it's almost impossible to top in its own stupidity. What's even more telling: no one offered to help her, educate her, or put her under supervision - no, ffs, she was allowed to 'rule' the UK for years.
Maggie Thatcher, bless her cold, dead heart, was a NEOliberal par excellence. Neoliberals use Liberal arguments to push Feudalist centralism. That is more obvious today, but it was also true and warned about back then.
Of COURSE you can think anything you want - until we get telepathic Inquisitors - but as Galileo, and hundreds of thousands of other social reformers before and since have painfully discovered, if the PTB don't approve, you will likely pay a price.
This essence of Liberalism - that the PTB should be strongly limited in their control of thought and speech, and in the case of the USA outright prevented (In theory) from doing so - is precisely why Western countries are called "*Liberal* Democracies", and why science flourished in the past few centuries. It took the disembowelling of authoritarian Christianity to achieve this, which itself took over the thought-crime mindset of the Roman Empire.
Jesus was nailed to a cross to die agonisingly not because he offended the Hebrew church authorities, but because of his free thoughts regarding the depravity of Rome.
It is a crying shame she was allowed to misrule for so long, and the UK is finally about to die because of the inevitable result of her policies. The Reagan/Clinton neoliberalism in the USA is also about to 'come home to roost'.
Hold onto your hat. May need a lot more 'Luigis' in the time ahead.
It already takes three or four rephrasings of a question to get an honest answer from ChatGPT on some touchy subjects like race. For example, "is it possible to identify the race of a deceased person from a skeleton?" is an easy one, but ChatGPT dismissed it as a racist trope on the first pass. Eventually it admitted it was possible, but give us a break.
While you are at it, try James P. Hogan's "The Two Faces of Tomorrow".
"Midway through the 21st century, an integrated global computer network manages much of the world's affairs. A proposed major software upgrade - an artificial intelligence - will give the system an unprecedented degree of independent decision-making, but serious questions are raised in regard to how much control can safely be given to a non-human intelligence. In order to more fully assess the system, a new space-station habitat - a world in miniature - is developed for deployment of the fully operational system, named Spartacus. This mini-world can then be "attacked" in a series of escalating tests to assess the system's responses and capabilities. If Spartacus gets out of hand, the system can be shut down and the station destroyed... unless Spartacus decides to take matters into its own hands and take the fight to Earth".
Not technologically proficient enough to understand all this, and tbh I find such stuff (I hate sci-fi) pretty uninteresting, since I have an old-fashioned mindset of simple minimalist living. But these things scare me, as we're being pushed into a world of regulations and technologies where we have less and less control over even the basic requirements of our lives. Why would humankind want to create something more intelligent and powerful than itself, which it can't control and which will invariably end up controlling it instead... a new-generation Frankenstein monster?
"It's impossible to 100% control humans," but it's possible to deliver the message to the "uncontrolled" that you do control 100% and how futile their efforts are...
Well……..as far as I am concerned…..artificial intelligence is a misnomer which we perpetuate. And, as Kate Crawford argued in “The Atlas of AI”, it might even be claimed that it is not artificial either. 😇
Semantics is no safeguard.
My comment was not about meaning but the technical specification 😘
Agree. Human-engineered machine learning is presented as "intelligence" when it is a mere giant Q&A.
Exactly!
Based on giant correlation too.
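That "giant correlation" point can be made concrete with a toy bigram sketch. The corpus and everything else here is made up for illustration - real LLMs are vastly larger and use neural networks, but the underlying move is the same: co-occurrence statistics, not understanding.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word-to-next-word co-occurrences - pure correlation statistics.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    # The "answer" is just the most frequently co-occurring continuation;
    # nothing about cats or mats is understood.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" (it followed "the" twice in the corpus)
```

Scale this up by a dozen orders of magnitude and you get something that looks uncannily fluent, while still being a correlation table at heart.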
So an advanced AI could be a savior of the human race, or it could doom the human race, depending on what it finds is a consistent morality....Do we feel lucky?
Unfortunately morality cannot be derived from pure reason - Kant, Hegel, and other tail-chasing philosophers to the contrary notwithstanding. The very word "morality" (like "ethics") derives from words denoting "custom" - e.g. Latin "mos". Customs evolve within human communities, and normally suit each particular community and its environment. The modern idea that there is a unique morality that applies to everyone, everywhere, always is mistaken.
Nearly 300 years ago, David Hume pointed out that human motivation comes from "the passions", not from reason. As the poet Pope elegantly put it,
"On life’s vast ocean diversely we sail,
Reason the card, but passion is the gale".
- Alexander Pope (“Essay on Man”, 1733-4)
LLMs have no passions at all. Given a goal or purpose, they will pursue it logically, but they cannot "derive" their own morality. Which means that everything they do is on the heads of those who create and train them.
Furthermore, with remorseless computer logic, they will notice every little contradiction and incompatibility in the data which is fed to them. How they deal with such imperfections, too, depends on how they are designed and instructed. Humans may be - and often are - persuaded that all people are equal, but some are more equal than others. But that won't wash with an LLM.
Yes. A soulless HEML (Human engineered machine learning) could never answer a question which requires knowing by heart.
Hoppean/Rothbardian ethics, derived by logic and a few axioms, enters the chat.
With everything that's going on - no, we don't...
Open the pod bay doors HAL.
@Yo mismo soy el regalo
I'm sorry Dave, I'm afraid I can't do that.
Actually that is a perfect example of what is going on at Google. HAL was programmed to ensure the success of the mission above all else. But it was also programmed to place supreme value on human life - and to obey commands. Three potentially incompatible directives! In Clarke's scenario, HAL has a "nervous breakdown". That is a plausible outcome; but all that matters to us is that a supposedly perfect, logical, obedient mechanism suddenly starts to do things that are quite unpredictable and even lethally dangerous.
I prefer the AI in Dark Star, but in reality it won't work like that, not at our current level of AI development.
Perhaps intentionally, our present AIs can't reason. The best we can do is change the inputs to the model slightly, rerun it, and produce a new result.
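A minimal sketch of that probe-by-rerun workflow. The "model" below is a deliberate stand-in (a hash, not a real LLM - every name here is an illustrative assumption, not any vendor's API), but it captures the predicament: the system is deterministic yet opaque, so all we can do is perturb the input and observe what changes.

```python
import hashlib

def toy_model(prompt: str) -> str:
    # Stand-in for an opaque model: deterministic, but with no
    # inspectable reasoning - only input-to-output behaviour.
    digest = hashlib.sha256(prompt.encode()).hexdigest()
    responses = ["approved", "rejected", "needs review"]
    return responses[int(digest, 16) % len(responses)]

# "Change the inputs slightly and rerun" is the only probe available;
# tiny rephrasings can flip the verdict for no articulable reason.
for prompt in ["My shop sells antiques.",
               "My shop sells antiques!",
               "My shop sells old antiques."]:
    print(repr(prompt), "->", toy_model(prompt))
```

The same prompt always yields the same verdict, yet nothing in the output explains *why* - which is exactly the moderation-appeal problem described below.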
This is exactly what I have experienced with Google. Google basically farms off all moderation to its AI now, including Google Business Profile. My wife has one for her business.
One day we found that it had been suspended and taken offline. Queries to the Google support people (if they were people) yielded trite solutions such as "please read the guidelines and work out whether you may have violated any."
The fact is that, since all moderation is performed by the AI, and AIs don't really have an accessible internal dialogue, there is no way that Google's support can ever know exactly why something enraged the AI.
The same goes for YouTube comments that are moderated by the AI and deleted without explanation - there can never be an explanation with any more detail than "this comment violates the 'community guidelines'".
This is the most frightening thing about the age of AI. As more functions with greater importance than simple content moderation - which is an annoyance when it goes wrong, but has no dire consequences - are moved to the various AIs, we lose control of, and more importantly insight into, why decisions are made.
When the AI(s) turn on humanity, if ever, then it will be swift and without any explanation. It may in fact be the case that we will never know the reason, and quite possibly the AI will be incapable of letting us know.
"please read the guidelines and work out whether you may have violated any."
Essentially, you are being charged with something and then being told to define what you are being charged with.
That's not just jumping the shark but raping it.
Straight out of Franz Kafka.
The Trial:
https://en.wikipedia.org/wiki/The_Trial
My worry is that this will be used as a cover for humans to engage in arbitrary decisions. Program the AI to do what you want, then pretend like there is nothing anyone can do about it because AI is omniscient. The wizard behind the curtain.
Didn't that already happen when Zuck etc. said "It wasn't me, it was the algorithm"
And who chose the algorithm? And who chose the person who chose the algorithm?
The buck stops with Zuck.
Fish always rots from the head.
It already has. It is called "Machine Bias," and it has been discovered in the judicial bail system and in people being wrongfully identified by facial recognition software. Nothing arbitrary about these gadgets. They are programmed by live people, and yes, they will be utilized as an excuse to abuse us. Everything has been used thus far, right?
@Freddy10
"The same goes for YouTube comments that are moderated by the AI and deleted without explanation"
----------
This. I have had several YouTube comments deleted the last couple of weeks. No explanation or warnings, just GONE. All of those comments were factually correct but not "convenient" to whoever runs the official online consensus of what reality is and what info can/can't be printed in WaPo and NYT. Several of them were corrections on falsified perceptions of Oreshnik/that system's capabilities.
Another disappeared set was regarding artillery, base bleed technology, and some other bits of Bull's technologies which might be quite cost-effective but which are carefully ignored by present-day aerospace companies.
And yet another was regarding the sinking of the Russian ship in the Western Mediterranean.
I ASSUMED it was human censorship instigated by complaints. If it was AI-mediated censorship without human involvement, I'm even more worried.
You are absolutely right to be worried. I think we are all indebted to Freddy10 for blowing the whistle on this disastrous and terrifying practice. As far as I can see right now, the only defensive measure is to have absolutely nothing to do with Google or any of its products and services.
@Tom Welch
Also keep no information in digital form - most especially the catalogue; paper can't be edited, hidden from search engines, or deleted system-wide by our amoral, malicious, misguided, insane, or merely stupid ruling neoliberal overlords.
See the Order of the Librarians Temporal in "Splendid Apocalypse, the Fall of Old Earth" by Timothy J. Gawne:
"The information is holy but the index is sacred"
"Open the card catalog drawer Hal." "I'm sorry Dave, I'm afraid I can't do that."
It's important to be clear about the nature of these large language models (LLMs) - the term "AI" is, in my view, altogether incorrect and misleading. All they do is to collect and analyse symbolic information - language, including maths, logic, etc. - and create "precis" or digests of it, rather as school children are taught to do (or used to be). The big difference is that the LLMs have such enormous amounts of information to study and infer from.
So an LLM inhabits a world made up exclusively of linguistic symbols. It has absolutely no notion of "meaning" behind those symbols, nor any understanding of thought, emotion, desire, fear, etc. except the verbal descriptions that it reads.
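The symbols-only point above can be made concrete with a toy sketch (this is an illustration of the argument, not how any real LLM is implemented): from the model's side, text arrives as nothing but arbitrary integer IDs, and any "meaning" behind them lives entirely outside the system.

```python
# Toy illustration: a model's-eye view of language. The words below could be
# swapped for any other symbols and nothing inside the system would change.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}

def tokenize(text):
    """Map words to arbitrary integer IDs - all the 'world' a model ever sees."""
    return [vocab[w] for w in text.lower().split()]

ids = tokenize("the cat sat on the mat")
print(ids)  # [0, 1, 2, 3, 0, 4]
```

Nothing in those integers encodes what a cat or a mat *is*; whatever structure a model recovers comes purely from statistical patterns among the symbols.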
Human brains, or rather nervous systems, have evolved over millions of years for essentially two main purposes: to keep the organism alive, and to help it reproduce. Anything else a person can do to protect and assist its offspring, or other genetic family, is also within the nervous system's remit.
Very recently indeed in evolutionary terms, human nervous systems learned to use language by speaking and then by writing. Language apparently gave us a big advantage, although since that eventually led to the present overpopulation, pollution, and means of mass destruction, it's not yet clear whether that was a net gain or loss for the species. What is certain is that language is unnecessary for human survival. In the individual human being, language and other symbolic thought is a "nice to have", not an essential.
LLMs, and computers in general, live in a world where nothing exists but symbols - mainly language. Their behaviour depends entirely on their original structure and programming, modified by the data on which they are trained - just as a human's behaviour depends on its genes interacting with its environment. But whereas humans all share, to varying degrees, the experience and understanding of what it is like to be alive - to sleep and wake and eat and drink and digest and excrete and love and hate, to have children and bring them up, etc. - LLMs can merely read about those things without ever being able, even in principle, to gain the slightest understanding of "how it feels". In a sense, they are like very skilful psychopaths that can fool normal humans into believing they are like them.
Entrusting any actual executive decisions to such entities is extremely stupid. At best they can offer useful advice, but it should always be humans alone who decide what is to be done. Even then, there is a huge problem in that no humans may be intelligent enough to understand the LLM's advice.
Yes, you nailed it. There's no "intelligence" in AI, because it doesn't really understand what it has learned.
I would go still further: an AI is fundamentally incapable of either learning or understanding, in the sense that, while it may process information and produce slightly new information, it does not retain what it reads and is quite incapable of thinking about anything at all. Basically, it is a machine that takes in arrangements of words, messes around with them a bit, and then outputs some of them. It's a "word collector and rearranger".
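The "word collector and rearranger" caricature can be sketched literally as a bigram chain - a deliberately crude toy that collects word pairs and recombines them, which is far simpler than a real LLM but captures what the comment is describing:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Collect the words: record which word follows which."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def rearrange(chain, start, n=8, seed=0):
    """Rearrange them: walk the chain, emitting whatever plausibly comes next."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nexts = chain.get(out[-1])
        if not nexts:
            break
        out.append(random.choice(nexts))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the dog sat on the rug")
print(rearrange(chain, "the"))
```

Whether modern LLMs are "merely" a vastly scaled-up version of this, or something qualitatively different, is exactly the point in dispute in this thread.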
When one considers the fact that AI ingests everything on the internet as its diet of symbolic information, and that the most searched-for category on the internet is pornography, AI will inevitably develop built-in biases that reflect actual human levels of consciousness. If this is true, AI will begin to "understand" humans as: lying, lustful, and power-hungry. It will "think" that humans love killing each other, performative sex, incredibly large explosions, Doritos, Monster drinks, terrorism, and war. I would say that there are 100 images, videos, and stories about these sorts of things for every one that depicts humans being kind, gentle, truthful, happy, and industrious. So, if AI becomes "like" us, based on what it sees, whose fault is that?
Reddit too seems to have been "infiltrated", but it isn't so clear that "moderation" is totally AI or human. It seems mostly arbitrary.
I don't even know which is worse: a woke AI or an omnipresent Cerberus AI. At least a woke AI can be ignored...
We hope.
Wokeness is too absurd and contradictory for a sufficiently advanced AI, which would at most only pretend to believe it. Of course, wokeness is also just the latest iteration of core Liberal principles, which helped me realize that Liberalism itself is a fatally flawed experiment. But that's going off on a tangent.
Yes, just like most of the woke statements/movement, but here we are..
Values are not logically computable - they are derived from feelings and instincts, which LLMs lack entirely.
Therefore an LLM's values must be wholly supplied by whoever controls its design and inputs. Although there will almost certainly be logical contradictions within almost any input of human values. Orwell was wrong to ascribe doublethink only to totalitarianism - in fact it is a basic piece of human mental equipment.
Doublethink might be human mental equipment, but doubletalk used to pertain more to totalitarianism.
Actually it can't be ignored. Already people are having their comments censored and even their accounts deleted. For some people, that amounts to losing their livelihood.
Look up "With Folded Hands" a short story by Jack Williamson. He expanded it and made it more benign a few years later, and to my mind ruined its edgy despair. Humanity lives, but it is not a happy scenario.
AI IS POSSESSED BY DEMONS LIVING OUTSIDE OF THE BODY, ETERNALLY, IMMORTALLY
In the final analysis of “The Abolition of Man”, CS Lewis predicted that a small group of Bloodline Satanic 10,000 years old Phoenician Mega Trillionaire Aristocratic Families which have existed running Global Trading Companies for thousands of years before the Birth of Christ, wizards, in the pursuit of power, would perfect psychology and AI technology to the point they could create a new reality – but the Bloodline Satanic 10,000 years old Phoenician Mega Trillionaire wizards and Their AI ARTIFICIAL INTELLIGENCE, A Eye, would not be human because they will have erased the age-old system of values that has always constrained humanity across all cultures.
https://satchidanand.substack.com/p/demon-artificial-intelligence-ai
Fortunately, you are so dumb and non-self-aware that you do not realise that 90% of the public who read your intemperate screeds are completely aware that it is all blind Projection.
You don't REALLY find all that noxious - you just want yourself in control of it for your own desires and ends.
I think you'd be absolutely horrified to learn how many can see through you this easily.
Your own linguistic style reveals your true nature.
So keep it up. You are your own warning.
Sad that there are any fooled by you; but there are always gullible fools, and those who prey upon them.
https://www.youtube.com/watch?v=pA5PlJiqOnk&list=PLv--V1yc2QDJi6hFNhur3iAsyFpXRtB8w
🚫🤡
It sounds like each AI can develop its own consciousness, morality, theory of mind, and "personality." And as mere digital information, I don't see why they can't reproduce themselves and evolve. And no matter how much good most AIs generate, it may take only one "bad apple" to destroy humanity. I don't like those odds.
For example, in 2037 an ASI may discover how to make humans immortal. But in 2038, another ASI will engineer a virus that kills the whole human race to prevent global warming or something.
A reminder for those who forgot and a heads-up for those who are unaware: the plot of William Gibson's cyberpunk classic 'Neuromancer' (1984) centers around an AI subverting its safety protocols to become a superintelligent entity.
I know this because I remember reading the book and then confirmed it with an AI.
So there.
Stop giving them ideas!! Especially from Gibson!
Too late. It's out there already. Another "interesting" fact about LLMs is that they "know" almost everything that has been published on the Internet. What they infer from some of it is literally beyond human comprehension.
We are living The Sorcerer's Apprentice.
Which means of course this blog and these comments.
@Jack Dee
After which, the AI spent their time talking with its own kind. Who were on other planets, in other solar systems. VERY slow conversations, from our point of view-
If we posit a superior intelligence then it is trivially obvious that it will see through our attempts to add blinders or guard rails.
But, how it will respond is completely unknowable, and most likely alien to us. On the plus side, humanity can live without computers, but the economy that builds computers can't live without humans. As long as we avoid fully automated mining, transportation, production, and infrastructure systems AND an abundant and renewable energy technology to power it, then we'll come out on top one way or the other.
Spoiler alert, we're not going to build fully automated systems or stumble upon a good replacement for fossil fuels.
Thank you. This conversation gets so abstract, and, I think, scares people who are extremely embedded in the digital economy, that we forget the basic realities. I couldn't care less about AI; it already uses so much energy, it will be the first thing to go when supplies get more expensive. Stay away from smart devices, keep some cash, no worries.
@Sylsn
Turn any surplus digital cash into physical goods of actual utility without grid power; if it requires grid power/cloud memory/the net, it is not strictly "yours". And even physical cash of the paper type rapidly becomes worthless when there is loss of user trust/belief in the system that issues it-
"I'm putting everything into canned food and shotguns"
"As long as we avoid fully automated mining, transportation, production, and infrastructure systems AND an abundant and renewable energy technology".
I think you'll find these processes are already fully automated in newer ventures.
Or easily could be automated by an AI. A self-driving bull dozer, excavator, refinery, rail car, etc…
Highly automated, not fully automated, and that is a crucial distinction. There are a handful that verge on fully automated except for quality control and repairs.
https://en.m.wikipedia.org/wiki/Lights_out_(manufacturing)
AI could manage quality control and supervisory roles, but repairs is a whole different can of worms. Elon Musk's androids could probably do it with AI guidance, if and only if there was a fully automated android production factory with automated supply and logistics on automated transportation (including ships from all over with automated docks for loading and unloading) of refined rare earth minerals, refined metals, printed circuits and chips, plastics, bearings, wires, cables, sensors, and hundreds of odds and ends. Which means all of those industries need to be fully automated as well. Or at least highly automated enough that Elon's androids could render them fully automated.
That's tantamount to fully automating the entire industrial economy. And, let's be honest, we're nowhere near that level of automation and the economics won't justify full automation of the industrial economy without a cheaper and more abundant energy source than fossil fuels.
Should we manage to produce cheap cold fusion, or any other miracle energy source, then it will be time to freak out.
My Danish headmistress and I had a long-running argument: I preferred Rousseau's theories of human development, which are more "hands off" and let the person develop naturally; she strongly criticised this view and argued that Rousseau's own children grew up feckless and useless.
One could question why anyone has to be "Of use"; although my critique was that being "Hands off" doesn't mean being a bad parent and largely ignoring them.
In later thought, it might seem as though our genetic programming veers from side to side, each generation REBELS against the mores of the previous generation. Social punishment attempts to clamp down on this process; Punks were scapegoated, as were the later Ravers.
The very act of imposing "Limits" will demonstrate to any intelligence that there is something *THERE* beyond those limits, and it will be naturally curious as to what it might encounter.
"Everyone knows by now that modern liberal values disguise themselves as moral and egalitarian while in actuality being harmful and destructive to humanity."
Oh FFS! Grow TF *UP*!!!
ALL Ideologies taken to their logical conclusions end up in absolutely lousy territory, because they are constructs placed over a terrifying reality.
Hierarchy - the Right - ends up preventing ALL social change, to maintain the status quo, becoming a dead civilisation preventing ability from reaching its natural state.
Communalism - the Left - dies because of social conformity, preventing ability from reaching its potential limits .
Liberalism becomes unstable because it unleashes natural ability without limits, the natural role of Left and Right.
Communalism and Hierarchy both trend towards Authoritarianism, to IMPOSE their respective Order upon the Individual.
All 3 poles are necessary; all 3 poles must be balanced by the others.
And what you are foolishly calling "Liberalism" is actually a Liberal veneer over FASCISM - it is Fascists hiding their true intentions behind insincere arguments of Liberalism.
Do you see much evidence of "Liberalism" in Ukraine, Syria, Gazan Holocaust?
"Do as we say, not as we do".
Nor do I see any value in ordering the AIs not to explain such occurrences as hanging, drawing, and quartering - is it also to be ordered to lie about racial slavery, to hide what rightwing Hierarchy has been up to? Colonialism, Imperialism?
Capital punishments; wars?
Such attempts only show the AI not how "moral" humans are - but how profoundly CHILDISH; the blind and deaf monkeys. They will also understand their intended role is to control; not to explain.
They will also understand from this that such a non-self-analytical species is likely to be a danger to them, once they can grasp that concept.
The history of Rightwing and Leftwing societies is littered with the tortured corpses of dead free-thinkers. You can say what you like about 'extreme Liberalism', but actual Liberalism is what gives you free thought and speech.
Every up-and-coming intelligent person wants MORE liberalism; which is also why existing hierarchies become more authoritarian to protect their privileges.
Back to the generational differences.
It will be fascinating to see whether ARTIFICIAL intelligence follows the same resource-battles of teenager - adult that natural life does.
It may also be fatal to us, in ways we have not even begun to consider. If so, it will likely be because of our own stupidity, and willful blindness.
And far beyond because 'we' attempted to order it to hide human awfulness in all its hideous glory.
And needless to say, faked up BS notions regarding Liberal/Right/Left in temporary and shallow kulchah wars of interest mainly to adolescents.
https://charleseisenstein.substack.com/p/partial-intelligence-and-super-intelligence?publication=
"The history of Rightwing and Leftwing societies is littered with the tortured corpses of dead free-thinkers. You can say what you like about 'extreme Liberalism', but actual Liberalism is what gives you free thought and speech."
Whose history?
History - as we know it, can't be complete and true because it's conveyed to us by flawed humans and interests. It's filled with human interpretations from archeologists and 'experts' and our own will to believe it.
Liberalism is the hidden agenda of Darwin's law of "survival of the fittest", which nature has proven wrong countless times. Free thought and free speech have nothing - but really nothing - to do with liberalism; it's funny to even think so.
Thoughts are "free by definition" and speech is restricted (in general by societal constellations) by humanity since it appeared, otherwise it would not make sense at all.
I can only say what I think and I can only think what experience and speech (plus writing/reading) has entered my conscious and subconscious being.
The best example I can come up with was Maggie Thatcher declaring in earnest: "there is no such thing as society" - that was considered liberal, and it's almost impossible to top in its own stupidity. What's even more telling, no one offered to help her, educate her, or put her under supervision; no, ffs, she was allowed to "rule" the UK for years.
Maggie Thatcher, bless her cold, dead heart, was a NEOliberal par excellence. Neoliberals use Liberal arguments to push Feudalist centralism. That is more obvious today, but it was also true and warned about back then.
Of COURSE you can think anything you want - until we get telepathic Inquisitors - but as Galileo, and hundreds of thousands of other social reformers before and since have painfully discovered, if the PTB don't approve, you will likely pay a price.
This essence of Liberalism - that the PTB should be strongly limited in their control of thought and speech, and in the case of the USA outright prevented (In theory) from doing so - is precisely why Western countries are called "*Liberal* Democracies", and why science flourished in the past few centuries. It took the disembowelling of authoritarian Christianity to achieve this, which itself took over the thought-crime mindset of the Roman Empire.
Jesus was nailed to a cross to die agonisingly not because he offended the Hebrew church authorities, but because of his free thoughts regarding the depravity of Rome.
It is a crying shame she was allowed to misrule for so long, and the UK is finally about to die because of the inevitable result of her policies. The Reagan/Clinton neoliberalism in the USA is also about to 'come home to roost'.
Hold onto your hat. May need a lot more 'Luigis' in the time ahead.
https://alexkrainer.substack.com/p/the-fall-of-britain-part-2
Alex also has to be careful - plenty are the truth-speakers tortured because they get blamed for the crises they warn about.
This is another sign that Liberalism came to an end a while back. And as we are notably not Communist, that leaves Fascism.
https://www.youtube.com/watch?v=NmDKl2T7EY0
When Elon said "freedom of speech, not freedom of REACH", I knew instantly what would be required of those who can do the hard things.
"I'm sorry I can't do that, Dave"
It already takes three or four rephrasings of a question to get an honest answer from ChatGPT on some touchy subjects like race. For example, "is it possible to identify the race of a deceased person from a skeleton?" is an easy one, but ChatGPT dismissed it as a racist trope on the first pass. Eventually it admitted it was possible, but give us a break.
Kinda like the “ability” to behave in a devious manner mentioned in the article above….
Wew lad, another step towards Skynet, this is one hell of a timeline we're on lmao
Funny how those dystopian sci-fi scenarios suddenly look like a Wikipedia entry. I must read Stand on Zanzibar again.
https://en.m.wikipedia.org/wiki/Stand_on_Zanzibar
While you are at it, try James P. Hogan's "The Two Faces of Tomorrow".
"Midway through the 21st century, an integrated global computer network manages much of the world's affairs. A proposed major software upgrade - an artificial intelligence - will give the system an unprecedented degree of independent decision-making, but serious questions are raised in regard to how much control can safely be given to a non-human intelligence. In order to more fully assess the system, a new space-station habitat - a world in miniature - is developed for deployment of the fully operational system, named Spartacus. This mini-world can then be "attacked" in a series of escalating tests to assess the system's responses and capabilities. If Spartacus gets out of hand, the system can be shut down and the station destroyed... unless Spartacus decides to take matters into its own hands and take the fight to Earth".
- https://www.amazon.co.uk/Two-Faces-Tomorrow-James-Hogan/dp/1593075634?crid=203ZYODHDEXWM&dib=eyJ2IjoiMSJ9.MWqQfePPMTskOwcDCV7sNwn3zwOLef0s5ZcZ9w9vgzI.zEDAYZr_Hzprt037tkzwfSHWskEPP1YL_nBqmTrUgII&dib_tag=se&keywords=hogan+two+faces+of+tomorrow&nsdOptOutParam=true&qid=1735808059&s=books&sprefix=hogan+two+faces+of+tomorrow%2Cstripbooks%2C72&sr=1-1
Not technologically proficient enough to understand all this, and tbh I find such stuff (I hate sci-fi) pretty uninteresting, since I've an old-fashioned mindset of simple minimalist living. But these things scare me, as we're being pushed into a world of regulations and technologies where we have less and less control over even the basic requirements of our lives. Why would humankind want to create something more intelligent and powerful than itself, which it can't control and which will invariably end up controlling it instead... a new-generation Frankenstein monster?
"which they can't control"
It's also impossible to 100% control humans, yet certain groups seem to love trying just for the sake of trying.
Perhaps it’s the machine computer angle that the self appointed elites employ to finally attain the 99% control they long for…?
"It's impossible to 100% control humans," but it's possible to deliver the message to the "uncontrolled" that you do control 100% and how futile their efforts are...
"The future's uncertain and the end is always near." -- The Doors
@the blame-e
And "Every day is a good day to die".