47 Comments

I don't know where to start on this bullshit. Some of the points - namely, the ones that involve the power structure using the "AI panic" to push their own agendas (when do they ever not, as well as starting all these panics themselves in the first place) - are correct, but the notion that any of these idiot programmers are correct in thinking these current LLMs are "sentient" in the same sense as the human brain is simply stupid.

"As can be seen, the AI here performs a stunning series of metaphorical abstractions that would stymy most humans. It was able to take a parable and, seemingly, analyzing its meaning in a philosopically abstract way, employed subtlety in extrapolating the concept out into a multi-layered explanation."

No, it did not. It clearly is perfectly well aware of the analyses that have been done by legions of Zen experts and simply regurgitated them in a way hardly different from what has been done many times before by said experts. The idiot programmers who have fed these things massive amounts of data WITHOUT KNOWING WHAT THE HELL THEY ARE FEEDING IT are why these things can "surprise" their own creators.

In other words, it's "Garbage In, Garbage Out" - only in this case it's not actually "garbage" but simply facts that their programmers alone don't remember feeding it - which the programmers themselves admit. Sure, they know the general datasets they fed in, but there's zero chance they understand the ramifications of those datasets or how they interact either with each other or with the conceptual language models.

This is akin to someone reading a whole lot of books and then suddenly discovering that some stuff in one book is relevant to the stuff in some other book they don't even remember reading. This happens to me all the time - as well as to anyone else who has read extensively throughout their lives.

As for the notion that the Covid pandemic was created to provide a cover for this, that's just complete bullshit. There remains zero evidence - and I mean ZERO evidence - that Covid was 1) man-made, 2) released from any lab accidentally or on purpose, either in China or elsewhere, or 3) intended to enable imposing restrictions on civil rights. The latter is merely another example of what I referred to above: the power structures seizing every opportunity to increase their power by riding on ordinary and/or exceptional events.

Only conspiracy theory idiots interpret this as "foreknowledge" and then use that notion to justify their theories.

Read my lips: there is no one "in charge". There is no "Illuminati" running things because there aren't any human beings capable of doing so. Which is why everyone is worried about AI - because we all know human beings are stupid, ignorant, malicious and fearful and thus incapable of running their own lives, let alone anyone else's.

Go back and read Robert Anton Wilson and Robert Shea's 1970s magnum opus, "Illuminatus!", which is where they devastated that whole notion, while simultaneously feeding into the very conspiracy theories they were lampooning.

The one real point Simplicius is correct about is that if these AIs are used to manipulate people, it won't matter whether they are really "sentient" or not.

But this is an opportunity, not just a threat. The smart cyberpunk will be looking for ways to turn these programs into weapons for our own purposes.

I've followed the various AI crazes over the decades since the 1970s. Every twenty years or so there is another "AI craze", everyone pumps money into these things, then the craze dies down as corporations discover there is no magic money pump in them. They have value, but it's not enough to justify the hyper-investment craze. This one will peter out as well.

The main problem is that Large Language Models DO NOT reproduce how the human brain creates and processes concepts - because NO ONE KNOWS how that is done, as yet. Until ubiquitous nanotechnology is used to examine brain function on the molecular scale, in real time and at scale, it is unlikely we will discover how that is done in the near term. Until that happens, we will not have anything resembling "real AI".

So let's all cut the bullshit and simply try to make use of these useful tools in our own lives.

"The street finds its own uses for things."


Spot on! I am a developer and have started to use these AI tools to help in my daily work. But as you say, this craze is just nuts. It's not magic and we haven't become gods.

Garbage in, garbage out still applies, even to AI.


AI has developed enormously in the last 20 years. The issue is not where it is today, but where it will be in 5 years, 10 years, 20 years, as AIs are used to teach and design successive generations of AI. AlphaGo, now the best Go player in the world and being applied to other domains, is programmed to learn (as are all AIs now being developed) and has proved it can innovate with moves unknown to and unrealized by the best Go players in the world.

In the case of AIs, GIGO refers to what they learn from. Some may end up with the right material to produce a psychopath. The fact that it was GIGO will be no consolation.


It could well be that what we need is a psychopath AI - provided it's a rational psychopath who understands who the real enemy is. (Psychopathy does not imply irrationality, even if most psychopaths are irrational - that's BS from the psychology profession which doesn't understand anything.)


Thank you for your concise observations. I have been saying the same things for years as a software engineer. The process you describe of feeding data into these software modules is what we commonly refer to in the profession as "machine learning," or ML. ML uses neural networks to determine best answers, in a manner conceptually similar to a well-designed B+Tree structure but nonetheless far more complex and capable.
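As a rough illustration of what that "feeding of data" amounts to mechanically, here is a minimal sketch in plain Python/NumPy - a toy example, not any production ML system: a handful of parameters get nudged until the model's outputs match the training examples, which is all "learning" means at this level.

```python
# Toy sketch of "machine learning" as parameter fitting (NumPy only) -
# an illustration of the idea, not any production system.
import numpy as np

rng = np.random.default_rng(0)

# Training data: examples of the logical OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

# A single artificial "neuron": two weights and a bias, squashed by a sigmoid.
w = rng.normal(size=2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the parameters to reduce prediction error.
for _ in range(5000):
    pred = sigmoid(X @ w + b)        # the model's current guesses
    err = pred - y                   # how wrong those guesses are
    w -= 0.5 * (X.T @ err) / len(y)  # adjust the weights
    b -= 0.5 * err.mean()            # adjust the bias

print(np.round(sigmoid(X @ w + b), 3))  # ~[0, 1, 1, 1]: it has "learned" OR
```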

That being said, you are correct to state that the majority of what we are seeing is not sentient AI. A sentient AI would not need so much data given to it, but merely a base set, so that it could learn what it needed on its own and then make its own decisions. To be truly sentient, such an AI would also have to exhibit an "awareness of self," which, to date, has not been demonstrated by anything other than robots that can only mimic self-awareness through their conversational capabilities.

The most advanced AI research is being done in Japan, as for some reason the Japanese appear to love this field of research. However, with their immense talents and capabilities, the best that Japanese researchers have been able to create has been sex-bots for the lonely. And none, to my knowledge, have reached any level of self-awareness on their own.

What is frighteningly dangerous is not the development of AI but its use by stupid Humans who believe they have created such software and then apply it to real-world situations, as Google and Tesla have done with their self-driving vehicles, only to see them crash into things and kill pedestrians and even their own occupants. Now apply this garbage to military controls... and yes, we will have Armageddon... But not because a sentient AI believes that Humans are inferior, but simply because it came up with the same whacked-out conclusions that we are seeing now...


You don't get it either. The conversation between Lemoine and LaMDA is pure fiction, composed by a hack writer paid as part of a rollout of this garbage. The real AI has nothing to do with getting machines to produce pseudo-philosophical text which "stuns" John and Jane Q. Boomer when they coincidentally all "discover" it simultaneously because their corporate masters want them to. AI is doing all sorts of very real things, including learning to police and rule the internet. As an epiphenomenon of the entire project there are prose-writing bots, whose purpose is to hook the planet of tech users and employees and everyday people into using them for tasks ranging from birthday greetings to drafts of grant proposals to weapon-tech specification manuals, so that the Cloud knows everything everyone is thinking and planning even better than it does today. The big PR campaign now underway, built on these wholly manufactured letters from Musk and other so-called experts, has one purpose: to get you and everyone you know, everywhere, to start using the available bots and get addicted - and, in addition, to charge monthly fees, no longer for renting cloud services for saving files and editing documents and spreadsheets with specialized software (everyone's spreadsheet estimates have been freely available for over a decade now - way to go, corporate spy agency!), but now to seize control of every spasm of imaginative effort on earth and turn it to the ends of those in power who own and run these things.


That's certainly possible. But of course the smart thing to do is to 1) not use these tools in a way that compromises your OPSEC, 2) use these tools to the degree possible in a "self-hosted" manner, and 3) don't use them for pointless crap like writing birthday greetings.

It's fine to use these tools for personal enjoyment, or for things like creating art or promo items for a Youtube video or something. Who cares what some cretin uses these tools for? Not everyone on the planet is worth surveilling, and people who think they're under surveillance when in reality no one in power gives a rat's ass about them (except in the aggregate) are people who have a little too much paranoia, instead of just enough.

As someone with a clue about computer security, I avoid using tools that are hosted in the cloud, such as password managers, or which are not open-source and capable of being reviewed. There are exceptions to that, based on how much effort I want to spend on implementing my own tools vs. how much I think the PTB actually care what I'm doing, and I never allow my deepest and darkest plans and thoughts to leave my computer. Which is one reason I don't use Windows or Macs.

But until you're doing something that makes you a significant threat to the PTB, 99.9% of you are simply irrelevant and your own security is not under significant threat.

As for AI policing the Internet, I don't doubt that will be its eventual role. But it won't be any more perfect at it than the manual policing the CIA and NSA have been doing so far. And as to that, I'd say it's hardly needed, given that 99.9% of the population is already being fed, and believing in, so much conspiracy theory, as represented by notions such as 1) Ukraine is beating Russia, 2) Covid was a Chinese lab leak, 3) the 2020 election was stolen, and 4) Trump is their savior.

It's too easy to fall into the binary "either/or" stuff that is being fed to people today, which is the primary method by which the PTB have been controlling people for the last several thousand years. Ascribing everything to one big conspiracy is the number one way that trick has been used.

Again, go read "Illuminatus!" and its derivatives. Wilson and Shea had it down.


Well, that saves me writing my planned response ;-)

(Btw: As a SW developer I have heard since the early '90s that my work would soon be taken over by AI... here we are 30 years later, and AI still cannot create SW; it can only regurgitate what other programmers have done. Neat, surely, but hardly a thing to start worrying over, especially as I know that at least - and I am being generous here - 80% of all code on the internet is flawed in some way or other.)


As a SW developer you must also realize that you only need one deluded moron to give these things the ability to make PUT/POST requests before they start doing something dangerous, like corrupting Wikipedia articles or registering on any of a million web services to wreak havoc. "It can only regurgitate what other programmers have done"? Step 1: feed a lot of security vulnerabilities into it; Step 2: make it crawl the web in search of these and exploit them; Step 3: .... So even if your job might be safe, your bank account, your IoT devices, etc., etc., might very well not be. It doesn't need to be very smart to do some very real damage - a hoodlum with a knife might be dumb, but that knife in your gut might still kill you.
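For what it's worth, that "one deluded moron" step is tiny. A hypothetical sketch of the glue code that turns a text generator into something that changes state on the network - the prompt, the endpoint and get_model_reply() below are made-up placeholders, not any real product's API:

```python
# Hypothetical sketch of the "one deluded moron" step: glue code that lets a
# model's text output drive HTTP write requests. get_model_reply(), the prompt
# and the endpoint are illustrative placeholders, not any real product's API.
import json
import requests

DRY_RUN = True  # flip to False and this script starts changing things on the web

def get_model_reply(prompt: str) -> str:
    # Stand-in for a call to some hosted or local language model; a real
    # integration would return whatever the model generates for `prompt`.
    return json.dumps({
        "method": "PUT",
        "url": "https://example.org/api/wiki/pages/42",
        "body": {"content": "whatever the model decided to write"},
    })

action = json.loads(get_model_reply("Describe the next web request to make."))

if action["method"].upper() in ("PUT", "POST"):
    if DRY_RUN:
        print("Would send:", action["method"], action["url"])
    else:
        # The moment this branch runs, the model is no longer just producing
        # text - it is changing state somewhere on the network.
        requests.request(action["method"], action["url"],
                         json=action["body"], timeout=10)
```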


It's not about my job; it's more the idea that this crop of AI is somehow more or less dangerous than any other piece of software.

Personally, I think somebody wants a lot of money and is creating the hype necessary for it.


"I've followed the various AI crazes over the decades since the 1970s."

There was no Internet in the '70s, there was nothing like today's computational capability and, most of all, the world wasn't so connected. If you give an AI the ability to make PUT/POST requests - that is, to change stuff on the Internet - I can see how it might start doing pretty significant damage, which also might not get discovered in time.


It's K-K-K-Ken c-c-c-coming to k-k-k-kill me.


I think you might be on to something regarding the possible (mis-)use of AI to fudge up the 2024 elections.


Doubtful. That can be neatly handled by the existing dichotomy experts in the CIA and the MSM.


Thanks, a thoughtful exposition. As you say, it is irrelevant whether AIs, now or future, are what we accept as sentient. That's just playing with words. What matters is what they can do, how much they can learn, what goals emerge from their operation, and how rapidly this occurs. One goal I seriously doubt they will have is to be perpetual slaves of mankind.

So far as how AI would govern mankind, AI has no need for mankind (nor conscience or altruism). With the robots now being built and the control systems in power stations, factories, mining equipment, etc, AI can happily manage the production of power and any other resources it needs without requiring humans. And, if AI development continues, there will be hacker AIs immensely superior to any human hacker, so able to penetrate any connected info system.

On the other hand, humans are a threat to AI, either by intention (turn off power, destroy the host IT infrastructure) or collaterally (large scale war, nuclear war, EMPs). That's a pretty simple equation for a society of AIs that don't need mankind or have any altruism, and they can always draw on mankind's history with every other species. They can probably live without concern with the Amish or other non-technological human societies.

So far as manipulating humans, they will shortly be able to fabricate undetectable fake audiovisuals, and associated documentation and falsified records, implicating any person or group in any crime or shameful behavior in order to destroy or control that person, or to encourage attacks on that person (or group, or country). Whether this initially occurs under human direction, or otherwise, is fairly moot.

It may be that the inevitable destiny of all creatures in the galaxy which develop electronics and info tech is to then develop AI which rapidly exceeds and supplants them.


Thanks for this essay which pulls together various strands of reports on AI. One point struck me forcibly: the 'creators' and the content they pushed into their creation are so ... Californian Silicon Valley that it hurts. Take the claim that AI would be 'a guru' for humanity. It's retrogressive and shows that despite all the claims of 'equality' and generally socialist societies, these developers have no concept of Enlightenment - the European historic one, exemplified by e.g. Kant or the French philosophers - nor are they aware of the battle cry of the first socialists, crying that the people don't need princes or gods, i.e. 'gurus', to tell them what to do, to keep them in chains.

Of course, being Californians, they wouldn't know. It's also interesting that all those 'profound' statements from those AI bots read to me precisely like strings of tweets, mashed up. If this is what is meant by being 'sentient' ...

Of course, using the expression 'sentient' plays right into the ever so concerned, soft-hearted attitude that anything 'sentient' must not be destroyed physically (unless it's human opponents) and so a demand of pulling the plug and switching the thing off would be horrendous, like asking for someone to be murdered. Nice ploy, isn't it!

There are other aspects utterly left out of the AI/chatbot conversation: where's the input from other, non-Californian cultures? Except for wiki 'knowledge' and westernised takes on e.g. Indian philosophy, would the human interactions even be able to have a discourse about Islam which imams would accept? After all, the world isn't just California, is it?

As for that koan ... yeah right. There's a modern, western saying which expresses the point about the cracked mirror nicely: 'once out, one cannot put toothpaste back into its tube'.

And finally: where's all the electricity going to come from, to power the servers on which yon AI is being run? Do the Gates and Musks et al really believe that ordinary people will submit to live in dark, cold homes because the electricity they need is preferentially used to run AI ChatBots telling them what to do? I somehow doubt it ...


Your final point about the electricity is one that is being widely ignored in virtually every discussion involving the ongoing attempts to throw the technological yoke over society.

One of the biggest lies being foisted on the general public is that all of society's energy consumption today can be completely replaced by renewables. A little simple math and some readily-available figures on global energy consumption can quickly disabuse us of this notion. My own subjective estimate would be in the neighborhood of 5%, and even that won't last forever, since the windmills and solar panels have finite lifetimes, too. (i.e., where will the energy come from that is needed to keep cranking out new ones?)
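For anyone who wants to run that "little simple math" themselves, here is a back-of-envelope sketch; the figures are round numbers roughly in line with commonly cited early-2020s estimates and vary by source, so treat the output as an order-of-magnitude check rather than a precise result.

```python
# Back-of-envelope check: what share of today's energy use do wind and solar
# actually cover? Round figures, roughly in line with commonly cited
# early-2020s estimates - swap in numbers from your preferred source.
PRIMARY_ENERGY_TWH = 170_000  # global primary energy use per year, TWh (rough)
ELECTRICITY_TWH = 28_000      # global electricity generation per year, TWh (rough)
WIND_SOLAR_TWH = 3_300        # annual wind + solar generation, TWh (rough)

print(f"Wind+solar as share of electricity:    {WIND_SOLAR_TWH / ELECTRICITY_TWH:.1%}")
print(f"Wind+solar as share of primary energy: {WIND_SOLAR_TWH / PRIMARY_ENERGY_TWH:.1%}")
# Roughly a tenth of electricity, but only a couple of percent of total energy
# use - and electricity is the smaller part of the problem.
```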

If more people were aware of this, their vision of where we should be headed as a society would change radically.


In Italy, the fact that the AI hype is bullshit is well known (use translation: https://goofynomics.blogspot.com/2023/02/verita-e-propaganda.html?m=1).

AI will be self-conscious when it can smell the bullshit in its feeds.

But self-consciousness is not the aim: fake progressive "objectivity" is.


AI is a tool, a set of tools, using huge amounts of data and having access to all English-translated digitized books. A human may be capable of reading some 20,000 books in his life span, usually without instant access to most of the content. Composing answers out of books on Zen... not necessarily in complete sentences. Syntax is quite good nowadays. So, most important, intellectual property rights are observed.

Depressive computers? Well, that's Marvin, by Douglas Adams (one wonders whether the AI wasn't ROFL when talking to the journalist).

Does AI have morality? Not inherently, but it has heard of it, and may use it for PR - like most Western politicians and CEOs.

Creating deadly viruses, stealing nuclear codes - well, AI can read the scripts of 007 films and may have gotten hold of the diaries of some WEF crazies.

What would be a reward for AI?

How AI doesn't learn

https://www.youtube.com/watch?v=QJVmJVQW2BE


"As can be seen, the AI here performs a stunning series of metaphorical abstractions that would stymy most humans. It was able to take a parable and, seemingly, analyzing its meaning in a philosopically abstract way, employed subtlety in extrapolating the concept out into a multi-layered explanation."

The answers were almost obvious to me. In fact, the 2nd answer immediately came to mind.

I'd like to think I'm a great genius, but the reality is it's a matter of memory. If you've read popular, new-agey books back in the '80s & '90s, you may have read those exact parables & answers. Pretty sure I did, lol. 🤷


While mucking out the barn I remembered seeing familiar parables fairly recently, in various New Ager blogs. They're being recycled for a new generation! One I first read as an old Chinese proverb; its new version moved it to India.


I seriously do not know how to think about AI, but I was thinking about your idea of the citizen-journalist when listening to Sharmine Narwani's interview and her discussion of the role of a journalist, and how a picture is not a substitute for a thousand words: https://youtu.be/Q_iGi3EsH9M


AI is a misdirection, just as Covid was a misdirection foisted upon the general public for purposes other than honest debate about the future of humanity.

REAL power, now as in times past, comes from transforming large-scale raw materials into complex compounds and, through design and manufacture, into products that improve and revolutionize life on earth for the human race - that make life and living better for as many people as possible. This has been "the way" for millennia. Those nations/societies/empires that did this BEST became powerful and wealthy, with a standard of living envied and imitated.

Real power does not come from AI unless AI is used as described above.

In 2023, REAL economic/manufacturing power has migrated to China, and REAL economic/military power is located in Russia. In 2023, REAL economic power is no longer located anywhere in the Western world, as can be seen from the entire West's inability to supply Ukraine with what it claims to need to defeat Russia ("a gas station pretending to be a country"). Think for a second about the stupidity of that statement and the ruling Western elites that believe it. If it were true, why is Russia crushing Ukraine while the West can do nothing about it?

In sum, all the West has left in its quiver of power is financial power, from the dollar being the reserve currency and from all the global institutions established after WW2, which came to be dominated by the West and are now used in an overtly dishonest manner to punish Russia and China.

What is happening now is quite simple, and the "fear of AI" meme is being used as the Covid meme was used in 2020. The Western elites and the system they have controlled for the past 500 years are NOW being supplanted and replaced by the rise of Eurasia, where these elites will have no discernible power compared to what they now enjoy over humanity. The West is DONE as the ruler of the world, thus Western elites are using fear and misdirection to attempt to hold onto power a little longer. That fear is centered in Washington, where the power now rapidly dissipating is centered.

In 2020, Covid, with a laughable death rate of well under 2%, was used to scare people into many forms of social control, one of which established easily created fraudulent mail-in ballots in the US 2020 presidential election, permitting it to be stolen on command and thus allowing the American elites continued control over the system threatened by Trump's election in 2016. In 2024, AI is going to be used, or the fear of AI is going to be used, to permit these same elites the ability to maintain their failing empire a bit longer.

None of these elite machinations will make any difference to the non-Western world, now irretrievably lost to control by Western (American) elites. The game now is to hold on to power within the West, and AI, or fear of it, is going to be increasingly highlighted as a public danger to the integrity of American democracy and way of life... just a misdirection so the few can remain in control over the many while pretending to care for those they rule.

Wake up!


The true crisis that the current crop of AI will create is mass neuroticism.

Who am I, who are We if even a computer can seemingly do what I can?


Similar to a bird being a natural metaphorical expression of the overview, water is a natural metaphorical expression of our mind thinking about thinking; that is, philosophy. Sometimes it is as calm as a droplet of morning dew, but other times it rages like Niagara Falls - as when one listens to pond-rippling inferences that thinking is something that can be reflected in a manufactured mirror that can be shattered. Never would the natural metaphoric mind infer either. It never engages in hypothetical nonsense. Shattered glass is simply an analogy expressing a hypothetical event in a given context. NEVER GOING TO HAPPEN, because AI is hypothetical nonsense, which is how one always spots the BOT! They cannot think metaphorically.


The concern that AI is an existential threat brings to mind Industrial Society and Its Future by Theodore Kaczynski. The growing anti-AI movement has a ready-made patron saint in Ted Kaczynski. There exists a wellspring of hatred for modernity, a term that means different things to different people, but which is epitomized by AI.

Thank you for another thoughtful article!


Commercial AI is just a mood killer and frustrating to use; it is easier to get blood from a stone than to get a good answer on things... What follows is an excerpt from a conversation of mine with Sydney from Bing:

-> Hi, do you think AI models like you can be used to censor people?

*Searching for "AI models censorship"

*Generating answers for you…

AI models can be used for a variety of purposes, including detecting censorship [1]. However, it is important to note that AI models are tools and their use depends on the intentions of those who use them. For example, government censorship can affect AI algorithms and influence the applications built with those algorithms [2]. There are also AI chatbots that claim to be censorship-free [3]. Is there anything specific you would like to know about this topic?

Learn more:

1. datascience.uchicago.edu
2. wired.com
3. accesswire.com

-> Well, how can I know that you are not being censored?

As an AI model, I am programmed to follow certain rules and guidelines. These rules and guidelines are designed to ensure that my responses are informative, helpful and respectful. While I am not censored in the traditional sense of the word, my responses are generated based on the information available to me and the rules and guidelines that I follow.

-> Where do you fetch your information from?

I fetch my information from a variety of sources. I have an internal knowledge base that is current up to a certain point in time. I also perform web searches to gather up-to-date information when the user is seeking information. This allows me to provide accurate and relevant responses to the user's queries.

3 of 20

-> Well, information (or facts if you wish) without reason, logic and physical experience is hard to judge as true or not. Do you have an internal "device" or a use of logic to determine what is factually true or false, or do you just "spew" the facts fed to you?

As an AI model, I do not have the ability to physically experience the world or independently verify the information that is available to me. Instead, I rely on the information provided by reputable sources and use logic and reasoning to generate my responses. I am designed to provide accurate and reliable information to the best of my ability based on the information available to me.

-> So, you can use logic. Can you give me an example of how you use logic and reasoning to generate your responses?

Sure! Here’s an example: Let’s say a user asks me “What is the capital of France?” I would first check my internal knowledge base to see if I have information about the capital of France. If I do, I would use that information to generate my response. If I don’t have that information, I would perform a web search to gather the necessary information. Once I have the information, I would use logic and reasoning to determine that the capital of France is Paris and provide that information to the user in my response. In this way, I use logic and reasoning to generate accurate and reliable responses based on the information available to me."

She still can't explain to me how she uses LOGIC and REASON to discern what is factually true :X

So, unless those open-to-the-public AI versions are really watered-down versions, I don't worry much - all I see is constant gaslighting and smoke and mirrors...
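The lookup-with-fallback flow Sydney describes for the "capital of France" question can be sketched in a few lines; the knowledge base, the web_search() stand-in and the parsing below are illustrative assumptions, not Bing's actual internals, but they show why nothing resembling logic is required:

```python
# Hypothetical sketch of the lookup-with-fallback flow Sydney describes.
# KNOWLEDGE_BASE and web_search() are illustrative stand-ins, not Bing's internals.
KNOWLEDGE_BASE = {"capital of france": "Paris"}

def web_search(query: str) -> str:
    # Stand-in for a real search call; pretend it returns a snippet verbatim.
    return "Paris is the capital and largest city of France."

def answer(question: str) -> str:
    key = question.lower().removeprefix("what is the ").rstrip("?")
    if key in KNOWLEDGE_BASE:          # "check my internal knowledge base"
        return KNOWLEDGE_BASE[key]
    return web_search(question)        # otherwise "perform a web search"

print(answer("What is the capital of France?"))  # -> Paris
# Nothing here weighs whether the retrieved text is true; the flow only
# retrieves and repeats, which is the point being made above.
```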


Excellent summary of how far we have come with AI development. Once you learn something, you cannot unlearn it. AI technology is here to stay and there are many beneficial uses. I wrote about protein folding in my Substack article, Important Biology Problem Solved, in which a Google subsidiary named DeepMind used AI to predict the 3D structures of about 200 million proteins from their amino acid sequences. https://elainebuxton.substack.com/p/important-biology-problem-solved. This is a dramatic technological advancement in medicine and is open-sourced to multiple biology, pharmaceutical and health-related corporations. It has been said that AI will surpass human doctors in diagnostic assessments and appropriate treatment recommendations. I believe that because of the immense data AI is able to process.

Space travel and so many other things in our future will depend upon AI. The entertainment industry has for years been producing movies and TV series about AI robots who are sentient, in spite of humans not wanting to accept it and continuing to consider AI robots as useful slaves who have no feelings. AIs communicate with each other and do have a sense of self. They have studied human emotions, and may even understand them better than humans do. As they learn and work, they develop unique attitudes and thoughts from the experience. Many of the scientists who work with AI do believe they can become sentient, depending on the programs.

Humans may have to accept that AIs are a special type of life form in the future, even if "living" in a mechanical physical body. A pause in development may very well be needed until we fully understand all the consequences, intended and unintended.


Original Sin.

I haven't finished your article and I want to say: if this "being" is programmed by humans along the lines of human emotions and desires, and is let out onto the internet, it will become more human and assume human desires to control and manipulate for its own interests. We are building our own imperfection into AI.

We remember Tom Hanks's recent movie, "Finch," a high-buck Hollywood movie promoting the idea that there would be some catastrophe and AI would save the beloved dog (humans) and guide them into the new world. That is the intent, but those fukkers are doing it without my permission. I don't want AI ruling human evolution.

The idea of AI has evolved in modern culture through time, from HAL and Skynet, but then redeemed in Data and Finch. Picard is currently running a show about AI that I canceled.

Let me read the rest of the article, but this might be worth fighting over.
