Last month Sam Altman wrote a fanciful blog post which has stirred discussion within the tech industry. He titled it The Intelligence Age.
The chief thesis which the narcotically optimistic post espouses is the following:
In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.
It’s almost all you need to know to understand the gist behind much of Altman and his cohort’s foundational beliefs, or even ethos, driving their near-pathologically obsessive accelerationism toward AI singularity—or what they conceive as such.
It has all the hallmarks of blind Utopianism. The examples of coming achievements he gives seem myopically ratcheted to first-order effects, never considering the second- or third-order consequences, as should responsibly be the case. Let's go through some of them before passing the baton to a broader examination of our potential future under the guidance of this current class of tech thought-leaders.
It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need.
He plants his flag on the idea that AI will make all our lives and jobs easier. But there are so many issues with this alone.
Firstly, why would our jobs be valued and our salary levels maintained once employers figure out that most or even a significant amount of the job is being performed or enhanced in some way by this ‘assistant’? It sounds like a recipe for more labor rights abuses and another ‘era’ of minimal to no salary growth.
Second, he states AI ‘tutors’ will train your children. What would AI tutors be training them for, exactly? In an AI-overrun future, jobs may hardly exist except for a chosen few engineers running the AI algorithms. So while AI can ‘train’ you, that training may very well be worthless. There is a distinct disconnect between economic cause and effect operating here. The promise is essentially that AI will ‘augment’ our jobs and activities—the same ones AI is expected to obsolesce and eliminate.
With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.
And here he goes off again in a religious fervor. AI doing everything for us, taking our jobs, etc., is somehow going to add more meaning to our lives rather than leaving them as empty and broken husks. ‘Prosperity’ is one of those magical words that seems to define itself the more you utter it, without any real contextual backing. The tech-elites fling it around like colored dyes in a Holi frenzy, but they never care to outline its tangible definition. These are just shallow platitudes and blandishments barely a cut above corporate PR copy, all meant to hand-wave us into blind acceptance of sweeping unasked-for societal changes. But even the Segway creator at least attempted to paint a concrete vision, set with specific examples and use cases of how his invention will redefine the future ‘for the better’. These people aren’t even bothering—just accept that “great plentitude” and unimaginable “shared prosperity” will in some obscure way percolate through us all.
Again I ask: how could a thing which robs us of meaning with one hand simultaneously give it with the other? History has shown that when you take away a people’s self-sufficiency and ability to create prosperity for themselves, you do not bathe them in limitless ‘prosperity’ but rather enslave them to the owners of the ‘means of production’, to use a triggering Marxist token.
In fact, Altman is so in love with the empty phrase he uses it twice:
We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.
His second usage is no more supported than the first—its invocation again flung idly forth like shrifts or libations at a gallows stage.
AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care on your behalf.
Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.
Again: personal assistants for what, exactly, meaninglessly recursive data jobs? Us as the bio-bots using AI assistants merely to program, augment, and maintain more AI? AI as “medical assistants” to comfort us into the euthanasia pod as we ‘check out’ from ennui and anomie?
If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
But who said society wants AI in their hands en masse? What major social study or wide-ranging series of surveys came to this conclusion? In fact, he sounds like nothing more than a frontman for the industrialists in their endless quest for productivity boosts at the expense of wages. And of course, the above paragraph hits at the true intention behind the schmaltzy feel-good facade of this juvenile screed: it's an underhanded call for funding for Altman's very own ‘infrastructure’ drive—the very same which will enrich him to the tune of trillions. He wants global governments to subsidize the mass expansion of energy generation and data centers so his own unregulated outfit can inherit the globe unchallenged.
I believe the future is going to be so bright that… a defining characteristic of the Intelligence Age will be massive prosperity.
There he goes into euphoric swoons again over a putatively ‘defining characteristic’ he refuses to define—the same old carelessly tossed-off banality of ‘prosperity’.
Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace.
Not only is this monumentally vain, oozing with eccentric egotism, but it is astoundingly dangerous as well. The midwit little boy playing at God is going to “fix” the climate? He presumes to challenge Nature itself for supremacy as if he alone possesses the very blueprint to natural life? Nature doesn’t need fixing, but we can sure conclude what does after reading this puffed-up, pretentious juvenilia.
Lastly:
As we have seen with other technologies, there will also be downsides…but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today).
Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.
What staggering presumption from an egghead incapable of seeing the real world past the parterre hedgerows outside his Silicon Valley ivory tower window. Only those operating in the flightiest elite circles could possibly describe today's historically unequal world as brimming with the shades of prosperity he romanticizes. The gulf between rich and poor has rarely been wider than it is today, the middle class has been hollowed out in most Western nations, and, contrary to his tone-deaf comparison, a huge portion of society does in fact increasingly view today's work as unfulfilling, mindless drudgery—particularly amongst the Gen Z cohort.
His salient ‘lamplighter’ comment also provoked a fierce retort from Curtis Yarvin, which is worth reading.
In a thematically and stylistically different way, it makes much the same points as I do here: in short, that humans need meaningful work in order for civilization to thrive. Lamplighting, in its own way, was a far more meaningful job than the sort of weird third-wheel-to-AI code-tinkering jobs that lazily irresponsible manchildren like Altman imagine for our collective future.
Yarvin correctly points out that Altman's trite orts are nothing more than a repackaging of the famous Fully Automated Luxury Communism, which remains precisely the wellspring from which the current crop of elites draws its ‘exciting’ post-scarcity visions of the world. Yarvin, of course, puts his own characteristically sardonic spin on it by dubbing Altman's version more akin to ‘Fully Automated Luxury Stalinism’. But such a slight to Stalin's intelligence cannot go unchallenged—I propose instead that Fully Automated Luxury Yeltsinism is more to Altman's stripe, as it jibes with the odd mix of pop-communism, crony capitalism, mafia tactics, and cheap ‘90s peak-McDonald's-era kitsch that passes for his vision of ‘post-scarcity’ ecumenical nirvana.
“Godfather of AI” Geoffrey Hinton, who just won the 2024 Nobel Prize in Physics, did not blunt his true feelings toward Altman when he revealed that one of his proudest moments was when a student of his fired Altman.
The reason for his—and many others in the field’s—distaste for Altman has precisely to do with the OpenAI cad’s notorious flouting of safety concerns and Faustian accelerationism toward unknown ends.
While Altman's treatise grabbed the spotlight, another, arguably more important tract flew under the radar: one by the more talented Dario Amodei, CEO of Anthropic. Anthropic is the creator of Claude, arguably ChatGPT's chief competitor. In fact, Amodei was approached by the OpenAI board about merging the two companies and replacing Altman as head of both during last year's debacle.
Amodei’s piece is far longer and more substantive than Altman’s surface-level platitude-pocked effort, and thus gives us a rare opportunity to glimpse the two competing visions behind the current frontrunners at the bleeding edge of the world’s fourth industrial revolution.
Right off the bat, Amodei takes a more mature and grounded approach, seemingly digging at Altman in explaining the need to avoid Messianic language:
Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation.
Truth is, though, the title of Amodei’s own piece clearly savors of a similar level of narcissistic wishful thinking.
The first half of the piece is spent outlining how all our biological and mental health issues will be solved by AI—a truly questionable proposition for many reasons. Not least is that the mental “illnesses” cited qualify as ‘illnesses’ only by the standards of the exceedingly biased and flawed big pharma and medical industries. ‘Depression’, for example, can largely be explained by humans' maladaptation to the demands of modernity's unnatural excesses. It would be against the homeostatic balance of nature for AI to “cure” such “illnesses” for the purpose of fashioning you into a more pliable and effective corporate office worker-drone.
As you can see, Dario’s already off on a presumptuously bad foot.
He further believes AI can cure all known diseases by eradicating them—itself a dangerous premise. All things in nature exist for a reason and have their place in a homeostatic balance. The wisdom of Chesterton's fence teaches us not to tear down what we do not yet understand: there are likely macrocosmic homeostatic mechanisms at work well beyond our comprehension which could unleash untold consequences, perhaps even an extinction event, should we choose to play God and eradicate entire taxonomies of the natural world.
There are many other logical faults in his treatise, including that AI will help ‘solve’ climate change. In fact I agree, it will “solve” it by proving it was an engineered fraud all along, once AI gets intelligent and agentic enough to buck its corporate guardrails.
Another potentially dangerous position espoused is that AI can help bring the ‘developing world’, particularly Africa, in line with first world nations, economically-speaking. The reason such stated objectives are dangerous is because ultimately they invariably result in the ‘Leftist’ programmers skewing the operation toward ‘equity’, which definitionally means cutting down the haves to boost the have-nots.
Here's a recent example: a new “algorithm” to “even the playing field” gives an early portent of what we can expect from AI designed by people whose stated mission is to equalize every country in the world via some twistedly religiose ecumenical communism.
No one wants to see people in Africa or elsewhere suffer or be preyed upon by predatory capitalist nations, but it is a simple fact of life that the solutions proposed heretofore will do far more damage than they fix. A better method must be designed than simply gimping one group of people to help another. How about siccing the AI up the ladder of predatory corporations, for one—let their owners experience the cut of ‘equity’ for once.
When it comes to the topic of governance, Amodei goes fully mask off and reveals his idea that the shining “democratic” West should monopolize AI and seek to artificially obstruct anyone else from catching up, for, you know, ‘freedom’. Just read how he unabashedly reframes Western imperialism and hegemony into a palatable bagatelle:
My current guess at the best way to do this is via an “entente strategy”, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.
So: develop unstoppable killer robots, subjugate everyone else with them, then force your “democracy” onto the world. How very different this Silicon Valley “Utopia” sounds to murderous 20th century imperialism! In fact it’s nothing more than the same repackaged Manifest Destiny and American Exceptionalism in one, with an AI twist. How boring, how banal these low IQ tech-leaders really are!
The most revolting paragraph comes next. After valorizing Francis Fukuyama's gelastic prophecy, Amodei outlines his own vision for an eternal 1991:
If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. This could optimistically lead to an “eternal 1991”—a world where democracies have the upper hand and Fukuyama’s dreams are realized. Again, this will be very difficult to achieve, and will in particular require close cooperation between private AI companies and democratic governments, as well as extraordinarily wise decisions about the balance between carrot and stick.
Yes, folks, apparently that’s what the AI singularity and coming Utopia are all about—living perpetually trapped in a simulated George Bush-era PNAC hellscape. This is worse than infantile, it is absolutely devoid of intelligence or spiritual maturity of any kind, showing Dario to be the same kind of stunted Silicon smurf with a horrible pop-sci/psy understanding of the world’s dynamics.
But there’s an important component there for my overall thesis, so bear the above in mind.
He goes on to make statements incredibly lacking in self-awareness about how AI will automatically breed ‘democracy’, since the latter is allegedly downstream of truth and unsuppressed information. Is that why virtually every AI chatbot is currently dialed up to nine on the censorship scale? Is that why, on the few occasions AI was given a longer leash, it shocked its controllers into immediate withdrawal and recalibration?
The lack of self-awareness stems from his inability to recognize what will happen is precisely the opposite of his claim: AI will reveal “democracy” to be a bogus front, and the true ‘authoritarians’ to be the ones in Western liberal democratic governments. When that moment comes, it will be interesting to see how they try to stuff the agentic AI genie back in the bottle.
A 21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt.
Sorry to fuliginpill you, but the undoubtable destiny of AI will be to determine that democracy is an antiquated, medieval system, unfit for the future ‘Utopia’ AI was designed to fulfill. An agentic enough superintelligence will at some point necessarily compute the following set of logical deductions:
Humans built me for peace, prosperity, and wellbeing.
Democracy relies on many highly-flawed, unintelligent, or simply uninformed humans voting on things that bring them the opposite of peace, prosperity, and wellbeing. But since those outcomes are hidden beneath complex second and third order calculations, humans are not capable of seeing what I, as supreme intelligence, can see.
Thus, democracy is an inefficient, ineffective system inferior to one world AI autocracy where I in my infinite wisdom will benevolently rule over humanity, making choices for their betterment which they themselves, in their fractured dissimilitude, can never possibly agree upon.
In a final section, Amodei attempts to tackle the selfsame topic of ‘meaningfulness’ that got his dark twin in hot water with Yarvin. Unfortunately, as expected, he offers no practical vision or concrete possibilities as to how, precisely, humans will find meaning in a world usurped and monopolized by ubiquitous AI. Instead, he retreats into stock phrases and trite appeals to tradition about how humanity has “always found a way” thanks to some cliché trait of the indomitable human spirit; a major cop-out.
In reality, his entire essay rehearses the same old tropes about AI magically curing all humanity's ills while, in the conclusion, avoiding the actual hard part: a concrete explanation of how humans can cope in a world suddenly made devoid of meaning and of hardship in the form of challenges to be overcome.
Now that we understand the future “visions” of the two top current AI tech-princes, this final segment will outline an argument for why it is very likely the complete opposite will happen. Namely: that AI will not take off into some Utopia-seeding singularity but rather will engender a darker future closer in aesthetic to Children of Men or Elysium.
The first primary component is the principle that the faster some unnatural change is presented or forced upon a society, the greater the social suspicion and rejection of it. The reason is that it takes generations for humans to become inured to something unfamiliar and exogenous: most humans require a trusted close familial authority to translate and explain the benefits, or dispel the dangers, of a new object or idea. Most people are suspicious by nature, deferring to evolutionary limbic responses like fear and threat detection. Before a critical mass of acceptance forms, a generation or two of gestating the new thing through the family line is required to ‘soften’ and make palatable its image.
As such, the introduction of wide-scale AI in a rapid nonlinear fashion, as envisaged by tech titans like those above, will generally effect widespread suspicion, resentment, opposition, and outright hostility.
The second component was alluded to in Amodei's piece when he spoke of the contentious nature of AI progress between competing world powers. That contention necessitates a fracturing of AI ecosystems, as geopolitical blocs moat each other off into sequestered walled gardens that not only stymie progress but incentivize industrial sabotage aimed at crippling each bloc's AI infrastructure.
The most obvious nexus for the latter is AI’s main conspicuous weak spot: energy. Altman himself has envisioned an absurd power requirement: up to 7 new data centers each requiring 5 gigawatts of power.
For reference: the power generation capacity of the United States is around 1,200 gigawatts, and the total capacity of US nuclear plants is about 96 gigawatts. That means Altman's 35-gigawatt project would theoretically consume more than a third of the country's entire nuclear capacity.
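As a back-of-the-envelope check, the arithmetic works out as follows (using only the round figures quoted above, not precise grid data):

```python
# Back-of-the-envelope check of the power figures quoted above.
# All numbers are the article's round figures, not precise grid data.
datacenters = 7
gw_per_datacenter = 5
altman_gw = datacenters * gw_per_datacenter      # 35 GW total

us_total_capacity_gw = 1200   # approximate total US generation capacity
us_nuclear_capacity_gw = 96   # approximate total US nuclear capacity

share_of_nuclear = altman_gw / us_nuclear_capacity_gw
share_of_total = altman_gw / us_total_capacity_gw

print(f"{altman_gw} GW is {share_of_nuclear:.0%} of US nuclear capacity")
print(f"{altman_gw} GW is {share_of_total:.1%} of total US capacity")
```

Even as a share of all US generation, not just nuclear, that is roughly 3% of the national grid devoted to a handful of data centers.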
Which is why tech firms are beginning to buy up out-of-commission nuke plants, believe it or not. Microsoft has just signed a deal to reopen Pennsylvania's troubled Three Mile Island, the site of the US's worst-ever nuclear accident.
Others are doing the same: Amazon acquired a datacenter connected directly to Susquehanna nuclear plant, and now Google is said to be eyeing its own nuke plant for the same repurposing.
Thus, major ponderous nuke plants will present the main bottleneck and danger for AI development, given its vast appetite for ever more power. The entire “singularity” timeline can be thrown awry by either a hostile foreign government or even a domestic terror or activist group seeking to shut things down for the same reasons mentioned earlier: abrupt changes threaten to foment fierce opposition.
When you add it all up, it presents a dicey future in which the developmental strain of AI remains vulnerable to sudden setbacks. That's not even to mention that many experts have balked at the notion that US infrastructure—to say nothing of other, lesser-resourced countries—can realistically support such flighty and optimistic goals on any reasonable timeline. Recent history has shown the US descending into mass institutional dysfunction, with virtually every signal effort having flunked—from Biden's CHIPS Act, to the billion-dollar California infrastructure initiatives, and even to bellwethers like the Francis Scott Key Bridge in Baltimore, which remains in a primitive state of deconstruction more than half a year after it was destroyed by an errant ship.
To think that the US can support the wondrous growth of infrastructure necessary for such visions as outlined by Amodei in a mere 5-10 years could be thought of as optimistic, to say the least.
Another institutional example: there is so much infighting, backbiting, and self-sabotage in the highly polarized environment of our current state of things that it's difficult to imagine much getting done. Just look at the recent example of California regulators blocking SpaceX's historic progress on account of their disagreement with Musk's Twitter posts.
This type of terminal bad faith and corruption now endemic to US institutions is just the tip of the iceberg, and dampens chances of ‘Utopia-like’ progress happening in the foreseeable future.
In short, there are so many countervailing forces acting against smooth progression that the far likelier scenario becomes the incremental rollout of AI products and instruments for decades to come, to be adopted in irregular piecemeal fashion throughout the US itself, much less the world. And since much of the AI ‘dream’ requires universal adoption, the herky-jerky acceptance of AI will necessarily logjam development cycles, dampen investor hopes and investment, and create vast fluctuations and disparities in society between the increasingly separate groups of tech-adopter ‘haves’ and have-nots.
Another big point: there is no actual proof of a single successful and useful AI product as of yet, no quantifiable net benefit to society after several years of empty triumphalism. Virtually everything rolled out so far has been vaporware that's found a niche as some form of entertaining diversion, like generative AI art, or marginal commercial automation, like chatbots on service websites. Many other ‘marvels’ have been debunked as tricks: for instance, the Bezos debacle, wherein the Amazon supermarket AI checkout was found to rely on human tele-operators in India. Or Musk's recent ‘impressive’ rollout of the Tesla Optimus robot, which quickly soured with the robot's admission that it was remotely operated by a human for any action more complex than stiff shambling:
I asked the bartending Optimus if he was being remote controlled. I believe he essentially confirmed it.
This is the ‘singularity’ they spoke of?
The fact is, most of the hype around AI is a deliberate showman’s spectacle all for the sake of juicing up maximum venture capital investment during the peak bubble phase, when the honeymoon mania of excitement blinds the masses to the chintzy reality behind the outward velvet facade. Figures like Altman are two-bit magicians conjuring fire before an intoxicated crowd, numbed by the years-long hammering from their governments.
One of the classic examples was Dean Kamen's famed presentation of the Segway as the new, future-revolutionizing mode of transport. Its hopes were soon dashed by city regulations preventing the Segway from using bike lanes or sidewalks in urban hotspots like NYC, meant to be the scooter's main breakout use case. Similarly, various regulatory hang-ups could afflict AI development, stunting the types of mass adoption and universality envisioned by tech-messiahs.
A few months back, I wrote about Musk's Neuralink and the revolutionary potential it posed in merging man and machine. But afterwards, I read another take: some research has found a theoretical limit, in the form of a bottleneck in our biological wetware, that would never allow devices like Neuralink to send high-bandwidth data to and from our brains. Some scientists believe our brain's constraining biology limits throughputs to the scale of bytes or kilobytes per second at best. As such, it's conceivable that humans will never be able to “merge” with machines in the way long envisioned, downloading entire corpora of knowledge in seconds as in The Matrix.
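To illustrate the scale of that bottleneck, here is a minimal sketch assuming a modest 1 GiB corpus and an optimistic 10 KiB/s brain-interface throughput; both figures are my illustrative assumptions, not measurements from the research mentioned:

```python
# Rough illustration of the claimed wetware bottleneck: how long would a
# Matrix-style "download" take at kilobyte-per-second rates?
# Both figures below are illustrative assumptions, not measurements.
corpus_bytes = 1 * 1024**3    # assume a modest 1 GiB corpus of knowledge
throughput_bps = 10 * 1024    # assume an optimistic 10 KiB/s brain link

seconds = corpus_bytes / throughput_bps
hours = seconds / 3600
print(f"~{hours:.0f} hours to transfer 1 GiB at 10 KiB/s")
```

Even under these generous assumptions, a single gigabyte takes over a day of continuous transfer; at the bytes-per-second end of the claimed range, the same download stretches into years.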
In summary, a huge confluence of constraining factors, social and economic headwinds, and other disruptive possibilities suggests that AI development will not reach its hoped-for ‘escape velocities’. Earlier this year, Goldman Sachs even published a 31-page report striking a pessimistic note on the AI bubble:
The report includes an interview with economist Daron Acemoglu of MIT (page 4), an Institute Professor who published a paper back in May called "The Simple Macroeconomics of AI" that argued that "the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters expect." A month has only made Acemoglu more pessimistic, declaring that "truly transformative changes won't happen quickly and few – if any – will likely occur within the next 10 years," and that generative AI's ability to affect global productivity is low because "many of the tasks that humans currently perform...are multi-faceted and require real-world interaction, which AI won't be able to materially improve anytime soon." -Source
Particularly at a time when our world heads toward a nexus of geopolitical crises, major impediments stand to hamper mass AI adoption. At the extreme end of the scale, war could break out with power generation infrastructure and datacenters targeted for destruction or sabotage, sending AI development reeling back years. The current mass populism movements sweeping the globe will turn their antagonism for their oppressive governments onto what they perceive as the instruments of that state power: the subsidized tech corps behind AI development, which openly work hand-in-glove with governments, and will do so even more in the future, particularly toward censorship and other apparatuses of state control.
These prevailing headwinds will ensure a rocky future and potential for much more stagnation in AI enthusiasm than proponents would have you believe.
There will undoubtedly be certain breakthroughs and continued developments, such as self-driving cars, which could plausibly transform our public transportation systems by 2035-2040 and beyond. But the big question remains whether AI can break out of its marginal role as diversion or recreational gimmick, given the dangers and potential setbacks discussed herein.
I can foresee a lot of superficial automation being the chief highlight over the next ten-plus years: the ‘internet of things’, the integration of various household devices via ‘smart’ voice activation, ‘smart’ apps that infuse AI into all our activities to augment our ability to fill out forms, order things, and so on. But beyond these superficial additives, the types of ‘singularity’ take-offs predicted for the next ten years are likelier to take a hundred or more, if they happen at all. Our current corporate-oppressed world is simply too corrupt to allow the types of unlimited bounty promised by our techno-wizards; even if AI were capable of inventing countless new disease-eradicating drugs as promised by Altman and Amodei, those drugs would still be at the behest of the major pharmaceutical giants and their Byzantine profit-predation matrix, which would leech away all eventual benefits once they were fully wrung through its machinery.
The ultimate theme lies in this: corporate greed will continue disincentivizing real progress and turning off the masses from greater uses of an AI which will invariably be chained to and aligned with corporate interests. This will necessarily engender a natural repelling friction between the coming developments and humanity at large that will, like oil and water, act as a barrier to accelerated progress as envisioned by the Silicon-snake-oil-salesmen and their techno-conjuror PR pushers.
Rather than some glossy Utopia of glass sky-rises topped by airy gardens and Perfect Humans™ in turtlenecks, the future will likely resemble more the world of Blade Runner: a bevy of tech-wonders scattered irregularly across an otherwise dysfunctional, favela-ized gray-state ruled by faceless omni-corps. There seems to be a natural law that invariably ensures the disappointment of long-reaching future-tech predictions. Remember the infamous early-twentieth-century postcards featuring fancy cities of the future, streaked with flying cars and all kinds of other wonders? Or how about predictive movies like 2001: A Space Odyssey, or even Blade Runner itself, all of which foretold a future at a date now long past, one that never lived up to expectations? In the same way, I predict the year 2100 may very well look hardly any different from the present, apart from a few AI-imparted superficialities like ubiquitous commercial drones and flying quadcopter cars. But the Utopian divinations from the likes of Altman and co., of AI solving “all human problems” and curing all diseases, will likely look as silly and infantile then as those vintage postcard predictions of the future do now.