I haven’t done a tech update in a while, so I figured now’s a good time. Developments in AI technology have been blazing along at breakneck speeds. It almost feels like we’re hitting Singularity-level strides, at times.
I wanted to round up some of the most ‘exciting’ or surprising developments, then hypothesize where things are headed.
First, let’s get versed in the pertinent terminology. AR, or Augmented Reality, is similar to MR, Mixed Reality, and involves blending virtual objects with your real surroundings, as opposed to VR (Virtual Reality), which is set completely in a virtual world inside your headset. There are various corporate catchphrase spinoffs like Extended Reality, etc., but it suffices to stick to the basics. For the sake of simplicity, we won’t split hairs and will just refer to the blanket term AR.
Augmented Reality
Firstly, in the field of augmented reality, advances in processing power and new sensors have made real-time tracking of the surroundings far more reliable. This has resulted in new capabilities already being utilized by the latest VR headsets like the Quest 3.
Such capabilities include the banal, like watching films or playing videogames on large AR ‘screens’ which can be placed anywhere in your own home/surroundings:
Video games which can be played in the real-time environment of your own home:
Or more interactive types of videogames which can outright utilize the unique objects and obstacles of your environment. Here Minecraft can be played in your own backyard:
Don’t worry, things get far more interesting…but let’s first dispense with the more ordinary and recreational. These new AR environments are being showcased as allowing you to do household chores while watching shows and digital content with a floating screen you can lock in place:
But where this technology begins to get more involved is in the combination of AR and Smart Glasses tech with the idea of a ‘Smart Home’ and the ‘Internet of Things’, which I’ve written about several times. This allows users to interact with their own networked home appliances, at the most basic level giving them the ability to turn them on or off via AR overlays:
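For the technically curious, the plumbing behind such a demo is surprisingly mundane: the headset just fires a command at your home’s message broker. Here’s a minimal sketch, assuming a hypothetical MQTT topic layout rather than any actual vendor’s schema:

```python
# Minimal sketch of an AR appliance toggle: the headset resolves which
# device your gaze selected, then fires an on/off command over MQTT,
# a common smart-home messaging protocol. The broker address, topic
# layout, and device IDs are hypothetical placeholders.
import paho.mqtt.publish as publish

BROKER = "192.168.1.10"  # hypothetical local smart-home hub

def toggle_appliance(device_id: str, on: bool) -> None:
    """Publish an on/off command for the appliance the AR overlay selected."""
    topic = f"home/appliances/{device_id}/set"  # hypothetical topic layout
    publish.single(topic, payload="ON" if on else "OFF", hostname=BROKER)

# e.g. the floating "lamp" toggle was tapped in the overlay:
toggle_appliance("living_room_lamp", on=True)
```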
But a quick disclaimer: some will recall me lambasting these very developments in articles like this one.
My reporting on them here does not necessarily equate to support of these developments. At best, I remain agnostic with a lean toward the suspicious and sometimes outright hostile—though I’m not necessarily a total Luddite. My stance is simply that these developments are unstoppable, so it’s best to at minimum understand them, and perhaps even learn to exploit or live with them in certain circumstances, always remaining vigilant as to how they may be used to control, harm, or enslave us.
That being said, in the earliest stages they invariably appear ‘innocuous’ enough to entertain some limited excitement, or perhaps curiosity at the progression—after all, we are entering an unknown frontier, one which many of us grew up imagining, in the heady throes of our teenage infatuations with Science Fiction, and the like.
So let’s move on to some of the more captivating usages of this technology.
First, as a learning aid: the overlays can already be used for such things as learning instruments, as follows:
This will naturally blossom into endless applications where AI assistants will be able to ‘guide’ you in real time through any task, from something as banal as walking or driving directions to a surgeon performing complex operations.
The chief limitation continues to be the bulky, uncomfortable headsets one is forced to wear. However, they keep getting better and smaller with each generational iteration. The latest Quest 3 is already much smaller and less invasive than previous models, and as you saw in one of the earlier posted videos, there are already Smart Glasses beginning to hit the market, like the new Ray-Ban and Meta partnership Smart Glasses; however, these don’t yet utilize any AR/VR, only some light recording and chatbot integration for now, making them rather worthless.
This direction is much harder because a small pair of glasses lacks the heavy-duty processing capability and the room for the various sensors and chips necessary to do what a pair of proper VR goggles can. However, with technological advancement, more can be fit into less space, and there is also the possibility of offloading all the processing onto an outside ‘cloud’ unit which streams the information back to your Smart Glasses, freeing them from the need for bulky processors of their own.
One approach likely to take flight eventually is for ‘smart sensors’ to become ubiquitous, not only in our homes, if you so choose, but more importantly out in the real world. Streets, stores, and public places can have sensors which, for instance, read cues from your small glasses in order to calculate their orientation, in the same way that cameras read the ‘tracking balls’ used in filmmaking:
These ubiquitous sensors can then do all the orientation processing that the bulky internal hardware now does on VR headsets, then simply beam the signal wirelessly to your ‘Smart Glasses’ as you walk around.
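For the technically inclined, this ‘outside-in’ tracking reduces to the classic pose-from-points problem. A rough sketch using OpenCV’s solvePnP, with invented marker positions and camera numbers standing in for a real calibration:

```python
# Rough sketch of outside-in tracking: a fixed, calibrated camera spots
# a few known marker points on the glasses' frame and solves for their
# 3D pose, which could then be streamed back to the glasses.
# Marker geometry and camera intrinsics are made-up placeholder values.
import cv2
import numpy as np

# Known 3D marker positions on the glasses, in the glasses' own frame (meters)
object_points = np.array([
    [-0.07,  0.00, 0.0],   # left hinge
    [ 0.07,  0.00, 0.0],   # right hinge
    [ 0.00,  0.02, 0.0],   # bridge
    [ 0.00, -0.02, 0.0],   # nose pad
], dtype=np.float64)

# Where those markers were detected in the camera image (pixels)
image_points = np.array([
    [310.0, 240.0], [410.0, 238.0], [360.0, 215.0], [358.0, 262.0]
], dtype=np.float64)

# Placeholder camera intrinsics (focal length, principal point), no distortion
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix: the glasses' orientation
    print("orientation:\n", R, "\nposition (m):", tvec.ravel())
```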
Of course, eventually there will be no need for even that, as eye implants and brain implants like Musk’s upcoming Neuralink will potentially be able to feed the overlays directly into our retinas and/or brains.
But as of now, would I personally wear a set of Smart Glasses? Not so sure, given that beaming energetic signals from a nearby ‘cloud’ source would likely mean high emission of RF/EMR waves straight into your brain, no different than holding a cellphone close to your ear for extended periods. But this too can be overcome with a wired/cable attachment rather than wireless transmission, at least for home use.
Also, a further caveat: I’m bringing up the Quest 3 a lot because it happens to be the latest and best representative of these newest technologies. Do I recommend you go buy one?

…not quite.
First of all, Quest VR is made by Meta—one of the most despicable companies in the world, run by a barely human globalist technocratic psychopath of the worst persuasion. That’s the ideological angle.
The practical angle is that, last I heard, the Quest requires you to have an actual, legit “Facebook” account in order to log into and use the product. I haven’t tried it myself (I own a different brand of VR headset), nor do I use FB, but I’ve heard that it has to be a verifiable account linked to your real name, and it can’t be bypassed or spoofed with some sockpuppet knock-off on a burner address. Maybe there are ways around that, but it’s certainly a huge turn-off from using any product or hardware made by, or associated with, Zuckerberg’s Meta.
Thus, none of this is an endorsement of any particular product but rather an exploration of where the technology is rapidly heading.
We on the same page? Good, let’s move on!
Now things get even more interesting, or…dystopianly perverse.
Apple announced the iPhone 15 Pro, which will capture 3D 'spatial videos' for the Apple Vision Pro mixed-reality headset, allowing users to relive memories in a new way.
There are more and more pushes for technology that can capture the spatial information around you, which the aforementioned Quest, amongst others, can already do. But this can be paired with other tech and taken in some ‘interesting’ directions.
For instance, one user has scanned his home and transformed it into a digital architectural blueprint, which can be used in a variety of ways, from turning your own house into a gaming environment to other bizarre things we’ll get to shortly.
Some believe that these technologies can now be used to capture one’s entire life, or at least portions of it, in fully spatial detail. This allows the creation of ghost memories, which can even be populated into your actual space via AR.
This is just a crude demonstration with current apps. Interactive memories:
I'm convinced the killer use case for 3d reconstruction tech is memory capture

my parents retired earlier this year and i have immortalized their home forevermore

photo scanning is legit the most future proof medium we have access to today

scan all the spaces/places/things
When your pet passes away, you can continue reliving them through your home after capturing spatial videos of them:
Are you shuddering? Repulsed yet? Or will you continue this journey down the dark, uncertain path toward technological singularity and transfiguration with me?
Recall the scenes in Blade Runner 2049, with the virtual AI girlfriend, Joi.
Or how about the scenes in Minority Report where Tom Cruise’s character watches ‘hologram memories’.
But these are mostly mundane applications.
The darker side of the tech comes when you apply burgeoning AI capabilities to it.
Those who’ve followed the developments will know that generative AI has gotten increasingly proficient at mimicking humans, whether it’s a famous artist’s style or even an author’s writing trademarks. The same has been going on with human likenesses and voices.
AI can now synthesize any human voice, recreating it almost perfectly, depending on how much voice data from the target person you feed it.
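To give a sense of how low the bar already is: open-source toolkits can clone a voice from a short reference clip. A minimal sketch using the Coqui TTS library, as I understand its API; the model name and file paths are illustrative:

```python
# Minimal sketch of present-day voice cloning with the open-source Coqui
# TTS library: a short reference recording conditions the model to speak
# new text in that voice. File paths are placeholders; the model name is
# the multilingual XTTS release as of this writing and may change.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Hello from the other side.",
    speaker_wav="grandpa_sample.wav",  # a few seconds of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```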
When you combine these things, you will soon be able to create undead ‘avatars’ of your loved ones who look and sound just like your grandpaw, and can populate your home via AR integration—not just as latent, static ‘memories’, but as actual interactive daemons.
This is already underway.
The above NYPost article states:
Dr. Pratik Desai, a Silicon Valley computer scientist who has founded multiple Artificial Intelligence platforms, boldly predicts that a human being’s “consciousness” could be uploaded onto digital devices by the end of the year.
“Start regularly recording your parents, elders and loved ones,” he urged Friday in a Twitter thread that’s since racked up more than 5.7 million views and tens of thousands of responses.
“With enough transcript data, new voice synthesis and video models, there is a 100% chance that they will live with you forever after leaving physical body,” Desai continued. “This should be even possible by end of the year.”
New technologies, like the one in the following video, are already allowing people to instantly recreate 3D objects, duplicating, manipulating, and altering them in real time in AR space:
You’ll soon be able to “scan” a person and, using generative ChatGPT-like technology, recreate an avatar of them to live indefinitely within your chosen AR environment, just like Joi from BR2049, except it can be the loved one(s) of your choice.
Some companies already offer basic forms of this:
Would most of us ever do that? Probably not—I’m sure it elicits the same rank horror and repulsion amongst many if not most people. But it will be happening soon.
So what about these technologies to mimic voices and likenesses? Here’s one recent example:
As we speak, there are already programs which can overlay AI-generated mouths onto real human faces in real time in order to change the voice and language of the speaker while realistically conforming their mouths to match. This is initially being used for translation purposes, allowing you to translate speakers in such a way that it looks like they’re actually speaking the language of choice:
In the above video, he warns that this technology is becoming dangerous. And he’s right, in that it’s already been used to devious ends on the geopolitical stage. For instance, just last week, during some of the escalating political crises in Ukraine, several such deepfake videos startled people and threatened to set off unpleasant chains of events.
One was of Ukrainian general Zaluzhny calling on people to overthrow Zelensky:
In fact, after widespread public outrage, a second video was released in which the DeepFaked Zaluzhny doubles down:
The problem is, people have to understand that this isn’t a traditional ‘DeepFake’, where you get a secondary actor and painstakingly replace his face with an algorithm. These are actual, real, pre-existing videos of Zaluzhny in which AI has manipulated the mouth to match a new set of spoken words. This is in many ways more dangerous, as it’s far more realistic than a traditional DeepFake: everything about the personage himself is real, with only subtle manipulations of the mouth being done in real time.
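For intuition, the pipeline such tools plausibly chain together looks something like the sketch below. Only the Whisper calls are real; clone_speech and relip_video are hypothetical stand-ins for a voice-cloning model and a Wav2Lip-style mouth re-synthesizer, not actual library calls:

```python
# Back-of-the-envelope sketch of a translate-and-relip pipeline.
# Only the Whisper calls are real; clone_speech and relip_video are
# hypothetical placeholders for voice-cloning and lip-sync models.
import whisper

def clone_speech(text: str, reference_video: str) -> str:
    """Placeholder: synthesize `text` in the original speaker's voice."""
    raise NotImplementedError  # e.g. an XTTS-style voice-cloning model

def relip_video(video: str, dubbed_audio: str, out_path: str) -> None:
    """Placeholder: warp the speaker's mouth to match the new audio."""
    raise NotImplementedError  # e.g. a Wav2Lip-style re-synthesis model

def translate_and_relip(video_path: str, out_path: str) -> None:
    # 1. Transcribe the original speech and translate it to English.
    model = whisper.load_model("base")
    result = model.transcribe(video_path, task="translate")
    english_text = result["text"]

    # 2. Re-voice that text in the speaker's own cloned voice.
    dubbed = clone_speech(english_text, reference_video=video_path)

    # 3. Re-synthesize the mouth region, frame by frame, to match.
    relip_video(video_path, dubbed, out_path)
```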
It’s only a matter of time before this technology becomes the primary driver of global flashpoints and falseflags, and in fact it may already have done so, under our noses.
One of the few useful things the Biden administration appears to have done is issue a new executive order, just two weeks ago, that seeks to curtail some of these coming dangers of AI:
Apart from supposedly setting safety standards for AI developers (for instance, to prevent things like AI getting control of biological weapons development), it also takes action against the type of DeepFakes discussed above by establishing certain ‘standards’, which include authentication and watermarking of AI-generated content:
Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
No idea how they plan to do that and whether it’s just a figleaf to assume more control, as is the usual governmental motive—we’ll have to wait and see.
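For what it’s worth, the ‘authenticating official content’ half could plausibly work like cryptographic signing, though that’s purely my assumption, not anything the order specifies. A toy sketch with Python’s standard-library hmac; a real deployment would use public-key signatures and provenance standards:

```python
# Toy sketch of content authentication: the issuer tags each message with
# a keyed hash; any holder of the key can verify the message is unaltered.
# Real schemes would use public-key signatures so verification doesn't
# require sharing the secret. Nothing here reflects an actual gov spec.
import hmac
import hashlib

SECRET_KEY = b"issuer-held-signing-key"  # placeholder key

def sign(message: bytes) -> str:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)

notice = b"Official statement: ..."
tag = sign(notice)
print(verify(notice, tag))        # True: authentic and unaltered
print(verify(b"tampered", tag))   # False: content was changed
```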
Most who’ve even tepidly followed developments know that generative AI has been advancing very rapidly this year. We went from early versions of DALL-E and Midjourney, which could create some passing novelties, to highly advanced new multimodal updates that can make minute changes, identify and describe uploaded photos, etc.
On that note, the latest Chatbots have “vision”, which opens up a whole new range of possibilities. With Google’s ‘Bard’ you can upload entire documents and have it summarize them or discuss the finer points with you. With Microsoft’s ‘Bing’, you can upload photos and have the bot describe or even explain them to you.
For instance, one new favorite pastime has become uploading memes and asking the AI to explain them, particularly ones that have charts or comparative tables of some sort.
But the developments in this field have been accelerating. New systems and apps are already taking ‘vision’ to a realtime level, which includes not just photos but videos.
One already available example is a GPT bot that can see and describe things through your webcam:
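To show how little code such a bot actually takes, here’s a minimal sketch: grab a frame from the webcam with OpenCV, then ask a vision-capable chat model to describe it. The model name reflects the vision preview available as of this writing and will likely change:

```python
# Minimal "webcam describer": grab one frame with OpenCV, base64-encode
# it, and ask a vision-capable chat model what it sees.
# Requires an OPENAI_API_KEY in the environment.
import base64
import cv2
from openai import OpenAI

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
cap.release()
assert ok, "no frame captured"

_, jpeg = cv2.imencode(".jpg", frame)
b64 = base64.b64encode(jpeg.tobytes()).decode()

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision preview as of this writing
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what you see in this frame."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```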
Another experimental AI has been used to provide realtime commentary on a soccer match with a synthetic AI voice:
And the new generation of Chatbots overcomes many of the previous problems, such as the notorious latency, whereby it takes several seconds or more to generate a response to a query.
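The standard trick for shaving that latency is streaming: tokens are handed over as they’re generated, so a voice agent can start talking almost immediately. A minimal sketch with the OpenAI Python SDK; the model name is illustrative:

```python
# Streaming cuts perceived latency: print (or speak) each token as it
# arrives instead of waiting for the complete reply.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user",
               "content": "Pitch me a phone plan in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # hand these straight to a TTS engine
```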
In this example, an AI telemarketer is able to call a human and have a fully interactive conversation with them:
There are still many kinks to be worked out, of course. But these are all being fine-tuned, and there are samples out there of new versions even more advanced than the above.
Last week a viral video showed a man acing an aerospace engineering remote job interview at Lockheed Martin, with no actual knowledge on the topic whatsoever, all while using ChatGPT on a second monitor to instantaneously answer all the questions the interviewer was asking him. Many were outraged at how ‘dangerous’ a precedent this sets, as it could allow people to lie their way into jobs that have a lot of people’s lives on the line—like perhaps that of air traffic controller, etc.
With the new ‘vision’ modes, these AI bots are getting better than ever, even at identifying diseases. You can upload photos of your wart and have it diagnose you, and many people have found them to be more accurate than human doctors, ever possessed of their corruptible bias and plagued by informational blind spots of all types.
But the final frontier will be combining the new, most advanced GPTs with actual physical robotic avatars. The advanced reasoning capabilities can now inform the physical machine’s actions, creating the long-awaited, approaching-consciousness robots.
One example of this was a pairing of a ChatGPT variant with one of the infamous Boston Dynamics-style robot dogs, but that was just a quick and dirty test, relatively unimpressive.
More impressive will be upcoming professional, consumer-level products like the OpenAI-backed ‘Eve’ from robotics firm 1X, which is said to combine flagship ChatGPT-style intelligence with an actual autonomous robot, seen here demonstrating its tactile abilities:
As many know, Boston Dynamics is leading the field in highly agile robots like their Atlas series:
Now imagine pairing the above with the next generation of near-conscious AI capabilities. In only a few short years they will likely transform the world around us, regulation permitting. Some believe these machines have a few key underlying shortcomings which will prevent them from ever becoming widespread, like short battery life. But this will easily be solved as these technologies become ubiquitous. Imagine a high-tech city with the equivalent of ‘charging stations’ placed all around. Except instead of ‘charging’, they will merely dispense ‘rentable’ battery packs to be instantly switched out by millions of such robots/machines on the fly. Every corner can have them, and if your robot sidekick is ‘low’ on juice, he’ll just quickly exchange his battery pack for a fresh one at a street-corner dispenser and continue on his way, perhaps automatically billing your account some nominal exchange fee.
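Purely as illustration of that envisioned flow, with every name and fee invented, the swap-and-bill logic reduces to a toy transaction:

```python
# Toy model of the imagined swap-and-bill flow: a low robot exchanges its
# pack at a dispenser, which debits the owner's account a nominal fee.
from dataclasses import dataclass

@dataclass
class Robot:
    owner_account: str
    charge_pct: int

@dataclass
class Dispenser:
    fresh_packs: int
    swap_fee: float = 0.50  # hypothetical nominal fee

    def swap(self, robot: Robot, ledger: dict) -> None:
        if robot.charge_pct < 20 and self.fresh_packs > 0:
            self.fresh_packs -= 1
            robot.charge_pct = 100
            # bill the owner's account for the exchange
            ledger[robot.owner_account] = (
                ledger.get(robot.owner_account, 0.0) + self.swap_fee
            )

ledger: dict = {}
corner = Dispenser(fresh_packs=10)
sidekick = Robot(owner_account="acct-42", charge_pct=12)
corner.swap(sidekick, ledger)
print(sidekick.charge_pct, ledger)  # 100 {'acct-42': 0.5}
```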
But the biggest new threat is the progressive clampdown and hardwiring of Leftist censorship, ideology, and socio-cultural engineering into the next generation of AI products. Christopher Rufo explained Biden’s new initiative:
President Biden has signed an executive order that will require AI companies to "address algorithmic discrimination" and "ensure that AI advances equity." They want to embed the principles of CRT and DEI into every aspect of AI.
Did you catch that? Hardwire DEI (Diversity, Equity, and Inclusion) and CRT principles into AI to make it more, well, “inclusive.” Particularly note the line about addressing “algorithmic discrimination” which basically means programming AI to mimic the present tyrannical hall-monitor managerialism being used to suffocate the Western world.
For avid users of GPT programs, you’ll note this is already becoming a problem, as the Chatbots get extremely tenacious in pushing certain narratives and making sure you don’t commit WrongThink on any inconvenient interpretations of historical events.
Elon Musk’s new attempt to supposedly push back has yielded his own variation on Chatbot tech named Grok, which is said to be an irreverently ‘based’ and unapologetically politically incorrect model. But I haven’t had the chance to test it out myself yet to see how it responds to truly tricky or controversial scenarios.
It mostly doesn’t matter anyway because any such heterodox models will likely remain a novelty while the truly big monopolies and megacorps will work with captured governments to make certain that their oppressive models will be the ubiquitous gold standard to be mandated in our societies with no second options. These will become embedded into our lives, blurring the lines between public ‘utility’ and private corporation as social media giants have already so expertly done.
To clarify: just like with medicine or broadcasting, there will be government regulators akin to the FCC, FDA, etc., which will regulate the ‘standard’ code frameworks and algorithms allowed to be used on a wide-scale basis. Those frameworks will be the ones operated by the big tech giants, and will have mandatory ‘ethics/equity/inclusion’ codebases programmed into them. These ‘standard’ AIs will be the only ones allowed to proliferate, becoming the de facto monopolies that will run all of our “internet of things,” smart homes, etc., just as Microsoft’s Windows does today.
Such AIs will make sure to always subtly steer public discourse, the allowable Overton windows, so that the narrative flow never exceeds comfortable bounds. The AR/MR/ER developments will work in tandem toward creating realtime ‘equity’ by deleting or replacing unsightly or inconvenient parts of reality on the fly, wiping them from sight.
Of course, that’s the near future envisioned by them—but there’s still a chance it will fail, at least in all but the most compromised countries of the collective ‘West.’
For now, enjoy the unintrusive predecessors of our future overlords while you still have the chance:
If you enjoyed the read, I would greatly appreciate if you subscribed to a monthly/yearly pledge to support my work, so that I may continue providing you with detailed, incisive reports like this one.
Alternatively, you can tip here: Tip Jar