Artificial Stupidity

Yet Another Rant About the State of Modern Technology

“The future is already here — it’s just not very evenly distributed.”

—William Gibson

On the recommendation of my father I have recently been reading Apple in China. Speaking as the sole non-iPhone owner in my family, and as someone who has not yet finished the book, I have so far found it an interesting and thought-provoking read. Although the book is less about the products themselves than about how they came to be manufactured, it struck me how different Apple's line-up and the atmosphere around it were in the early days compared to now. The original iMac was a wonder, with a housing that required major advances in injection moulding techniques to manufacture at all, let alone mass produce. When the original iPod came onto the market, there was nothing else quite like it.

If that spirit of innovation is still present in Cupertino, its fervour has seemingly been dimmed by the passage of time. These days, the new iPhone is unlikely to differ much from its predecessor except for minor upgrades to hardware specs, the camera arrangement du jour, and the presence or absence of a bezel. Changes tend to be neutral or even negative, as in the case of the loss of the headphone jack and the uproar that followed. Furthermore, this stagnation is not limited to Apple, as I recently found out to my detriment. I was issued a university-owned HP laptop earlier this year to use for my degree and immediately set about wrangling Windows into submission. While I have successfully subdued most of the non-features that would otherwise have impeded me, one immovable object continues to vex me: the Copilot button.

It may not be apparent to the casual observer why this was such a problem: one might argue that if you find AI so objectionable you are well within your rights to refuse to use it, but that doesn't make you entitled to deny the benefits to others. Philosophically speaking, I agree with this – in theory. My problem was not with what I had been given, but with what had been taken away; the Copilot button sits in the spot usually allocated to the right Ctrl key. I make heavy use of keyboard shortcuts and in particular often use the right Ctrl key to navigate browser tabs one-handed. I understood key remapping in principle and reasoned that I could simply remap the AI button back into a Ctrl key. None of the tools and settings that Windows has available by default were able to do this. Undeterred, I installed PowerToys. All I managed to do with that was disable the key entirely. Next I tried AutoHotkey. This enabled me to temporarily brick my machine (fixable by turning it off and then on again) but achieved nothing useful. In desperation I consulted a Discord server of close confidantes and was told to use a RegEdit hack which required admin privileges I did not have. Out of patience and out of options, I had no choice but to withdraw in disgrace and admit defeat. I left the key disabled.
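For anyone attempting the same fight, the sensible first step of any remap is finding out what the key actually emits; the Copilot button is reportedly delivered as a chord of several keys rather than as a single keystroke, which may be part of why the usual remapping tools struggle with it. The sketch below is hypothetical and is not one of the tools mentioned above: it uses the third-party Python keyboard package purely as a diagnostic, printing every key event so you can see the names and scan codes involved.

    # Hypothetical diagnostic sketch, not a fix: print every keyboard event
    # so you can see what the Copilot button actually sends before trying a
    # remap. Uses the third-party `keyboard` package (pip install keyboard).
    import keyboard

    def show(event):
        # Each event carries a human-readable name and a hardware scan code.
        print(event.event_type, event.name, event.scan_code)

    keyboard.hook(show)   # watch every key press and release
    keyboard.wait("esc")  # keep the hook alive until Esc is pressed

Whether anything useful can be done with that information without admin privileges is, as established above, another question entirely.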

The crux of the problem, it would initially appear, is AI. Big Tech insists on putting it in everything, various cohorts of the internet shriek about how it's going to doom us all (often in orthogonal or mutually exclusive ways¹), and unfortunately it seems to have a death grip on the tech-related news cycle. Depending on who you ask, it's a cure-all, an amoral genie, or the fifth horseman of the apocalypse. My personal opinion is that it is normal technology, but normal technology is perfectly capable of creating societal upheaval. After all, the Luddites started off as artisans who wanted better labour laws and the opportunity to continue making a living. The property damage came later.

Categorising AI as a whole as good or bad is an exercise in futility because its definition is both vague and constantly shifting. Computers doing complex tasks is only AI until it's possible; then it's applied statistics and dynamic programming. For that reason, if we wish to reach any sort of cogent conclusion we must narrow our scope. The specific flavours of AI that dominate the current discourse mainly fall into the category of generative AI; this includes LLMs such as ChatGPT and its ilk, along with image generators such as Midjourney. I have tried using these in the past, and each time the result was poor quality, something I had already thought of, or otherwise unfit for purpose. Time spent prompt engineering and checking for hallucinations could just as easily be spent doing the task myself.

There are multiple reasons why I have been engaging in artistic pursuits more deliberately in recent times, but one is this: every time I have needed a picture of something specific at short notice, failed to find an existing example, and turned to an image generator, the results were simply not that good, even after multiple attempts at prompt engineering. Before image generators existed, I would simply take the closest pre-existing image the internet had to offer and be done with it, but the experience of having access to something that promised to give me exactly what I wanted on demand and failed to deliver on that promise was frustrating enough to galvanise me into action. It reinforced the notion in my mind that as long as an idea is trapped inside my head, only I can give form to it. No one else is going to do it for me because nobody else can. Admittedly, image generators have improved since my last foray into AI art, but so have I.

When it comes to matters such as these, I do believe that it is reasonable to ask, 'where's the omelette?' If a proposed course of action is objectionable but works, one can argue against it on the basis of its other negative qualities. If it does not work, no further argument is necessary. As it is, there's shell all over the kitchen bench, raw yolk on the ceiling, and the omelette isn't even that tasty. On the basis of quality of output, as well as other considerations such as energy use, ushering in a post-truth zeitgeist, and making it much harder to find good anything on the internet², I am content calling generalist LLMs and image generators bad in terms of their net impact on society at large.

One of the great tragedies of this state of affairs is that AI is being used to great effect in everything from predicting protein structures to detecting neutron star mergers, but these AIs don't loom anywhere near as large in the collective consciousness of the general public. The end result is that the definition of what counts as AI has been warped and diluted to the point of uselessness. Anecdotally, specialist AIs seem more likely to be useful than generalist ones, but arguably LLM chatbots are also specialists – their specialisation just so happens to be generating specious nonsense.

AI may be the biggest buzzword of the moment, but it is not inherently the be-all and end-all of modern invention. So what else is there to get the average tech nerd excited? Blockchain was supposed to revolutionise currency and ownership, but this clearly didn't happen. These days it's much more associated with swindlers and procedurally generated³ JPEGs than with innovation. Quantum computing looks much more promising, but only by virtue of a lack of competition. Assuming it ever reaches commercial viability, which is by no means a foregone conclusion, an ordinary person is still unlikely to have any practical use for one. Of course, sellers of everything from healing crystals to powdered dish soap will insist on calling their products 'quantum' regardless of relevance or lack thereof.

In that light, the mad push by large corporations like Microsoft and Apple to force their customers to use AI is not terribly surprising. A perceived synonymy between AI and 'new tech' is being artificially enforced, partly because there is genuinely little else in the pipeline and partly because of business requirements: investors demand a constant flow of new products, first to be invented and then to become popular. A few customers are leery, on the grounds that any product you have to be forced to use can't possibly be any good, but anecdotally I've found the general public to be incredibly blasé about the increasing ubiquity of LLMs and image generators. Most ordinary people hardly seem to notice, much less care, that their data is being harvested and rivers are being drained in the service of flooding Facebook with AI-generated images of impoverished terminally ill children, Into the Jesus-verse, and busty women in military gear. To be clear, right now is one of the better times to be alive when taken in the context of the whole of human history, but that does not ease the pain of being constantly confronted with profoundly stupid problems.

Even products that give the impression of novelty mainly take the form of previously untried combinations of established concepts, or second attempts at high concepts that failed to take off the first time around. A popular subgenre of quasi-inventions of this kind is wearables: take an otherwise unremarkable gadget, turn it into an accessory, staple on some AI, and there you go. To take an example, smart glasses have existed conceptually for decades, but only recently were they turned into a credible product. While most wearables erode the privacy of the owner, quietly extracting telemetry and sending it off to who knows where,⁴ Meta's Ray-Ban smart glasses are unusual in how directly they threaten the privacy of those around the wearer. While they don't come with the necessary software pre-installed, rigging a pair up to identify strangers in real time is trivial.⁵ These sorts of problems were foreseen with Google Glass over a decade earlier, but only now are they coming to full fruition.

We now live in a world where the expectation of any sort of privacy at all is no longer compatible with existing in public. No matter how studiously you avoid leaving any trace of your life online yourself, it will never be practical to completely prevent others from doing so. Even as innovation in consumer tech has stagnated in favour of an ill-conceived rush to inject AI into otherwise perfectly functional products, laws and social norms have failed to catch up.

As nice as it would be to tie this off with a neat little bow, a final overarching thesis about what the world is really like and what concerned citizens should do next, I don't think there is one. I haven't given up on the ideal of progress, the idea that the circumstances under which we suffer the human condition should gradually improve over time, but at the same time it would be naive to ignore the fact that so many of the 'improvements' being sold to us in our everyday lives are being foisted on us for ulterior motives. It is incredibly rare these days that I see a new product or feature that I want to spend money on. Indeed, I am so disillusioned with smart devices that I am willing to pay extra for a dumb one. All too often the touted new 'feature' is a half-baked AI that serves only as an enabler for the greed, malice and stupidity of human beings. I fear those far more than any hypothetical singularity. The world has changed, in many ways even for the better. Humanity has not.

¹ I had to permanently block LessWrong on my phone with a productivity app for the sake of my own wellbeing because the constant AI death cult nonsense pissed me off that badly.

² If my request is something very niche or very mainstream (how do I dismantle a Kenwood KX-620⁶; how often should I wash my drink bottle⁷) search engines still work. If my request is something in between (are celery leaves edible⁸) I often find I have to exclude results after 2020 or else sift through countless strange, near-identical sites telling me things I already know in the most vague yet verbose ways possible. I don't know for certain that all of these weird new sites are using LLMs to generate their copy, and I don't believe in prose-style-based LLM forensics, but it is suggestive that this only became a problem after LLMs went mainstream and that automated tools for building similar sites using generative AI are not difficult to find.

³ Procedural generation and generative AI are not the same thing. While the dividing line is fuzzy, the main distinction is that procedural generation does not have any notion of training or, by extension, of training data.
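To make the distinction concrete, here is a toy procedural generator (a hypothetical example of mine, in Python): it assembles output from hand-written parts and a seeded random number generator. There is no model and no training data involved, and the same seed always yields the same result.

    # Toy procedural generation: hand-authored parts plus seeded randomness.
    # No model, no training data; identical seeds give identical outputs.
    import random

    ADJECTIVES = ["bored", "pixelated", "melancholy", "laser-eyed"]
    ANIMALS = ["ape", "penguin", "capybara", "ferret"]
    ACCESSORIES = ["top hat", "beanie", "monocle", "traffic cone"]

    def generate(seed: int) -> str:
        rng = random.Random(seed)  # deterministic for a given seed
        return f"{rng.choice(ADJECTIVES)} {rng.choice(ANIMALS)} wearing a {rng.choice(ACCESSORIES)}"

    for seed in range(3):
        print(seed, generate(seed))

A generative model, by contrast, produces its output by sampling from patterns learned from training data, which is precisely the part people object to.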

⁴ Remember, the cloud is other people's computers.

⁵ For an extensive rundown on this, see 404 Media's excellent article, though it is paywalled if you weren't subscribed at the time of publication.

⁶ With great difficulty.

⁷ Every day.

⁸ Yes.
