Society · Subversion · Technology

AI is not an Equalizer

Ever since ChatGPT was first released, there has been incessant chatter about human labor becoming increasingly replaceable. We have read that even high-skilled professions like software development will be eviscerated by an AI, presumably one that ultimately writes itself. All it took was some business guy asking ChatGPT how to write a loop that counts up from one to ten to blow the collective mind of normie-dom. Even worse, plenty of Western politicians want to “modernize” school curricula in order to adapt them to the post-AI world. I sense much more sinister motives at work here.
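The loop in question is about as trivial as programming exercises get; a minimal version, sketched here in Python for illustration:

```python
# The kind of one-liner task that supposedly blew minds: count from one to ten.
numbers = list(range(1, 11))  # range(1, 11) produces 1 through 10 inclusive
for n in numbers:
    print(n)
```

Any first-week programming student can write this, which is precisely the point.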

Humans are a tool-making species. You can go back to the dawn of recorded history to find one example after another of how we managed to use our time and energy more effectively. Granted, in some geographic regions, such evidence may be a bit harder to come by than in others, but if you look at the most successful societies in human history, it is undeniable that there has been one jump in productivity after another. There was a time when we did not have controllable fire, rakes, oxen in farming, saddles, wheels, boats, steel, etc. Each new technology led to the displacement of labor. In more recent history, computers obviously led to yet another jump in productivity. We used to have human “computers”, a secretarial job that was commonly held by women and whose disappearance is construed as evidence that tech bros are misogynists. Instead, digital computers only cut out the middle man or, more precisely, middle woman. A few decades later, pocket calculators were widely available, and not long after that, middle-class families could afford their own personal computer, which happened no later than the 1980s.

Even when I was a kid, I remember clueless teachers talking about how skills in arithmetic would be redundant by the time we were adults now that we all had pocket calculators. This was a claim the weaker students readily embraced. Yet, even when there is a calculator available, you can obviously wield it much more competently if you have well-developed quantitative skills. A few years later, when personal computers were truly ubiquitous, it was supposedly irrelevant to learn how to spell correctly because computers could do this task better than humans anyway. If you have ever received an email from a boomer at work who managed to write a few sentences via the hunt-and-peck typing method, you have seen pretty good evidence that a spell-checker has its limits and that some people do not even bother using one.

The next big productivity boost was apparently the introduction of effective Internet search, in conjunction with the explosion of readily available information online. I recall university lecturers claiming that schools and universities needed to adjust and teach different skills because anybody could just find anything online and even copy and paste texts easily. Obviously, you still needed to be able to assess whatever Google spat out, which was true even before this company tried telling us that all European kings and European inventors used to be black, and Beethoven, too. I have met quite a few women who thought they were smart when they sent me a paragraph or two of some text they had found online in order to repudiate a point I had made earlier. These women were so stupid that they did not even understand what they had just sent me, nor were they able to grasp the content. However, if this approach did not work on at least some men, they would not have kept doing it. Obviously, communication via social media today is much more meme-based, but the underlying issue remains, i.e. a lot of people think that a “meme” constitutes an irrefutable argument.

Once ChatGPT became available, some people were rejoicing as they were hoping that they could now outsource their thinking to a machine. I recall coming across claims that it was no longer possible to teach at high school or college because students could just let ChatGPT write their papers for them, inevitably invalidating a college education. Granted, if you want students to merely regurgitate information, then ChatGPT really helps them save a lot of time. However, any student taking such shortcuts will not develop key skills related to information retrieval or the assessment of the reliability of sources, let alone demonstrate that he can reason about anything he reads. This is of course very welcome news for the average teacher or lecturer because these people tend to become rather uncomfortable whenever they encounter a student who is able to think on his feet.

I use AI assistants occasionally myself, but I am a lot less enthusiastic about them than the typical pundit. In fact, I am quite tired of the mainstream narrative. I do not let AI compose anything I write. In fact, it is normally quite obvious when a text was written by AI. Bizarrely enough, I frequently encounter people in a professional context who just copy and paste some AI-generated text, sometimes without even having bothered to read it beforehand. There were even news stories about lawyers who used generative AI to write their letters, sent them out, and got fined. As it turns out, referencing non-existent laws and court judgments is not a good idea.

My primary use case for AI is centered around speeding up some basic research tasks, and even this has limits. My preferred AI is Grok, which is pretty capable of collating material that can easily be found online. It does best with data that is already easily accessible, just scattered around. Its DeepSearch mode is quite good, listing its sources and outlining its “reasoning”. If you ask it to compile a list of the 20 games with the highest time to completion according to HLTB, it does this well. Similarly, you can extract data from other easily accessible websites. In such cases, Grok simply saves you a lot of time, and may even give you quantitative insights that would otherwise be infeasible to obtain because gathering the data would be too time-consuming, at least if you cannot directly query a database. Grok can also help you unearth niche information. For instance, I recently asked it for a summary of the drifting mechanics in Daytona USA, and it dug up some decades-old information from GameFAQs. You can of course spend an hour or two checking various sites, forums, and Reddit yourself, but Grok does this in a few minutes and probably gets at least comparable results. However, Grok is apparently unable to understand connections and, worse, exclusion criteria. When I asked it to write down a strategy for dealing with a particular problem, it sometimes listed utter nonsense. Worse, it makes up stuff, but if you afterwards ask it to tell you which information was made up, you at least get a non-evasive reply.
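Given that Grok sometimes fabricates entries, it pays to run a few mechanical checks on any list it compiles before relying on it. Below is a minimal sketch in Python; the helper name and the placeholder data are my own inventions, not real HLTB output:

```python
def check_ranking(rows):
    """Flag obvious problems in an AI-compiled ranking of (title, hours) pairs,
    which is expected to be sorted by hours in descending order."""
    problems = []
    titles = [title for title, _ in rows]
    if len(set(titles)) != len(titles):
        problems.append("duplicate titles")
    hours = [h for _, h in rows]
    if hours != sorted(hours, reverse=True):
        problems.append("not sorted in descending order")
    if any(h <= 0 for h in hours):
        problems.append("implausible (non-positive) hour count")
    return problems

# Placeholder data for illustration only:
sample = [("Game A", 210.0), ("Game B", 120.5), ("Game C", 98.0)]
print(check_ranking(sample))  # an empty list means no obvious problems
```

Such checks obviously do not verify the numbers themselves; for that, spot-checking a handful of entries against the source site remains necessary.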

A concrete example where Grok saved me a bit of time was when I asked it to summarize the research on the effect of methylphenidate (MPH) on driving a car and to contrast this with “Internet sentiment”. I got a decent summary and could easily check the sources used. The answer I got was still not satisfactory, but that is perhaps a topic for another article on the MPH lifestyle. If you want Grok to actually reason and propose hypotheses, you do not get much more than IQ-100 takes that tell you what is good or bad about something. You get “balanced” takes, similar to how the mainstream media is balanced in its reporting. For example, based on a prompt asking it to list the pros and cons of socialism as a form of government, Grok informed me that it was working on a “fair and balanced” assessment.

The result was about as “fair and balanced”, and shallow, as you can imagine. Calling it myopic would be an understatement. I am missing the part where money just falls from the sky to pay for this paradise. In Grok’s multi-page summary, there was also no reference to real-world socialism and its many failures.

This leads to why politicians want to use AI in school. Obviously, if you train an AI to spout the party line and students are so poorly trained that they cannot question anything, they are forced to accept as truth whatever is put in front of them. In the vision of our hostile elite, generative AI is supposed to play the role mainstream TV did for boomers. This is not at all far-fetched. Just look at how poorly educated today’s youth in the West already is. Adding two double-digit numbers may pose a challenge to the median high-school graduate, provided they can even read. You only need to tell them that they are educated because they went to college, and point them towards an AI assistant. These people will confidently state whatever the AI tells them to believe.

In the end, reality will prevail. You cannot create a functioning society from a mountain of morons. An agrarian society may be able to function this way, but I even doubt that, as it is a sign of utter arrogance to believe that our forebears were uneducated and stupid. For a reality check, look up letters written by soldiers in the First World War. The level of literacy on display is far above today’s standards. It is also relevant that the guys fighting in the trenches did not hail from the upper classes. These were commoners. Yet, they were able to compose structured, readable letters. They put to shame the typical graduate from supposedly good universities you interact with in an office environment. Needless to say, the median high school graduate would look even worse in this context.

The leftist vision is to deindustrialize society. It is not quite clear how they imagine they are going to keep their high living standards for themselves, but presumably they think that they will cross that bridge when they get there. Replacing an actual education, which was bad enough already before ChatGPT entered the mainstream, with an AI is simply not going to work. I can imagine a corner case of a very limited world with a low ceiling of achievable knowledge in which its inhabitants get information fed directly into their brain, something like Neuralink for chimpanzees who live on a reservation. Human progress, however, depends on building upon existing knowledge and blazing new trails. An AI that regurgitates existing knowledge at best, and leftist talking points at worst, will only hasten the demise of Western civilization.

2 thoughts on “AI is not an Equalizer”

  1. Succinct analysis, SLeaze! If I were a teacher I would disseminate this to students who are getting encouraged nowadays to take almost anything their A.I. bot spits out at face value.

  2. I agree about overrating AI. It’s one of the most overrated technologies in decades.

    People are just really stupid about AI. They fail to realize it’s just a language generator, and not exercising any real intelligence.

    If you ask ChatGPT who won the 2024 election, it tells you it was Donald Trump. If you ask it who is president today, it tells you it is Joe Biden.

    Of course, ChatGPT seems to be one of the worst AIs out there today. Just like you, Aaron, I’ve found Grok to be a better choice. I’ve also run into the same problem with superficial analysis. Also interesting that you found that article about lawyers being fined for their AI-supported rule violations, because I have tested a couple of AIs with basic legal questions, and they got them horrifically wrong. Anything more advanced than “Is murder legal?” and you can count on an incorrect answer.

    Another important thing to note is that AI programs are not independent entities. They still need to be operated by people, who have to select the best query, determine the quality of the answer, and decide what parts of the answer to keep and which to discard. Something intelligent people are of course better at than less intelligent people. The same traits that give advantages in (non-physical) jobs and study programs today will give advantages even if AI gets better, because intelligent people will be able to use AI to achieve better results than less intelligent people are able to.
