A friend recently sent me the video below, in which a diverse Google manager, Mo Gawdat, talks about how “intelligent” AI is, making some truly outlandish claims. By the way, Mo is shorthand for “Mohammad”. (I have met a surprising number of Mo’s in tech.) Anyway, Mohammad Gawdat tells us that the current version of ChatGPT has an IQ of 155 and that the next version will be many times more intelligent. There is so much AI bullshit floating around that it seems relevant to put some of it in perspective.
The immediate concern is that companies like Google have a vested interest in promoting AI. They have done that for years, and they are quite brazen about it. Quite infamous is their Google Duplex demo from six years ago, which was supposedly able to make phone bookings on your behalf. This demo was most definitely not faked at all. I really do not know why it was not turned into a general-purpose product, given how far ahead of the curve Google was. Alas, for some strange reason, they do not talk about this product much anymore. I am reading that you can still use it if you buy one of their Pixel phones, but not many people seem to do that.
There is a pattern in AI that goes back all the way to the 1960s: hucksters in academia and business making big promises in order to get a lot of money from investors and government. They ride high for a few years, until they are supposed to deliver a demo. With a further serving of bullshit, they often manage to get additional funding, but at some point the well runs dry. Afterwards, nobody wants to touch AI with a ten-foot pole for years. This period is called an “AI winter”, and it happens every few years. It lasts until there is yet another revolutionary idea, and a lot of investor money.
About a decade ago we were supposed to have self-driving cars soon. All it repeatedly took was a few dozen million more in additional funding. (Oh, and look at all those competitors! Surely, Shlomo, you cannot afford to fall behind in this race, so please give us some more money!) Countless billions of investor money have been burnt, and in the end, fully autonomous driving is still a pipe dream. It may not have sounded like that in 2015, but if you still believe that a major breakthrough is right around the corner, then maybe look for people who have a bridge to sell you. Even Elon Musk, the biggest bullshitter in the industry, has toned down his rhetoric considerably.
There is nothing really intelligent about artificial intelligence. In the end, you are dealing with large-scale optimization over an enormous amount of data. Yet, the real world is too complex to make AI viable. Constrained environments are a different topic, and this is precisely why you cannot extrapolate from an AI beating the world champion in Go to AI systems being able to do everything. Besides, with game-playing AIs I wonder how much of their success is simply due to enormous amounts of data. Go is too complex a game to fully solve, but if you have access to untold numbers of games in which players make very good to optimal moves, your AI can surely quite easily pick up the moves with the highest statistical likelihood of success. In fact, before the age of big data, this was how early chess-playing AIs were easily defeated: human opponents made moves designed to confuse the AI, and because there were no reference data available, the computer opponent struggled and was at a disadvantage.
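To make the statistical-mimicry point concrete, here is a minimal sketch of what “picking the move with the highest statistical likelihood of success” amounts to. All names are my own illustration (nothing like a real Go or chess engine): it merely counts outcomes in reference games, with no notion of strategy, which is also why it falls flat in a position with no reference data.

```python
from collections import defaultdict

def pick_move(position, game_records):
    """Pick the move with the best empirical win rate in this position.

    game_records is a list of (position, move, won) tuples. There is no
    search and no "understanding" here -- only counting how often each
    move led to a win in the reference games.
    """
    stats = defaultdict(lambda: [0, 0])  # move -> [wins, games]
    for pos, move, won in game_records:
        if pos == position:
            stats[move][0] += int(won)
            stats[move][1] += 1
    if not stats:
        return None  # off-book position: the pre-big-data failure mode
    return max(stats, key=lambda m: stats[m][0] / stats[m][1])

records = [("start", "e4", True), ("start", "e4", True),
           ("start", "d4", False), ("start", "h4", False)]
print(pick_move("start", records))  # -> e4 (best win rate in the data)
print(pick_move("weird", records))  # -> None (no reference games)
```

The second call is the point of the anecdote above: feed such a system a position it has never seen, and it has nothing to fall back on.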
Speaking of ChatGPT, a lot of normies were very impressed by it, and probably did not quite realize that there are very obvious limitation to the technology. I am not even referring to the woke filters that are designed to keep your queries (“prompts”) within the realm of the kosher. No anti-semitic Auschwitz arithmetic for you, goy! Yet, the core of ChatGPT is text completion. It cannot reason at all. Sure, if there are patterns in its database it can refer to, it will reproduce them. However, you will not get novel insights with it, and if you think that ChatGPT reproducing the content of some Buzzfeed listicle is a sign of human intelligence then maybe you should expect more from humanity.
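The “text completion” core can be illustrated with a toy next-word predictor. This is a bigram counter of my own construction, not a neural network, and real LLMs are vastly more sophisticated; but the training objective, predicting a likely continuation from observed patterns, is the same in spirit, and it shows why reproduction of patterns is not reasoning.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def complete(model, word, n=3):
    """Greedily continue a text with the most frequent next word."""
    out = [word]
    for _ in range(n):
        nxt = model[out[-1]]
        if not nxt:
            break  # never seen this word with a successor
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat ran")
print(complete(model, "the"))  # -> "the cat sat on"
```

The output looks fluent because the patterns were in the training data; there is no point at which anything resembling reasoning happens.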
After the release of ChatGPT, I recall coming across a video of a professor of philosophy who spoke about the problem that this tool can write term papers for students. The example given was straightforward, i.e. present the position of some philosophical school of thought. It was quite formulaic. Probably, you could find the content online on the freely accessible Stanford Encyclopedia of Philosophy. The only novelty was that lazy students don’t need to copy/paste and rewrite. The latter step is indeed a challenge for some. Instead, they can ask ChatGPT to rewrite the text. Obviously, the underlying problem is that this philosophy professor should perhaps not ask their students to merely regurgitate information, which is rather unworthy of a university education anyway.
ChatGPT is notoriously bad at mathematics and logic. It can, however, endlessly produce word salad for you. It is not clear how the fundamental limitations of large language models can be overcome. For now, ChatGPT will surely do a fine job with simple tasks like summarizing press releases, but I would not expect it to produce, for instance, a novel you would like to read. This is difficult enough for humans already. All of this is a pipe dream at this point, though, as we are not even at a point where an AI can play the 20-questions game.
Coding is another interesting aspect. Unlike what outsiders may imagine, coding does not consist of copying and pasting snippets from StackOverflow, a forum for programmers, or reproducing tutorials. Instead, the main challenge is modelling some kind of process with code and data. This will not go away. You quickly enter a realm that has a level of complexity far beyond what you see in tutorials. There may not even be much reference code out there because, unlike what coding NPCs think, there is a world beyond JavaScript and web development. I can see some of the tedium of programming in bloated mainstream technologies go away via coding assistants, but right now I cannot perceive a path towards machines writing complex programs.
A big problem in tech is referred to as “garbage in/garbage out”, i.e. if your data is poor, you cannot expect good results from your algorithms. ChatGPT and similar tools have been trained on texts available online. However, people have been flooding the web with AI-generated texts, so the noise is completely drowning out the signal. Surely you remember when SEO marketers took over the Internet. These people are still there, but thanks to ChatGPT and similar tools, they produce hundreds of times more bad content than they used to. Probably you can mathematically model text quality on the Internet over time. After a peak in the late 1990s, there was a steady but slow decline until 2007 (iPhone!), at which point the decline picked up a lot of speed. Probably, a decent analysis of online text would confirm that online writing got even worse from November 2022 onwards, when ChatGPT was first released to the public. How will you train your algorithm, in this case a large language model, if your input data is getting worse and worse?
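The feedback loop can even be simulated. The toy below, a bigram model of my own construction and nothing like how real LLMs are trained, repeatedly trains on its own generated output. Because generated text can only reuse words from the previous corpus, the vocabulary can shrink or stay flat but never grow: a crude analogue of generated text polluting future training data.

```python
import random
from collections import Counter, defaultdict

random.seed(0)  # deterministic toy run

def train(words):
    # Count next-word frequencies: a stand-in for "training on a corpus".
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, n):
    # Sample a continuation from the learned frequencies.
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        choices, weights = zip(*nxt.items())
        out.append(random.choices(choices, weights)[0])
    return out

corpus = ("the quick brown fox jumps over the lazy dog while the small "
          "red hen reads a long dull book about old grey owls").split()
sizes = []
for generation in range(4):
    sizes.append(len(set(corpus)))          # vocabulary richness
    model = train(corpus)
    corpus = generate(model, corpus[0], len(corpus) - 1)
print(sizes)  # vocabulary size per generation; it never increases
```

Each generation can only echo the one before it, so diversity is lost monotonically; real-world “model collapse” is messier, but the direction of travel is the same.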
Lastly, I have made the observation that low-IQ normies are a lot more likely to use ChatGPT. Bizarrely, the people least positioned to grasp AI are its biggest cheerleaders. I see this at work daily, when some ditzes and diversity hires suddenly send emails with proper spelling and formatting. Yet, the content is pure waffle and sometimes completely misses the point. Quite frankly, I think that the dumber someone is, the more likely they are not only to fall for the AI marketing drivel in the industry but also to use AI tools uncritically. Yet, by doing so, they draw attention to their lack of ability. Uncritically using AI is almost a negative status signifier at this point. Amazingly, there is also AI snobbery: people who incompetently use AI tools overestimate their usefulness but at the same time think they are superior to people who do not want to bother with them, or who only engage with AI to a limited extent. Yet, you are not at the “bleeding edge” of anything if your email or document is full of “hallucinations”, i.e. AI-produced nonsense, regardless of how professional your text may look to someone who does not read it.
The next AI winter is coming. As it turns out, we cannot make cars drive autonomously, and we also cannot make computers produce intelligent writing. For the last few years, we have seen companies burn many billions on AI. Yet, there does not seem to be any payoff. Not even the premier company in this industry, OpenAI, can make it work. According to various analyses, they may lose five billion dollars this year. Not only are their operating costs enormous, but consumer interest is also waning. AI is a lot less of a threat than companies looking for many millions in investor funding may tell you.
What do you think of the current state of automatic translation? I sadly did not pursue Computer Science, and it is too late for me to do it now. I have been relying on DeepL and Google Translate to learn Russian, in moderation and with scepticism, of course.
One thing I have noticed is that AI is not good at translating spoken English into spoken Russian, because spoken Russian tends to omit quite a few sentence elements. The result is just a word-for-word, if grammatically correct, sentence.
I do not know nearly enough languages to have an informed opinion, but it seems that translating a less nuanced language into a more nuanced one from the same language family works reasonably well, but only if there is a lot of source material available. For instance, I have the impression that translations from English to German are often quite serviceable, but that is not necessarily the case the other way around.
Lol at the “Mo’s” 😆
The Indians and Chinese are taking over the lower level jobs in finance here in Aus. That’s why I got out, plus they are decimating the trucking industry, eg 3 Indian dudes will drive a big rig all the way across Australia and swap shifts driving and sleeping ( and probably shitting and pissing) inside the truck for endless driving for the lowest pay Vs 1 Aussie trucker guy. So guess who is getting all the work now. Aussie truckers really pissed off about this.
Plus the removals industry,
Finance is a split between Chinese and Indian.
Somehow they have taken over the 7-11 petrol station chain also.
And the usual Uber – everything of course.
Oh yeah, and the real question re AI is: what is the year it becomes “self-aware” and decides we are unnecessary to its supreme existence?
Then decides to either a) launch the nukes or
b) use us as human batteries to fuel itself.
For real. Something akin to this is coming. 😱
Sci Fi tends to become reality.
“I have a bad feeling about this drop”, to quote the character Pvt. Frost from the movie Aliens.
Watch this space
While I agree with Aaron that AI is overhyped and that we won’t see big improvements in the near future, I think LLMs can be useful at work. I’m more convinced by LLMs than by most other tech that got hyped in the past.
“ I have made the observation that low-IQ normies are a lot more likely to use ChatGPT (…) see this at work daily, when some ditzes and diversity hires suddenly send emails with proper spelling and formatting. Yet, the content is pure waffle and sometimes completely misses the point“
This is true, but the ChatGPT-generated text is better than if these normies had written it all by themselves. It also takes them less time.
I also use ChatGPT for work myself. I give ChatGPT bullet points, ChatGPT writes the text, I then revise the text, and then I use ChatGPT for a spell check. I find this more efficient than writing the text all by myself.
I also use ChatGPT as a substitute for Google search.
ChatGPT should be a big productivity boost for companies. But I don’t think this boost will materialize as companies will just find ways to create more BS jobs and tasks.
I know it’s a cliché that some people tell others to “touch grass” a little, but yeah, by the time some AI can realistically operate a grader to level the earthwork at the exact point the surveyor marked, probably even our sons will be dead. And heavy construction machinery is the closest we have to “evil robots who can cause destruction”.
Worrying about Terminator scenarios is for low IQ people who additionally, well, don’t touch grass enough, like the typical gringo who thinks their country is the center of the universe and has watched too many Hollywood movies.
I don’t know if you remember it, but in the Gundam and Patlabor universes, it is precisely heavy construction machinery that is turned into “big bad evil robots.”