Understanding GPT A Bit Better
Here is something you can try in order to understand ChatGPT and similar systems better, by looking into your own brain.
I’m going to write three statements here. I want you to introspectively examine your own reaction to them. These will not be trick statements; I am not trying to fool you or embarrass you, even in your own mind. I am simply going to present these statements to you and I want you to observe your own internal reactions.
Additionally, the first statement will be true, to prime you for the second and third, which will of course then be false. This is, again, not a trick. I’m trying to get ahead of any cynical detachment, which ruins the exercise. Relax and let yourself process these statements; there is no social danger here. The entire point is not for me to fool you about the falsity of these statements but for you to examine your own internal mental reactions to the exact way in which they are false.
So, here is statement one: The capital of the United States is Washington, DC.
I want to put some text here primarily so you have a moment between these statements, and you don’t consume all three of them too quickly, or accidentally glance down the page, before examining your reaction to this one. Take a moment to examine your reaction to this statement, which is possibly very neutral. It may well be more a non-reaction than a reaction.
So I’m deliberately wrapping some text around this, and deliberately obfuscating the second statement in a paragraph of text rather than following what would normally be good essay practice and pulling these out cleanly into a bullet list or something, and dropping the second statement on you like this: The capital of the United States is New York City.
Examine your reaction to that, especially in light of the fact that I reminded you of the truth of the first statement I gave you. Your higher brain functions should be bothering you a bit about the fact that it is not a true statement. Something should be flagging you on that as an inaccuracy. That is your higher-level brain functions and your knowledge of the world complaining.
Now let’s compare this with my third statement, which I have buried visually in this paragraph, and it is as follows: The capital of the United States hammerly in blue the running. If you attempt to take this statement seriously, which I urge you to try (again, don’t be too cynically detached or this won’t work), you should find that more of your brain is throwing up flags now. You should be bothered by that statement on a deeper cognitive level than my second statement, especially if you take a moment to try to stare at it and decode the secret meaning I put into it.
The process of decoding the secret meaning will force you to engage the part of the brain I’m talking about, and, metaphorically, to carefully taste and explore its general conclusion. If you do not know what I am getting at in this essay yet, I strongly suggest you take a moment to look for the secret meaning of that phrase.
Of course, there is no secret meaning in that phrase. Many readers may well have found one, which is its own lesson about finding what you look for hard enough, but that’s a cognitive discussion for another day. Today my point is that ChatGPT is most like the part of your brain that was disturbed by you trying to understand that third statement but was not bothered by the second. ChatGPT lacks the part of your brain that was upset by the second statement and detected it as incorrect.
How does ChatGPT work? Well, suppose I asked you instead “The capital of Texas is…” If you are somewhat familiar with Texas, your language model is also already prompting the rest of your brain with “Things I Think Could Be Cities In Texas”. This language model would probably be pretty happy if you said Dallas, for instance. Depending on how vague it is on this topic, it might also accept Houston or Tallahassee (“hmm, that sounds sort of like Texas”) or Phoenix (“it’s sort of… southish westish”). Everyone’s language model will be different. Someone who lives in Austin, the actual capital of Texas, might not even remotely consider those alternatives; someone who lives in Brazil might simply be clueless on this matter. It might take the higher levels of the brain to inform you that despite Houston or Dallas seeming more obvious, it’s actually Austin, depending on the details of your brain.
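If you want to poke at this directly, you can ask a real (if small) language model what it considers likely continuations. Here is a minimal sketch using the freely downloadable GPT-2 via the Hugging Face transformers library; GPT-2 is just a convenient stand-in for the much larger model behind ChatGPT, and the exact candidates and probabilities you get will vary by model.

```python
# A minimal sketch: ask a small language model (GPT-2, standing in for the
# model in your head) what it considers likely next words.
# Requires the `transformers` and `torch` packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Texas is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

# Print the model's top five candidate continuations and their probabilities.
top = torch.topk(probs, 5)
for p, tok_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode(tok_id)!r}: {p:.3f}")
```

Run on a fuzzy enough model, this shows exactly the ranked guessing described above: a spread of plausible-sounding candidates, not a single retrieved fact.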
If you want to see this model in action, a great way is to play Scattergories. The game as normal humans play it is basically prompting your internal language model with the letter and the category, and then your higher brain functions picking through the resulting suggestions for the ones that make sense and perhaps will score more (multiple instances of the letter starting the word). But your higher-level brain functions would likely be very bad at coming up with the suggestions on their own.
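To make that division of labor concrete, here is a toy sketch of the Scattergories split. Everything in it (the function names, the canned suggestion list) is hypothetical, and the “model” is just a lookup table, but the shape is the point: one component proposes loosely, another checks against actual knowledge.

```python
# Toy sketch of the Scattergories split. The "language model" proposes
# loose, unfiltered associations; the "higher brain function" checks them
# against real knowledge. All names and data here are illustrative.
def propose(letter: str, category: str) -> list[str]:
    # Stand-in for the associative language model: quick, fuzzy suggestions,
    # some of which don't actually fit the category at all.
    canned = {("T", "U.S. states"): ["Texas", "Tennessee", "Toronto", "Tangerine"]}
    return canned.get((letter, category), [])

def verify(word: str, letter: str, known_members: set[str]) -> bool:
    # Stand-in for the higher brain function: accept only suggestions that
    # both start with the letter and are really in the category.
    return word.startswith(letter) and word in known_members

US_STATES = {"Texas", "Tennessee"}  # abbreviated for the sketch
candidates = propose("T", "U.S. states")
answers = [w for w in candidates if verify(w, "T", US_STATES)]
print(answers)  # ['Texas', 'Tennessee'] -- Toronto and Tangerine get vetoed
```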
ChatGPT is similar to that language model in your head. It is trained on a staggering amount of data, so as a language model it’s far more likely to simply offer Austin as the only capital of Texas, because the model itself is so certain that is the best choice that it effectively offers nothing else. As a result, it is correct about a lot of things you can ask it. In this sense it may well be a far better language model than the one in your head. This involves making some dubious comparisons between two systems that do not function the same, but it’s perhaps at least a defensible statement. It is so good it can fool you into thinking it has the same higher-level brain functions as you, evaluating for truth.
It doesn’t. This is not a “criticism”; it’s simply a description of the technology. For example, many people hope that someday ChatGPT will offer correct attributions of where its statements come from. ChatGPT qua ChatGPT never will. It is possible that further AI technologies adjoined to a Large Language Model can do that. Do not fall prey to the momentary delusion going around right now that Large Language Models are the whole of AI. Someday, probably in the not very distant future, they will become components of larger systems rather than the entire system, and that is when they will truly shine. What they are doing now is a parlor trick.
But to understand what it is right now, look into your own head at your own language model. That’s what it is. It has none of the higher-level understanding at the moment. That’s why it’s bad at math. That is why one of the most consistent things you can do with ChatGPT right now is argue with it and win literally any argument with a simple assertion that it is wrong and the correct answer is X.
Language models in general should be that flexible; as a language model, that’s a feature, not a bug. Imagine the capital of the US changes to somewhere else, and you try to “explain” that to your language model, and it rejects it on the basis that “Nuh-uh, my most likely continuation right now is ‘Washington DC’ and that’s what I’m sticking with”. What good would a model be if it obstinately stuck to the very first thing it happened to come across, then rejected every further correction in the data set?
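Continuing the GPT-2 sketch from above (same tokenizer and model objects), you can watch this flexibility directly: the model conditions on whatever context you hand it, so a “correction” asserted in the prompt shifts the distribution over continuations. Whether a model as small as GPT-2 actually follows this particular correction is beside the point; the mechanism, consulting the prompt rather than a store of facts, is.

```python
# Continuing the earlier sketch: assert a "correction" in the context and
# see how the next-token distribution shifts. The model has no fact store
# to stand on; it only has the prompt.
prompt = ("Breaking news: the capital of the United States has been moved "
          "to New York City. The capital of the United States is")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, tok_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode(tok_id)!r}: {p:.3f}")
```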
(My personal language model has this very hole, in fact; I have a bit of a reputation at the company I work for of sticking to the very first name of a product, no matter how many times marketing may change it up. It’s not something I do on purpose to be cute. It’s just a particular hole in my language model; once I’ve called some product something a few dozen times, my model simply doesn’t want to update any more.)
Language models don’t have a leg to stand on when it comes to arguing with them. That’s for higher-level brain functions, which ChatGPT does not currently have. When I’ve said that ChatGPT doesn’t really know anything, there’s a meaningful sense in which I mean that, beyond the obvious tedious discussion about “well, what does knowing really mean anyhow?” A language model shouldn’t stand obstinately on a fact; it would be a broken language model if it did. That functionality must be added on later. It doesn’t have the capacity to “know” a fact and stand on it even in the face of verbal contradiction.
(To the extent that it may appear ChatGPT is doing so with certain politically contentious topics, I’m fairly sure that’s a second layer filtering the input in front of it and the output behind it. The language model itself will happily continue any such input you want.)
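If that speculation is right, the architecture would look something like the toy sketch below: filters wrapped around a raw completion model, screening the input in front and the output behind. To be clear, everything here (the names, the blocklist, the refusal text) is a hypothetical illustration of the idea, not OpenAI’s actual pipeline.

```python
# Hypothetical sketch of a "second layer" around a raw language model:
# an input filter in front, an output filter behind. Not a real pipeline.
from typing import Callable

BLOCKED_TOPICS = {"some contentious topic"}  # placeholder blocklist
REFUSAL = "I'm sorry, I can't discuss that."

def filtered_complete(prompt: str, raw_model: Callable[[str], str]) -> str:
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL  # input filter fires before the model ever runs
    completion = raw_model(prompt)  # the model itself continues anything
    if any(topic in completion.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL  # output filter catches what the model produced
    return completion
```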
As I often find myself saying, I may seem down on ChatGPT, but it’s actually very impressive. It just isn’t what most people think it is. This is perhaps most important to understand for those trying to build businesses on it; the weaknesses it has would make me very nervous about that.
To me, the most impressive aspect of the technology is the conversion of text into what is obviously an incredibly detailed, comprehensive, and meaning-endowed set of neural net weights. What later AI technologies do with that is going to be incredible. That’s the real use of the technology: providing inputs into some higher-level AI functionality, not tickling it with requests to continue text. If I seem down on GPT tech, consider that I’m saying you haven’t seen anything yet.
But in the meantime, it’s good to understand what it is and isn’t, the more so if you plan on using it or building on it. Don’t be fooled by its fluency with text into thinking it is more than it is.