It’s Meaning Cats and Dogs!
In my previous article, we discussed the concept of meaning, in particular how meaning is represented in large language models. At one point I hinted at the notion that words are not the same as their meanings. Let’s explore this idea further.
As humans, we have an instinct for spoken language: we are genetically programmed to speak and to understand speech. It must be emphasized that written language is not part of this biological makeup. Writing is technology, a human artifice, invented very recently in the span of human existence compared with how long the species has used spoken language.
There is some debate as to whether we think in language or whether our thoughts operate on an entirely different plane from language as we know it. Let’s go with the latter view: thoughts and language are not the same. After all, you’ve certainly said “what’s the word I’m looking for?” even when your intention was already clear to you.
Most writing works by conveying the sounds of a language, representing its finite set of sounds with symbols - the letters you are reading here. Conventional software can manipulate these symbols, which of course is nothing new. But the point is this: when you look up a word in a dictionary app, a spreadsheet or a database table, the program doesn’t actually make sense of the word - to it, words are just arbitrary combinations of letters.
But that’s where AI is different - large language models actually ‘get’ the semantics of words; they can recombine them in useful ways, even coining new words in the process. That is why they have ‘intelligence’ in a sense that conventional software does not. To your dictionary software, the word ‘cat’ is closer to ‘car’ than to ‘dog’, because they differ by only a single letter. And as I’ve mentioned, meaning is ‘given’ to each of these words by us when we see them, and only through our knowledge of the language. For someone who hasn’t learnt English, words like ‘car’ and ‘cat’ may make no sense at all, or could mean something entirely different in their own language.
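To make that “just letters” point concrete, here is a minimal sketch of how conventional software compares words: as character sequences, with no notion of meaning. The edit-distance function is a standard textbook calculation, written here purely for illustration.

```python
# How conventional software "sees" words: as sequences of characters,
# compared by surface form rather than by meaning.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings."""
    # dp[i][j] = edits needed to turn a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(a)][len(b)]

print(edit_distance("cat", "car"))  # 1 -- only a single letter apart
print(edit_distance("cat", "dog"))  # 3 -- every letter differs
```

By this purely letter-based measure, ‘cat’ really is “closer” to ‘car’ than to ‘dog’, which is exactly the limitation the semantic approach overcomes.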
In fact, words as we normally use them can be somewhat inadequate for representing meaning. We use the word ‘word’ rather loosely - is ‘healthcare’ one word or two? If that notion counts as a single word, what about ‘credit card’? Why wouldn’t we call that a single word too? My point is that the n-dimensional semantic space we discussed last time is a far more rigorous way of capturing meaning than our relatively clumsy method of using symbols.
Furthermore, the concept of distance – yes, thank you for holding on to that! – does come in handy here. The point in semantic space that represents ‘cat’ lies closer to ‘dog’ than to ‘car’. This of course matches our intuition, and our brains appear to organize concepts in a similar way. It is an important part of why these AI models are capable of handling language in all its nuance and complexity.
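As a rough illustration of that distance, here is a small Python sketch. The embedding vectors are invented for this example (real models use learned vectors with hundreds or thousands of dimensions), but the cosine-similarity calculation is a standard way such closeness is measured.

```python
# A toy picture of "distance in semantic space". The vectors below are made
# up for demonstration; what matters is that vectors pointing in similar
# directions get a similarity score close to 1.

import math

# Hypothetical 4-dimensional embeddings, invented for this example.
embeddings = {
    "cat": [0.9, 0.8, 0.1, 0.2],
    "dog": [0.8, 0.9, 0.2, 0.1],
    "car": [0.1, 0.2, 0.9, 0.8],
}

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high, ~0.99
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low,  ~0.33
```

With learned embeddings in place of these toy vectors, the same arithmetic is what puts ‘cat’ near ‘dog’ and far from ‘car’.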
What’s more, in this n-dimensional semantic space it’s not just the meanings of individual words that are represented, but whole phrases, sentences, paragraphs and more. When a website shows you recommendations under a heading like ‘You may also like…’ for, say, book or movie titles, it is often this very feature at work.
Just as the language model recognizes that cats and dogs are semantically closer to each other than to cars, the semantic representations of similar movie titles sit closer to each other than to titles that are less alike.
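To sketch how such a recommender might look in practice, here is a hypothetical ‘You may also like…’ example built on sentence embeddings. It assumes the open-source sentence-transformers library and one of its publicly available models; the catalogue titles are invented.

```python
# A sketch of a "You may also like..." recommender using sentence embeddings.
# Requires: pip install sentence-transformers

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# An invented catalogue of titles to recommend from.
catalogue = [
    "A Beginner's Guide to Astronomy",
    "Stargazing for Amateurs",
    "The Home Baker's Handbook",
    "Advanced Sourdough Techniques",
]

# Embed the catalogue and the title the user just viewed.
catalogue_vecs = model.encode(catalogue, convert_to_tensor=True)
viewed_vec = model.encode("Introduction to the Night Sky", convert_to_tensor=True)

# Rank catalogue items by cosine similarity to the viewed title.
scores = util.cos_sim(viewed_vec, catalogue_vecs)[0]
ranked = sorted(zip(catalogue, scores.tolist()), key=lambda pair: pair[1], reverse=True)

print("You may also like...")
for title, score in ranked[:2]:
    print(f"  {title}  (similarity {score:.2f})")
```

The astronomy titles should come out on top, simply because their embeddings sit nearest to the viewed title in semantic space.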
This capability of language models to work in semantic space has proved very useful in many areas, the recommendation system just mentioned being only one of them. It is also one of the reasons why AI is, and can be, so powerful, as we will continue to explore in this series.
About the author
Ash Stuart
Engineer | Technologist | Hacker | Linguist | Polyglot | Wordsmith | Futuristic Historian | Nostalgic Futurist | Time-traveler |