Do LLMs actually know anything?

For example, can an LLM know if dogs are hairy without knowing what dogs are or what hairiness means?

Altman and Sutskever are amusingly coy on the subject.

“Well, they certainly learnt something” - Sam Altman

“Saying what understanding means is hard, and so we measure prediction instead” - Ilya Sutskever

This is part of the idea that “intelligence equals prediction”, which perhaps explains how OpenAI thought of all this in the first place.
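For concreteness, the “prediction” in question is next-token prediction: during training the model is only ever asked to assign probabilities to the next token given the preceding ones. Here is a minimal sketch of that objective, assuming a PyTorch-style model that maps token ids to logits (the model itself is a placeholder, not any particular implementation):

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids: torch.Tensor) -> torch.Tensor:
    # token_ids: 1-D tensor of token ids for one training sequence
    inputs, targets = token_ids[:-1], token_ids[1:]
    logits = model(inputs)  # shape: (sequence_length, vocab_size)
    # Cross-entropy = average negative log-probability assigned to each actual next token.
    return F.cross_entropy(logits, targets)
```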

Representing ideas

What did Sutskever mean when he said this?

“An LLM is great for learning about the world (including representations of ideas), but not so good for producing output” - Ilya Sutskever

Here is Gemini’s answer.

Transformer Multitasking

Here is another Sutskever quote:

“during training the transformer is forced to multitask a huge number of tasks”

And here are Gemini’s thoughts.
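One way to read that quote: the objective never changes, but because the training text is so varied, predicting the next token quietly becomes whatever task the surrounding text implies. A toy illustration (the snippets are invented, and the loss referred to is the next-token objective sketched earlier):

```python
# One objective, many implicit tasks: the "task" is whatever the text demands.
training_snippets = [
    "The French word for 'dog' is",   # translation
    "12 * 7 =",                       # arithmetic
    "The capital of Peru is",         # factual recall
    "def fibonacci(n):",              # code completion
]
# During training each snippet (plus its continuation) is scored with the same
# next-token loss; no task labels or separate objectives are involved.
```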

Other

Do LLMs actually know anything?

Can they list the things they know?

Will LLMs ever be sure of anything, or will they always hallucinate?

I think it’s helpful to think of LLMs (and other generative AI) as machines for making more stuff like the training data. So you’ll always get some output that is “wrong” unless you have a way to map the LLM’s output onto a space of purely “true” facts.
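As a toy illustration of that last point, here is a sketch of one such mapping: keep a generated statement only if it matches a store of independently verified facts. Everything here (the fact store, the normalisation, the example statements) is a hypothetical placeholder, not a description of how any real system works:

```python
VERIFIED_FACTS = {
    "dogs are mammals",
    "water boils at 100 c at sea level",
}

def normalise(statement: str) -> str:
    return statement.strip().lower().rstrip(".")

def keep_only_true_facts(generated_statements: list[str]) -> list[str]:
    # Map the LLM's output onto the space of known-true facts;
    # anything that falls outside that space is dropped rather than asserted.
    return [s for s in generated_statements if normalise(s) in VERIFIED_FACTS]

print(keep_only_true_facts(["Dogs are mammals.", "Dogs are made of glass."]))
# -> ['Dogs are mammals.']
```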

Do they build a model of the world (as opposed to just a language model)? How would you tell the difference?

Are these LLMs an important step towards artificial science and engineering?

Are they on the road to superintelligence?

Can the knowledge in an LLM be extracted in other forms, or only as verbiage?

What did Sutskever mean when he said “an LLM is great for learning about the world (including representations of ideas), but not so good for producing output”? - https://www.youtube.com/watch?v=SjhIlw3Iffs