The most beautiful idea in deep learning is that it actually works - Ilya Sutskever

In 2015, when we said we are going for AGI, people thought we were batshit insane. We don’t get mocked as much now - Sam Altman

If coding became 10x more productive, we would have more code, rather than fewer programmers. There’s a supply issue - Sam Altman

Tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought - Stephen Wolfram

Deep learning equals getting something from applying scale. What did people used to do with supercomputers? - Ilya Sutskever

LLMs are like System 1 - Andrej Karpathy

We have to view this as a—potentially surprising—scientific discovery: that somehow in a neural net like ChatGPT’s it’s possible to capture the essence of what human brains manage to do in generating language - Stephen Wolfram

If I were a tester in a Turing Test, I feel like I wouldn’t have to wait for ChatGPT to come up with nonsense in order to decide it’s a computer. As someone put it in another thread, it’s way more “erudite” than a human. That is, even with the caveat that it doesn’t fundamentally understand what it’s saying, it can construct plausible, voluminous and often correct discourse on way more topics than a human ever could - Chuckstar

Everyone grabs everything they can, they dump it in a huge file, and they kind of set it on fire to train some huge thing, and no one really knows yet what data in the pile actually matters - David Holz

GPT is now at the level of a somewhat confused maths undergraduate who doesn’t really understand the material but has studied everything - Terence Tao

With GPT questions, the vaguer the better sometimes - Terence Tao

The thing about language models is the more I look at them, the more I think that they’re fractally interesting. Focus on any particular aspect, zoom in and there are just more questions, more unknowns and more interesting things to get into - Simon Willison

GPT will have a shocking degree of understanding (because to do good prediction requires understanding the underlying structure of stuff (the underlying reality), as seen through the lens of text) - Ilya Sutskever

GPT-4 can just about add 40-digit numbers. It has learnt an internal circuit for how to do it - Greg Brockman
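For context on what such a learned circuit must capture: long addition is the classical carry-propagation algorithm, where each digit's result depends on a carry rippling in from the right. A minimal sketch of that algorithm (the function name and test values are illustrative, not anything from the quote; this is the reference procedure, not a claim about GPT-4's actual internal mechanism):

```python
def add_digit_strings(a: str, b: str) -> str:
    """Grade-school addition on decimal strings, digit by digit from the
    right, propagating the carry -- the sequential dependency a learned
    'addition circuit' would somehow have to capture."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# Two 40-digit operands, as in the quote's scale:
x = "1234567890" * 4
y = "9876543210" * 4
print(add_digit_strings(x, y))
```

Note the carry makes the computation inherently sequential across all 40 digits, which is why Wolfram's "computationally shallow" point below is often illustrated with exactly this kind of task.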

It’s part of the lore of neural nets that—in some sense—so long as the setup one has is “roughly right” it’s usually possible to home in on details just by doing sufficient training - Stephen Wolfram

Cases that a human “can solve in a glance” the neural net can solve too. But cases that require doing something “more algorithmic” the neural net tends to somehow be “too computationally shallow” to reliably do - Stephen Wolfram