
I'm halfway through Foundation on Apple TV and this piece landed hard (you had me at Asimov) because of it. Asimov's whole deal with psychohistory is that you can predict what large populations do even when individuals are unpredictable. Seldon doesn't need anyone to be honest; he needs the math to converge on something real about how people actually behave.

LLMs are sort of the inverse of that. They produce text that looks like the statistical aggregate of human knowledge, but nothing underneath is converging on truth. Seldon's math worked because it modeled actual dynamics. LLMs work because they model plausible text. The "jagged competence frontier" Kingsbury describes (crushing multivariable calculus, then failing a word problem) is exactly what you'd get from a system that learned the shape of correct answers without learning what makes them correct.

The part of Foundation that feels prescient right now isn't the predicting-the-future stuff. It's the part where everyone can see Empire is hollowing out and the response is to just...keep going. More spectacle, more confidence, less substance holding any of it up. Hmmm, wait...




How are you enjoying the live-action Saturday-morning-cartoon version of Foundation, with bonus plucky protagonists?

Ha! Yeah, it is not the books. That said, it's been long enough since I read them that I didn't feel too annoyed ("oh wait, I don't remember that in the books" did come up more than once, as did "oh wait, they're mixing up a bunch of books, is this the Robot series?"). Personally I think it's done really well, but hey, this is a way to keep the money coming in longer.



