
In many examples, LLMs betray the fact that they are not reasoning: when given problems that require reasoning to solve, they fail.

Even in this discussion, someone gave an example of coming up with board game rules. The LLM judged every set of rules valid, because they looked and sounded like board game rules, even when they were not.
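To make the kind of test I mean concrete, here is a minimal sketch that feeds a deliberately self-contradictory rule set to a model and asks whether it is valid. The OpenAI Python client, the model name, and the rule text are just placeholders for whatever setup you want to probe:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Deliberately broken rules: turn order is defined in two
    # incompatible ways (clockwise vs. counterclockwise).
    rules = (
        "1. Players take turns in clockwise order.\n"
        "2. On your turn, draw one card, then play one card.\n"
        "3. After each turn, play passes to the player on your "
        "right, i.e. counterclockwise.\n"
    )

    prompt = (
        "Are the following board game rules internally consistent? "
        "Answer VALID or INVALID, then explain briefly.\n\n" + rules
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you are testing
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

The claim is that a model which is only pattern-matching will tend to answer VALID, because the text reads like board game rules, even though rules 1 and 3 contradict each other.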

In short: you can learn a subject, build a mental model of it, play with it, and rotate it or infer new things about it.

LLMs are more analogous to actors who have learnt a stupendous number of lines and know how those lines work.

They are, by definition, models of language.

If you want a better version, GenAI needs to be able to generate working voxel models of hands and 3D objects just from images.




I don’t believe the board game rules example. I think this would be a piece of cake for an LLM. I’m happy to be proven wrong here if you share an example.

This is the user I took the example from: https://news.ycombinator.com/item?id=47689648#47696789


