Hacker News | mzhaase's comments

Something I have started doing is telling it to spawn a reviewer agent after every code change; in the Claude rules folder I specify exactly how I want stuff done.

Mostly I use it to write unit tests (I just dislike production code that is not exactly the way I want it). So there is a testing rule for all files in the test folder that lays out how they should be written. The agent writing the tests may miss some rules due to context bloat, but the reviewer has a fresh context window and looks only at those rules. It does result in somewhat simpler code.
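For illustration, a testing-rules file of the kind described might look something like this. The path and the specific rules are hypothetical, not the commenter's actual setup:

```markdown
<!-- .claude/rules/testing.md (hypothetical example) -->
# Unit test rules

- All tests live under `tests/`, mirroring the source tree.
- One behavior per test; name tests `test_<unit>_<behavior>`.
- Use plain asserts; no mocking unless the module's rule file allows it.
- The reviewer agent checks changed test files against these rules only.
```

The point of the split is that the reviewer starts with a fresh context window containing just the rules and the diff, so it does not inherit the writer agent's accumulated context.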


You can do everything from C#; Rider launches the player only for debugging if you want. The only thing you probably do want to use the UI for is... building UIs.

Why should we only do things that produce some sort of value? Do we really want to reduce all of human existence to increasing profits?

You said "value" and "profit". I said "useful".

What’s a better method for determining how to utilize and distribute resources? To determine where energy should be used and where it should be moved from?

Some things are just enjoyable. I get no real utility from photography - it’s not my career, it’s not a side gig, and I’m not giving prints out as gifts. Most of the shots never get printed at all. I do it because I enjoy the act itself, of knowing how to make an image frozen in time look a particular way by tweaking parameters on the camera, and then seeing the result. I furthermore enjoy the fact that I could achieve the same result on a dumb film camera, because I spent time learning the fundamentals.

It has always seemed to me that LLMs may be like the language center of the brain, and that there should be a "whole damn rest of the brain" behind it to steer it.

LLMs miss very important concepts, like the concept of a fact. There is no "true", just consensus text on the internet given a certain context. Like the recent study where LLMs gave wrong info when the biography of a poor person was in the context.


I think much along the same lines. LLMs are probably even just a part of the language center.

And of course they also miss things like embodiment, mirror neurons etc.

If an LLM makes a mistake, it will tell you it is sorry. But does it really feel sorry?


> But does it really feel sorry?

And what does it mean to feel sorry? Beyond the fallible and imprecise human introspective notion of "sorry", that is. A definition that can span species and computing substrates; a deanthropomorphized definition of "sorry", so to speak.


Ever practiced the form of meditation where you just witness your thoughts? They seem just like LLM-generated words: both factual and confabulated nonsense.


That's unlikely. But they are an awful lot like Turing machines (the KV cache is roughly analogous to the Turing tape), so their architecture is strongly predisposed to being able to find any algorithm, possibly including reasoning.


Until it can disassemble a robot to attach a programmer to the mainboard, it cannot.


It can: it has meat buttons it can press or boss around.


We are all agents now


Instead of market capitalization, have you looked at comparisons for happiness?


Or even lifespan… It's crazy that the USA is so far ahead in tech, yet life expectancy is 78, versus 81 in Germany or 84 in Spain.


Liquid oxygen has the same color.



The title is: This ESP32 Antenna Array Can See WiFi

And every time I see something like this, I like to remind myself what a spherical grid of laser-linked Starlink satellites is really capable of, beyond the mere internet access it is advertised for.


Your link has the "si=" tracking parameter in it.
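For anyone who wants to strip such parameters before sharing a link, here is a small sketch using only the Python standard library. The function name and the example URL are illustrative, not from the thread:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_tracking(url, params=("si",)):
    """Return the URL with the named query parameters removed."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in params]
    return urlunparse(parts._replace(query=urlencode(query)))

print(strip_tracking("https://youtu.be/abc123XYZ?si=tracker&t=42"))
# -> https://youtu.be/abc123XYZ?t=42
```

The same approach extends to other common trackers by passing e.g. `params=("si", "utm_source", "utm_medium")`.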



Instead of the vibe-admin approach, why not have the LLM write an Ansible playbook? At least it's repeatable and auditable that way.
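To make the contrast concrete, here is a minimal sketch of the kind of playbook an LLM could be asked to produce instead of running ad-hoc shell commands. The host group and package are placeholders, not anything from the original comment:

```yaml
# site.yml (hypothetical example): idempotent, versionable, reviewable.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because each task declares a desired state rather than a command to run, rerunning the playbook is safe, and the diff in version control shows exactly what the LLM changed.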


Water based or solvent based paint?


Should be solvent based.

