Hacker News

I find LLMs useful for regurgitating one-liners I can’t be bothered to remember, or for tasks where even a flat-out wrong answer is okay because you can just do it yourself.

For all the folks spending a lot of time and energy setting up MCP servers, AGENTS.md, etc.: I think this shows that the LLM cannot do what AI boosters are selling it as, and that it needs extreme amounts of guidance to reach a desired goal, if it even can. This is not an argument that the tech has no value. It clearly can be useful in certain situations, but that is not what OpenAI/Anthropic/Perplexity are selling, and I don’t think the actual use cases have a sustainable business model.

People who spend the energy to tailor LLMs to their specific workflows and get them to work well: amazing. But does this scale? What happens if you don’t have massive amounts of money subsidizing the training and infrastructure? What’s the actual value proposition without all that money propping it up?



> I find LLMs useful in regurgitating one-liners

This was the case for me a year ago. Now Claude or Codex routinely deliver finished, tested, complete features in my projects. I move much, much faster than before, and I don’t have an elaborate setup - just a single CLAUDE.md file with some basic information about the project, and that’s it.
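For context, such a file needs nothing fancy. A minimal CLAUDE.md might look like this (contents invented for illustration - the format is just plain markdown notes the agent reads at startup):

```markdown
# Project notes for Claude

- Python web service; source in `src/`, tests in `tests/`.
- Run tests with `pytest`; run `ruff check .` before committing.
- Don't modify files under `migrations/` without asking first.
```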


People keep saying this, and I agree Claude has gotten a lot better even in my own experience, but I think the value is questionable.

What’s the point of adding features that are inscrutable? I have gotten Claude to build a feature that mostly works, and when it doesn’t work quite right I spend a massive amount of time trying to understand what is going on.

For things that don’t matter too much, like prototyping, it’s great to get a working demo out faster, but it’s kind of terrifying when people start doing this for production stuff, especially if their domain knowledge is limited. I can personally attest to seeing multiple insane things that were clearly vibe coded by people who don’t understand them. In one case, I saw API keys exposed because they were treating database users as regular user accounts for website login auth.

> I move much, much faster than before

This is a bad metric, as has been shown repeatedly in other contexts. Moving faster is not necessarily productivity, nor is it value.


That was equally true of human-written code that you didn’t write. So if a human had written that insecure program, what would the consequences be? Would they go to prison? Would they lose their license to practice? Would they get sued? If the answer to all of these is no, then where was the assurance before? These anecdotes of “well, one time I saw an AI-written program that sucked!” are just as valid as “well, one time Azure exposed government user data.”


> What’s the point of adding features that are inscrutable?

You are assuming that the additional speed comes at the cost of codebase comprehension. For me that’s not the case - I never push generated code I don’t fully understand. It does take time, sure, but it still takes me much less time to write a spec, execute with AI, and then review than to write the thing myself.


This matches my experience. I've been building structured pipelines around LLMs, and the biggest lesson is that the raw model is maybe 30% of the value. The other 70% is the methodology you wrap around it: what data you feed in before the conversation starts, what you do when the model gives a weak answer, and whether you track open questions and circle back to them.

The irony is that "extreme amounts of guidance" is exactly what makes a human domain expert valuable, too. A senior consultant isn't smarter than a junior one; they have a better methodology for directing attention to what matters. The actual problem with the "just throw an agent at it" approach isn't cost. It's that without structure, you can't tell the 10% of useful output from the 90% of noise.
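The wrap-around methodology described above can be sketched in a few lines. Everything here is hypothetical - `call_model` is a stand-in for whatever LLM client you use - but it shows the three ingredients: context loaded before the conversation, weak answers retried, and unresolved questions parked for later follow-up:

```python
from dataclasses import dataclass, field


@dataclass
class Pipeline:
    context: str                                   # data fed in before the conversation starts
    open_questions: list = field(default_factory=list)

    def ask(self, question, call_model, min_confidence=0.7, retries=2):
        """call_model(context, question) -> (answer, confidence); retry weak answers."""
        for _attempt in range(retries + 1):
            answer, confidence = call_model(self.context, question)
            if confidence >= min_confidence:
                return answer
        # Still weak after retries: track it so we can circle back later.
        self.open_questions.append(question)
        return None


# Usage with a toy stand-in model (not a real API).
def toy_model(context, question):
    return ("42", 0.9) if "meaning" in question else ("?", 0.1)


p = Pipeline(context="project notes")
assert p.ask("meaning of life?", toy_model) == "42"
assert p.ask("hard question", toy_model) is None
assert p.open_questions == ["hard question"]
```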


> I find LLMs useful in regurgitating one-liners that I can’t be bothered to remember

I found LLMs make a fabulous frontend for git :-D


ah, you've found the danger zone!




