
Do hacks like “read prompt.md, and follow its instructions. When you’re done, read it again and follow its instructions.” help? And then you have some background process appending to the file to keep it warm, and you just keep writing there?
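The loop described above can be sketched roughly like this. This is a hypothetical illustration, not a real tool: `run_agent` is a stub standing in for whatever actually invokes the model, and the second `echo` stands in for the background writer process.

```shell
# Stub for the actual agent invocation (hypothetical).
run_agent() { echo "following: $1"; }

echo "initial instructions" > prompt.md
echo "appended task" >> prompt.md   # stands in for the background writer

# Re-read the file and follow it until nothing is left.
while [ -s prompt.md ]; do
  run_agent "$(cat prompt.md)"
  : > prompt.md   # clear it; the background writer would refill it to keep things warm
done
```

In a real setup the background writer keeps the file non-empty, so the loop never exits on its own.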



You could do that. I was just trying to say that if you make your original prompt complete enough, and you have well-defined success criteria, you can tell it to keep going until they are met.

Agreed - my experience mirrors this.

> "Fix the following compile errors" -> one-shot attempt, then it stops.

> "Fix the following compile errors. When done, test your work and continue iterating until the build passes without error" -> same cost, but it gets the job done.
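That "keep iterating until the success criterion is met" pattern is just a retry loop. A minimal sketch, with `run_agent` as a hypothetical stand-in for the agent call (here it simply pretends the build passes on the third attempt):

```shell
# Hypothetical stub: returns success (exit 0) once the "build passes",
# here hard-coded to succeed on attempt 3 for illustration.
run_agent() {
  echo "attempt $1"
  [ "$1" -ge 3 ]
}

attempt=1
until run_agent "$attempt"; do
  attempt=$((attempt + 1))
done
echo "done after $attempt attempts"
```

The success criterion (exit status of a build or test command) is what makes the loop terminate; without a well-defined one, the agent has no reason to keep going.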


There is a limit on how much Copilot can do in one request. It's pretty generous, but after some time VS Code will say "this request is taking very long, do you want to continue", and that counts as a separate request.

> but after some time vscode will say "this request is taking very long, do you want to continue" and that would count as a separate request

I don't think that's true. In VS Code, that's also configurable via the chat.agent.maxRequests setting.
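For reference, that setting lives in the user or workspace settings.json (VS Code's settings files are JSONC, so comments are allowed). The value shown here is illustrative, not a confirmed default:

```jsonc
{
  // Upper bound on the number of requests the Copilot agent
  // may make in a single session before asking to continue.
  "chat.agent.maxRequests": 25
}
```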

There was absurd latency in the Copilot Opus 4.6 model on the 1st and 2nd of April, though, which led to lots of my requests timing out with nothing to show.


> chat.agent.maxRequests

"Maximum number of requests that copilot can make using agents"

I don't get how this setting is relevant?



