Posting this because I'm not using anything like this or OpenClaw, and I'd like the opinions of people actually using these technologies to understand whether it's good or not.
Totally speculating here, but maybe they provide some sort of LLM as a service and rotate stolen API keys in the background so they don't have to pay anything?
Or they use the LLMs for criminal purposes (like automated social engineering), so the API key can't be traced back to their personal info (though they could also use a local model for that, so I don't know).
It's odd: they shouldn't consider the code their own, yet they should take accountability for it.
Ideally, if I contribute to any codebase, what needs to be judged is the resulting code. Is it up to the project's standards? Does the maintainer have design objections?
What tool you use shouldn't matter, whether it's your IDE or your LLM.
But that also means you should be accountable for it. You shouldn't hide behind "But Claude did this poorly, not me!" — I don't care (in a friendly way); just fix the code if you want to contribute.
The big caveat to this is not wanting AI-generated code for ideological reasons, and well, if that's what you want, you can make your contributors swear in the PR text (or wherever) that they wrote it themselves.
I'm not really sure how to feel about this, but I stand by my "the code is what matters" line.