With this (CVE-2026-41940) and copy.fail (CVE-2026-3143), it must be an exciting time in the shared hosting business right now… Glad I've been out of it for a long time.
When discarding storage, I do a random pass (even if the drive has always been part of an encrypted-at-rest arrangement, if only out of habit), then a zero sweep. Then it gets a filesystem created and filled with many copies of a few cat photos/videos¹² to give anyone running something like photorec a treat, and finally the partition table is emptied.
--------
[1] With filenames to suggest a bumper collection of photo/video backups from several people's phones/cameras, with some porn accidentally mixed in.
[2] If I'm in an evil mood the “treat” filesystem is filled with shock images and Rick Astley instead.
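The sequence above can be sketched as a short shell script. To keep it harmless and runnable without root it operates on a scratch image file rather than a real device; `DEV` would point at something like `/dev/sdX` in practice, and the "fill with decoy files" step is only indicated, since mounting the image to copy photos in would need root.

```shell
#!/bin/sh
set -eu

# Stand-in "drive": a 64 MiB scratch image instead of a real device.
DEV=scratch.img
dd if=/dev/zero of="$DEV" bs=1M count=64 status=none

# 1. Random pass (habit, even for encrypted-at-rest drives).
dd if=/dev/urandom of="$DEV" bs=1M count=64 conv=notrunc status=none

# 2. Zero sweep.
dd if=/dev/zero of="$DEV" bs=1M count=64 conv=notrunc status=none

# 3. Create a filesystem; on a real drive you would now mount it and
#    fill it with the decoy photo/video collection.
mkfs.ext4 -q -F "$DEV"

# 4. Finally, blank the partition-table region (MBR plus the GPT
#    header/entry sectors) so the disk presents as empty.
dd if=/dev/zero of="$DEV" bs=512 count=34 conv=notrunc status=none
```

Note the random pass comes before the zero sweep: finishing with zeros (and then a filesystem) is what makes the drive look mundane rather than conspicuously scrubbed.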
Certainly not in the case of asking it to do something you'd be slow at because you are unfamiliar. If you are not familiar enough with the system, how are you confident that what the LLM has produced is valid and complete? IMO the people saying LLMs make them 10x faster were either very bad to start with (like me!) or are not properly looking at the results before throwing them over the wall.
And how do you know if that is the case or the person/team using the LLMs is one of the good ones?
This is the crux of the problem. LLMs make me significantly faster at writing code I was mediocre or bad at. But when I use it to write code in domains I have more knowledge in I see design and correctness problems all over the place and actively fix them and it slows down my output.
Speed is seductive.
The bar isn't "this is a known good contributor". It's "this is a known good contributor working in a space they know, with a track record of actually checking and thinking about LLM output before submitting it." That's a much higher bar, and I don't see how you can approve people against it on an organization-wide basis.
> LLMs make me significantly faster at writing code I was mediocre or bad at. But when I use it to write code in domains I have more knowledge in I see design and correctness problems all over the place and actively fix them and it slows down my output.
> Legal professional here. This is NOT a replacement for proper legal AI assistants (e.g. Westlaw, in my jurisdiction). As far as I can tell, this is just a wrapper around regular LLMs i.e. nothing that you couldn't achieve yourself with the right prompting.
I'd not use generative AI for anything but a cursory check anyway⁰. Even if it is trained on clean up-to-date data rather than all the wrong information that is out there, it could still give a wrong answer and I have no leg to stand on if I rely upon it. At least if I pay a human and they trust the LLM too much, I'll hopefully have some call to pursue them for giving bad advice when it bites me.
--------
[0] Or at all… But even if I wasn't someone actively avoiding LLMs, the point would still stand
Call me an old crusty Luddite if you will, after all you'd not be wrong, but…
I feel that if I can't work something out without asking a generative ML model, then I probably don't understand it well enough to properly assess the generated answer, and if I didn't understand the documentation well enough in the first place then “verify it against the documentation” is not a suitable answer, so I probably shouldn't be self-hosting that system on the open network.
It is quite irritating that the existence of generative models is apparently becoming an acceptable excuse for inadequate documentation. Rather than suggesting that I ask Copilot when the Azure documentation is lacking, perhaps MS should ask Copilot to generate some better documentation (and have their human domain experts review it for correctness) so we have good documentation to work from. It strikes me that them using a bunch of LLM crunching power up-front is likely to be more efficient than a great many of us each spending smaller amounts of resources (many of us asking the same questions) at the point of consumption.
> because writing software doesn’t require GitHub.
If your workflow doesn't need the features that have had reliability problems over recent times (which includes some of the basic collaborative features), is GitHub even the right tool for your task? If not, then your judgement of others for complaining about the issues is presumptuous to the point of being somewhat obnoxious.
The post being replied to essentially said “I don't use the features that have been regularly broken in recent times, or where the features I do use [core git] were broken it luckily didn't affect me, so anyone thinking of leaving has a mental problem”.
Or to paraphrase the old joke:
Q: How many programmers does it take to change a lightbulb?
A: The sunlight through the windows here is working fine, if you can't see where you are that must be a “you” problem.
That ridiculous bit of “modern” slang… that has been in use for a few hundred years?
Not a word I use much myself except when referring to "yappy little dogs", but it is definitely common among the generation above me and the one above them.
I think it's pretty obvious that it's being used differently here. And in a way that is annoying enough to me to guarantee that I don't make it past that word.