If you can access AF_ALG on a server you don't need to do shenanigans like that. It's much easier to just find another bug and exploit that one instead.
The copy.fail website is very silly, it is not a special bug. If anyone gets compromised by that vuln their node architecture was broken anyway, patching copy.fail doesn't help.
Yeah you need native code execution, and if you have AF_ALG access there is clearly no sandboxing in place. At that point it's game over on Linux, there are too many bugs. Even if you fix all the known ones in the current kernel, by the time the version with those fixes is qualified and released (not to mention, the machine must reboot), new LPEs have been discovered.
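To make the "AF_ALG access means no sandboxing" point concrete, here's a minimal sketch of my own (not from any exploit; "sha256" is just an arbitrary example algorithm): reaching this surface takes one socket() and one bind(), with no privileges at all.

```c
/* Minimal sketch: any unprivileged process can bind an AF_ALG socket
 * unless a sandbox actively blocks it. No capability is required. */
#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_alg.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_alg sa = {
        .salg_family = AF_ALG,
        .salg_type   = "hash",
        .salg_name   = "sha256",
    };
    int fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
    if (fd < 0) { perror("socket(AF_ALG)"); return 1; }
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("bind");  /* a seccomp filter or missing algorithm fails here */
        close(fd);
        return 1;
    }
    puts("AF_ALG reachable: kernel crypto attack surface exposed to this uid");
    close(fd);
    return 0;
}
```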
Look at the CVE database. Most of those UAFs are LPE. Many of the OOBs and many of the race conditions too. These are fixed in Linus' master but you are running an old kernel.
Then look at the KASAN reports on the syzkaller dashboard. Many of them are LPEs. Many of the WARNs and crashes are revealing an underlying bug that is also an LPE. Most of these never get fixed.
Then try pointing your LLM at the codebase and saying "find an LPE". It will find as many as you want (you will exhaust your tokens long before it stops finding bugs). 99.99% of them will be bogus, so you need a way to evaluate them at scale; currently this is the weakest part of the approach, but we'll get better at it.
I can't actually point you to a list of confirmed LPEs coz the only way they get confirmed is when someone exploits them, but there aren't enough exploit authors to do this for all of them. If inference gets really cheap and someone builds a really good agent harness we might start to see it get automated at some point.
My mind immediately went to chaining this with another recent vulnerability in the Ninja Forms - File Upload plugin [0]
> This makes it possible for unauthenticated attackers to upload arbitrary files on the affected site's server which may make remote code execution possible.
So, upload and execute a script that loads Copy Fail and even if you're only executing as www-data or another restricted user that "can't" sudo -- suddenly, uid=0!
Yes but what I'm saying is that copy.fail is a minor detail in this scenario.
If you are running Ninja Forms you need to run it in its own VM so that if it gets compromised _you don't care if it has uid=0_.
You need to do that regardless of copy.fail. Now that you've patched copy.fail, there are loads and loads of other vulns that can be used the same way.
In what way is it "not a special bug"? It's a publicly known root-from-RCE exploit. Those cannot be a dime a dozen. I'm sure it's especially interesting for any shared hosting services which might be affected and whose patching could be delayed. I could find places running containerized services and exfiltrate secrets from parallel services, no?
What constitutes "special" for you, out of curiosity? Something chaining with a hypervisor exploit?
It's not RCE, it's an LPE in an obscure corner of the kernel attack surface that no sensible application depends on. They are absolutely a dime a dozen.
Even just in AF_ALG there have been several such vulns fixed in 2026 already. Kernel wide probably hundreds. It's true that most of them will be harder to exploit than this one but that just means you need to prompt your AI a bit harder to get an exploit. (To be fair, in a lot of cases it's gonna be hard to escalate privs without crashing the machine).
Ubuntu has userns restrictions now which takes away the main sources of LPEs (random qdiscs, nftables, all that garbage) but there are still huge numbers of these vulns.
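A hedged sketch of the mechanism those restrictions target (my example, assuming a stock kernel with unprivileged user namespaces enabled): one unshare() call hands an unprivileged process full capabilities inside a fresh user+network namespace, which is what exposes the qdisc/nftables setup paths in the first place.

```c
/* Minimal sketch: unshare() makes an unprivileged process "root"
 * (CAP_NET_ADMIN etc.) in a new user+net namespace, putting the
 * qdisc/nftables attack surface within reach. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    if (unshare(CLONE_NEWUSER | CLONE_NEWNET) < 0) {
        perror("unshare");  /* blocked, e.g. by Ubuntu's userns restrictions */
        return 1;
    }
    puts("now \"root\" in a new user+net ns: qdisc/nftables surface reachable");
    return 0;
}
```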
This is why platforms that do native untrusted code execution have extreme sandboxing. Note Android and ChromeOS aren't affected coz they already knew this code was broken and hide it from unpriv workloads.
You can't run untrusted code on Linux without either a very very carefully designed sandboxing layer (like Android/ChromeOS) or virtualization. copy.fail is just one among tens of thousands of reasons for this, and it's a pretty uninteresting one at that.
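As a hedged illustration of one small brick in such a sandboxing layer: a seccomp-BPF filter can make socket(AF_ALG, ...) fail with EPERM. This sketch assumes x86-64 and skips the usual architecture check for brevity; a real sandbox (Android, ChromeOS) is vastly more comprehensive than this.

```c
/* One brick of a sandbox: reject socket(AF_ALG, ...) with EPERM.
 * Assumes x86-64; omits the arch check a production filter needs. */
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/socket.h>   /* AF_ALG */
#include <sys/syscall.h>  /* __NR_socket */

int main(void) {
    struct sock_filter filter[] = {
        /* load the syscall number */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* not socket(2)? jump to ALLOW */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_socket, 0, 3),
        /* load the first argument (address family; low 32 bits) */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, args[0])),
        /* family == AF_ALG? fall through to EPERM, else ALLOW */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AF_ALG, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | 1 /* EPERM */),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0) return 1;
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) < 0) return 1;
    /* ...then exec the untrusted workload; socket(AF_ALG, ...) now fails */
    return 0;
}
```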
What is "special" depends on your usecase but for my job it's mostly about stuff that's exposed to KVM guests. Biggest source of concerning vulns for us is probably vhost. I expect there are also lots of undiscovered and scary vulns in places like virtiofs, vfio, DAX, and wherever we do device passthrough.
> I could find any places running containered services and exfiltrate secrets parallel services, no?
Yes. Regardless of copy.fail. Cloud providers don't do that without a VM layer. (If yours does, you need to switch).
The cope of some people is insane. Why even have UID:GID? All you need is 0:0. I always tell people to run everything as root because there is literally no point.
Well, there's still value in users and namespaces! Just, it's not a strong security boundary.
Also even if it's not strong, it doesn't mean it's entirely worthless. You can't rely on it, but it's usually free and it still buys you time / increases attack cost.
Like, if you leave 100k cash in a car on the street in SF, that's dumb. If you really need to do that for some strange reason, you should hire a security guard to watch your car, because cars are not a good security boundary. BUT, that doesn't mean you would leave the car unlocked just coz someone's watching it!
They're not exactly a dime a dozen but LPE bugs in Linux (and common Linux distros) are easily common enough that nobody sane relies on user isolation as a serious security boundary.
Clouds use VMs as the security barrier, which is also not always 100% perfect, but is much better.
It could be useful as part of an exploit chain but generally once you've got to local code execution it's not going to be difficult to get further.
A "special" bug would be something that defeats a security barrier that people actually use, e.g. something that works remotely, or as you say - a hypervisor hack.
99% is usually the best you can do, so you can only layer multiple defences together; this makes sense as one layer to me.
I have an issue with security layers that are inherently nondeterministic. You can't really reason strongly about what this tool provides as part of a security model.
But also, it's in an area where real security seems extremely hard. I think at some point everyone will have a situation where they wanna give an agent some private information and access to the web. You just can't do that in a way that's deterministically safe. But if there are use cases where making it probabilistically safer is enough to tip the balance, well, fine.
> And to agree with others on this thread, the folks who push for war should 100% be required to participate in them and lead from the front
I agree but I don't think it goes far enough. Leading from the front of the best equipped military in the world doesn't balance your incentives against the misery you are inflicting on the innocent denizens of the poor country you're pointlessly destroying.
There's also the economic destruction back home to balance against. So, those who call for war should be forbidden to privately fund their healthcare and children's education.
Agree. I believe during WW2 the government put rules in place to prevent companies from making too much profit from the war. From what I recall from history class, taxes were raised significantly as well.
War is a mighty economic engine, this cannot be denied. But if we take an entire country to war, then it stands to reason that the entire country should benefit from the spoils (to the extent that there are any).
I may be misunderstanding, but I don't think so. War forces people's hands in terms of having to make progress, because during a war progress can be measured in the number of body bags returning from the front, and the reduction thereof.
Our modern world was born out of scientific advancements made during WW2. Could these same achievements have occurred in peace time? Obviously the answer is yes. However during war, everything becomes accelerated and things that normally would take a long time can happen very quickly.
I agree that paying for scientific progress with human lives is a bad thing.
Not just with human lives but with staggering amounts of forgone economic growth. In a globalised system there's absolutely no way the stimulation of war pays for the destruction and disruption.
Yes; WWII was an economic disaster for huge swaths of the world. The US is pretty much the only industrialized country at the time where it wasn't a complete economic disaster, because it was separated by oceans from nearly all the fighting and destruction.
If there's a shootout in a town that ends up with most people's windows getting shot out, the one town glazier will make money off of this, even though it's a net negative for the town as a whole.
You can use a reverse proxy and still have working app auth; I have set this up via Authelia with the OIDC Jellyfin plugin.
However:
- This is EVEN MORE complex than "just" a reverse proxy.
- I'm not really sure it wins much security, because...
- at least I'm not relying on Jellyfin's built-in auth but I'm now relying on its/the plugin's OIDC implementation to not be completely broken.
- attackers can still access unauthenticated endpoints.
Overall I really wish I could just do dumb proxy auth which would solve all these issues. But I dunno how that would work with authing from random clients like Wii (and more importantly for me, WebOS).
Ha, I had a similar story with Jekyll but my build wasn't containerised. At some point it stopped being compatible with the latest [something. Ruby? Gems? I don't care, just build my fucking HTML templates please] so I just migrated to Hugo.
I've stuck around on Hugo for quite some time and haven't had any such issues yet, but now I've also wrapped the build in Nix. So yeah, I'll do the same - if it ever stops working I'll just pin the build inputs at the last version that worked.
I _think_ the Hugo folks seem to understand the "just build my fucking HTML templates" principle. I.e. for most use cases the job of a static site generator is simple enough that breaking compatibility is literally never justified. So hopefully pinning won't be necessary.
Just last week updating Hugo broke my templates. That's happening every few months. They deprecate and then remove or rename template variables like crazy.
Yeah, I really don't understand why some developers have an extreme compulsion to constantly deprecate and rename things like this, causing massive upgrade headaches to users.
In addition to Hugo, it happens constantly in GoReleaser. In both cases, they're excellent tools, but the unending renaming spree is just awful. Weirdly, both are in the Go ecosystem, which generally values backwards compatibility.
Damn, that's interesting; I have not run into that at all in about 4 years.
Maybe it's just that my site is extremely dumb? I forked an "ultra minimal" theme and deleted most of its code. So perhaps I just use such a tiny subset of the template system that I haven't been affected.
The article itself acknowledges that the headline is bullshit:
> The change isn't about the core operating system becoming resource-hungry. Instead, it reflects the way people use computers today—multiple browser tabs, web apps, and multitasking workflows
Basically the change reflects the fact that, at this level of analysis (how much RAM do I need in my consumer PC), the OS is irrelevant these days. If you use a web browser then that will dominate your resource requirements and there's nothing Linux can do about that.
If this really works there would seem to be a lot of alpha in running the expensive model in something like caveman mode, and then "decompressing" into normal mode with a cheap model.
I don't think it would be fundamentally very surprising if something like this works, it seems like the natural extension to tokenisation. It also seems like the natural path towards "neuralese" where tokens no longer need to correspond to units of human language.
But it can't work, surely: we see models get larger and larger, and larger models perform better. <Thinking> made such huge improvements because it gives the language model more text to process. Cavemanising (lossy compression) the output does it to the input as well, since the output gets fed back in as context.
But some tokens are not really needed? This is probably bad because it's mismatched with the training set, but if you trained a model on a dataset with all prepositions removed (or whatever caveman speak is), would you see a performance degradation compared to the same model trained on the same dataset without the caveman translation?
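As a toy illustration of what that lossy "caveman" translation could look like (the stopword list here is made up for the example; real prompt compression would presumably be learned rather than rule-based):

```c
/* Toy "caveman" compression: drop common function words, keep content
 * words. Purely illustrative; the stopword list is arbitrary. */
#include <stdio.h>
#include <string.h>

static const char *stopwords[] = {
    "the", "a", "an", "of", "to", "in", "on", "is", "are"
};

static int is_stopword(const char *w) {
    for (size_t i = 0; i < sizeof(stopwords) / sizeof(*stopwords); i++)
        if (strcmp(w, stopwords[i]) == 0)
            return 1;
    return 0;
}

int main(void) {
    char text[] = "the model is trained on a dataset of tokens";
    for (char *w = strtok(text, " "); w != NULL; w = strtok(NULL, " "))
        if (!is_stopword(w))
            printf("%s ", w);
    printf("\n");  /* prints: model trained dataset tokens */
    return 0;
}
```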
There was actually a post on here a few months back where someone claiming robotics expertise posted exactly what you asked for: a list of things they didn't think robots were close to being able to do.
IIRC the list included folding textiles, and soon after a video was released of a robot folding textiles, but it was very janky; it's not clear to me whether it proved the original article wrong or was more of an "exception that proves the rule".
Personally, I have my washing machine in the basement; you need a key to access it (and I can't change that, it's a shared space in a building I don't own). I'm always thinking about that. A robot that can do my laundry and open locked doors doesn't seem to be on the horizon yet.
Trust me, plenty of millionaires are doing their laundry in a shared Waschküche (laundry room) in Zürich!
Current Chinese dev bots cost like $15k. Vapourware startups are claiming they'll ship their humanoid robot product at $20k. I'd pay that in a heartbeat for a robot that could actually do my laundry.
(But more impactfully surely there are loads of Californians with a utility room in their garage, or a basement that can't be accessed from inside the house)
(Also... I just realised, if there were robots that could do laundry, but couldn't navigate to my basement, I would move. I think laundry bots would genuinely be that desirable)
The companies servicing that echelon would replace staff as soon as they could. In an apartment building, the owner would put one in the shared laundry room and charge tenants an optional fee to use it.