Yet there are few quantified benchmarks, few examples, few written-up discussions, and few technical documents to serve as a fixed reference.
I'm not saying everything needs to be a bullet-point presentation, but in the era of LLMs helping to plug the gaps with this stuff, "look at the mailing list" or "watch IRC" isn't much better than anecdote.
Again, recovering data is laudable. Hell, it might even be impressive to compare the situation to equivalents on other filesystems and discuss why this is better than X. But a simple "we get the data back, unlike Y" just reads as Y-bashing unless there are metrics, or clear technical reasons why you're superior.
I _want_ bcachefs to be better, for numerous reasons. Everyone wins from a better product. But realistically, getting there means telling some people that they need to wait.
Frankly, if there is a need to support users who demand mainline access to the latest and greatest _NOW_, adopt/appoint a supported OS/distro and roll your own nightly packages into a simple repo. If you have to rush upstream because a user can't cope with "pull your kernel sources from https://....", they can't cope with compiling it correctly either, so there's little (if anything) to be gained from constantly rushing into mainline next week...
Benchmarks? I hope you mean detailed writeups on robustness, because performance is not a consideration yet.
I agree that more thoughtful analysis would be helpful, but I have to work with what I've got :)
One of the recent pull request threads had a user talking about how bcachefs development is being done "the old way", from the earlier days of Linux; less "structure", less process, so that we can move quickly and - critically - work effectively with users.
I liked that comparison, and I think that's a big part of why bcachefs has had the success it's had with users (it's a proven recipe! Linux did displace everything else, after all). And on top of that, we're applying engineering best practices that have advanced significantly since then: automated testing, a codebase that heavily uses assertions (there's a lot I've said elsewhere about how to effectively use assertions; it's not just preconditions/postconditions), runtime debugging, and more. It started too early to be written in Rust; that's really the only thing I'd change :)
People just need to be patient - this stuff takes time. The core design was done years ago, the on-disk format was frozen in 6.15, and now we're in a pretty hard freeze, doing nothing but fixing bugs. The development process has been working well; it's been shaping up fast.
> Benchmarks? I hope you mean detailed writeups on robustness, because performance is not a consideration yet.
I don't think there is ambiguity. Either you have some way to objectively corroborate your personal claims, or you don't. Those are called benchmarks. Performance is one specific type of benchmark, but it's not the only one.
Making assertions and claims without benchmarks is not a confidence- or reputation-builder.