Async ruined Rust for me, even though I write exactly the kind of highly concurrent servers it's supposed to be perfectly suited to. It degrades API surfaces to the worst case, Send + Sync + 'static, because APIs have to be prepared to run on multithreaded executors, and this infects your other Rust types and APIs because each of these async edges is effectively a black hole for the borrow checker.
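To make the bound concrete without pulling in tokio: std::thread::spawn carries the same F: Send + 'static shape that tokio::spawn imposes (tokio additionally requires the future itself to be Send). A minimal std-only sketch, with sum_on_another_thread being a made-up helper name:

```rust
use std::thread;

// std::thread::spawn has the bound F: Send + 'static -- the same shape as
// tokio::spawn, which additionally requires the future itself to be Send.
// That bound is why you can't hold a plain borrow across the spawn point:
// you must move owned (or Arc-wrapped) data in.
fn sum_on_another_thread(data: Vec<i32>) -> i32 {
    let handle = thread::spawn(move || data.iter().sum());
    handle.join().unwrap()
}

fn main() {
    assert_eq!(sum_on_another_thread(vec![1, 2, 3]), 6);
}
```

Replace the closure with an async block and the executor with a work-stealing runtime and you get exactly the "everything must be Send + 'static" pressure described above.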
Don't get me started on how you need to move "blocking" work to separate thread pools, including any work that has the potential to take some CPU time, not even necessarily IO. I get it, but it's another significant papercut, and your tail latency can be destroyed if you missed even one CPU-bound algorithm.
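The "move it to a thread pool" dance can be sketched with std alone; spawn_blocking_like below is a hypothetical stand-in for what runtimes such as tokio expose as spawn_blocking, not the real API:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for a runtime's spawn_blocking: ship a CPU-bound
// closure to its own thread and hand back a channel for the result, so the
// executor's worker threads are never stalled by the computation.
fn spawn_blocking_like<T: Send + 'static>(
    f: impl FnOnce() -> T + Send + 'static,
) -> mpsc::Receiver<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(f());
    });
    rx
}

fn main() {
    // A CPU-bound loop that, run inline in an async task, would hog a
    // worker thread and wreck tail latency for every task sharing it.
    let rx = spawn_blocking_like(|| (1u64..=1_000_000).sum::<u64>());
    assert_eq!(rx.recv().unwrap(), 500_000_500_000);
}
```

The papercut is that nothing forces you to do this: forget it for one hot loop and the cost shows up only under load, as tail latency.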
These may have been the right choices for Rust specifically, but they impair quality of life way too much in the course of normal work. A few years ago, I had hope this would all trend down, but instead it seems to have asymptoted to a miserable plateau.
I make use of all of those, but still prefer avoiding Async, for the typical coloring reason. I can integrate the above things into a code base with low friction; Async poses a compatibility barrier.
But yes, once you go dining on other people's crates you definitely get the impression that you have to, because tokio gets its fingerprints all over everything.
But also there are non-thread stealing runtimes that don't require Send/Sync on the Future. Just nobody uses them.
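Why a single-threaded runtime can drop the Send bound is easy to show with a toy executor. This is a minimal busy-polling sketch, nothing like a production runtime, but note that block_on takes any F: Future with no Send bound, so an Rc (which is !Send) is fine inside the future:

```rust
use std::future::Future;
use std::pin::pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: enough to poll futures in a toy single-threaded executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// No Send bound on F: the future never crosses a thread, so it may hold
// Rc, RefCell, raw borrows of thread-local state, and so on.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let shared = Rc::new(41); // Rc is !Send
    assert_eq!(block_on(async move { *shared + 1 }), 42);
}
```

A work-stealing runtime can migrate a task between threads at any .await, which is precisely why it must demand Send on the whole future; pin a task to one thread and the requirement evaporates.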
This is dead true to me. I write systems code. Rust is supposed to be a systems language. Because I do work that is effectively always written as if it's in kernel mode and distributed over the network, everything I do is async by default. And the ergonomics around async are just miserable: littered with half-finished implementations and definitions (e.g. async streams, async traits), and a motley bunch of libraries written against particular modes of tokio that don't necessarily play well together. It's a giant bodge that would be excusable if this weren't supposed to be part of the core applicability of the language. Not to mention that the whole borrow-checker business becomes largely useless, so don't forget to implicitly add Arc, Mutex, and Pin to your list of wrapper type signatures.
What bothers me the most is that, aside from async, I _really_ do like the language and appreciate what it's trying to do. Otherwise I would just turn away from the whole mess. This one just landed really badly.
I completely agree. I really like rust, but all the async stuff is so half baked. It’s shocking coming from the JavaScript ecosystem. Async feels - comparatively - incredibly simple in JS. Even async streams are simple in JS and they work great. And I don’t have to wait 10 years for the linker to process all of tokio for a 1 line change.
I think they mean tokio::spawn’s signature forces libraries that want to be easy to use with it to expose send+sync APIs (and thus use Arc+Mutex internally)
If that's the kind of UX you prefer, please consider filing a feature request against your git UI of choice. My point is that git itself already has the core capability, and how convenient it is to use usually depends on your editor. (e.g. in vim, dd to cut a line and p to paste it in a new position is a very quick way to reorder)
And my point is that all this 'core capability' stuff is not relevant to the discussion of good UI, similarly the fact that GitHub has Pull Requests doesn't help when it's bad UI that needs "stack" reinventing.
Case in point:
> dd to cut a line and p to paste it in a new position is a very quick way to reorder
It isn't quick; you're just sweeping the whole issue under the rug. First, you need a whole separate interface, but more importantly, this interface is very primitive: you see close to no context, only some commit names, so it's not quick to decide what to move and where, because the content those decisions depend on lives somewhere else. Sure, you could add some vim plugin that expands the todo list with per-commit info (what, you want to view the diff for all 3 commits you selected and dd'ed? Tough luck, you don't see those lines anymore! And even if you did, that's not this plugin), but then it's not your "core" git `--interactive` that provides the convenience.
Like I said, if you prefer an integrated graphical UI, you can file feature requests against the one you prefer. What git itself does makes a lot of sense for the canonical CLI tool to do, though even then you can propose or prototype changes if you have ideas. This is how projects like jj started in the first place.
I only agree if you have a bounded dataset size that you know will never grow. If it can grow in future (and if you're not sure, you should assume it can), not only will many data structures and algorithms scale poorly along the way, they will also grow to become the bottleneck. By the time the system no longer meets requirements and you get a trouble ticket, you're under time pressure to develop, qualify, and deploy a new solution. You're much more likely to introduce regressions when doing this under time pressure.
If you've been monitoring properly, you buy yourself time before it becomes a problem as such, but in my experience most developers who don't anticipate load scaling also don't monitor properly.
I've seen a "senior software engineer with 20 years of industry experience" put code into production that, only 2 years after initial deployment, needed 30 minute timeouts for an HTTP response. That is not a typo: 30 minutes. I had to take over and rewrite their "simple" code to stop the VP-level escalations our org received because of this engineering philosophy.
> You're much more likely to encounter regressions when doing this under time pressure.
There is nothing to suggest you should wait to optimize under pressure, only that you should optimize only after you have measured. Benchmark tests are still best written during the development cycle, not while running hot in production.
Starting with the naive solution helps quickly ensure that your API is sensible and that your testing/benchmarking is in good shape before you start poking at the hard bits where you are much more likely to screw things up, all while offering a baseline score to prove that your optimizations are actually necessary and an improvement.
> Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants.
I get where he's coming from, but I've seen people get this very wrong in practice. They use an algorithm that's indeed faster for small n, which doesn't matter because anything was going to be fast enough for small n, meanwhile their algorithm is so slow for large n that it ends up becoming a production crisis just a year later. They prematurely optimized after all, but for an n that did not need optimization, while prematurely pessimizing for an n that ultimately did need optimization.
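The trap has a textbook shape: a structure with great constants and bad asymptotics versus one with setup cost and flat scaling. A small illustrative sketch (the specific choice of Vec vs. HashSet is my example, not from the thread):

```rust
use std::collections::HashSet;

fn main() {
    let n: u32 = 10_000;
    let vec: Vec<u32> = (0..n).collect();
    let set: HashSet<u32> = vec.iter().copied().collect();

    // Same answers, very different scaling: Vec::contains is O(n) per
    // query, HashSet::contains is O(1) amortized. For small n either is
    // plenty fast; the trap is picking the "fast for small n" structure
    // on data that later grows by orders of magnitude.
    for probe in [0, n / 2, n - 1, n + 1] {
        assert_eq!(vec.contains(&probe), set.contains(&probe));
    }
}
```

Both pass every correctness test on day one, which is exactly why the linear scan survives review and only surfaces as a crisis once n has grown.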
My best professional relationships are between people who are confident enough to take direct feedback and appreciate it rather than resent it.
However, my worst professional relationships are with people who will rebuke your feedback whether you Crocker it or not. If you're direct, they'll say you should have been more diplomatic about it, but if you're diplomatic, they'll say you're being dishonest and should have been direct. There is no right way to approach it, these people will always find a way to criticize the delivery, and to delegitimize the feedback because of it.
This seems as good a thread as any to mention that the gzhttp package in klauspost/compress for Go now supports zstd on both server handlers and client transports. Strangely this was added in a patch version instead of a minor version despite both expanding the API surface and changing default behavior.
About the versioning, glad you spotted it anyway. There isn't as much use of the gzhttp package compared to the other ones, so the bar is a bit higher for that one.
Also making good progress on getting a slimmer version of zstd into the stdlib and improving the stdlib deflate.
Yeah, I make it a habit to read the changelogs of every update to every direct dependency. I was anticipating this change for years, thanks for doing it!
I think the big difference is that if you just want to optimize for some objective, it's usually very clear how to do that from Apple's options, so there's not much research to be done. It can still be challenging to choose what's the best value when it's your own money, but at least you know what you're getting, and the quality hasn't been a concern for years.
"Former Googlers" were probably used to using protobuf so they could get from a function call straight out to a struct of the right schema. It's one level of abstraction higher and near-universal in Google, especially in internal-to-internal communication edges.
I don't think it's a strong hiring signal if they weren't already familiar with APIs for (de)serialization in between, because if they're worth anything then they'll just pick that up from documentation and be done with it.