You’re getting at the core of the issue. No other language tried to satisfy both the case of running on an embedded system and general-purpose computing. Async rust tried to, and came up with a solution that is not great for the majority of programmers writing rust.
I wish to God that the rust library devs would admit to this fact - say that async rust should stay for embedded-runtime use cases, but that we shouldn’t be forcing async across the majority of general-purpose computing libraries. It’s just not a pleasant experience to write or read. And it really doesn’t give any performance benefits.
I write reams of async rust for a living, and completely disagree with this characterization. The concurrency primitives in the futures crate are able to elegantly model the large majority of places where concurrency is needed, and they are nicely composable.
More than once, we have wanted to improve the performance of some path and been able to lift the sequential model into a stream, evaluated concurrently with some max buffer size. From there, converting to true parallel execution is just a matter of wrapping the looped futures in Tasks.
Obviously just sprinkling an async on it isn’t going to make anything faster (it just converts your function into a state-machine generator that then needs to be driven to completion). But being able to easily and progressively move code from sequential to concurrent to parallel execution makes for significant performance gains.
I completely disagree. Having to make sure every little function is Send + Sync + 'lifetime even if it doesn't need it is fucking hell. Concurrent code with plain kernel threads is so much easier to write and read.
If you just want to build a normal backend service, you can't escape async libraries. Wrapping the async functions with `block_on` is not ideal - I'd rather just have access to standard sync primitives that don't require me to bring an entire async runtime into the system.
My ultimate point is - I would be happy if async stayed in its own world. But the fact is async has completely polluted the rust library landscape and you can't escape it. I'm working on a project that I hope to show rust users that async isn't needed for performant backend services, and that the code can be written much simpler without it.
You don’t have to make everything Send/Sync if you don’t need to. Use tokio’s local runtime and spawn_local(), or use one of the other async runtimes.
You also don’t need to spawn() futures to await them. Spawn enables parallelism on the multithreaded runtime, holding join handles, etc. If all you need is to execute concurrent code, though, the various combinators and functions in the futures crate let you do so without having hard requirements on Send/Sync. The large majority of the concurrent code I write uses nothing specific from the tokio crate, including spawn.
As is often the case in rust, the compiler is also telling you the correct thing. If you’re using the multithreaded runtime and spawning, your code may execute in another thread, so it has to be Send/Sync, and since the ownership of the future is transferred to the executor, it must also be ‘static.
You literally can't use one of the other async runtimes, because the current state of async/await does not allow library authors to easily write for multiple runtimes - they were written with one runtime in mind, and that is tokio. And if you're pulling in library methods, you're still stuck with the signatures they specify.
All of your arguments are just mental workarounds trying to justify how fucked the rust ecosystem is for traditional backend services.
The project I'm working on is specific to making traditional kernel threads faster (150-200 nanosecond context switches compared to 1500-2000 nanoseconds for normal kernel threads). It requires a user-space scheduler, but you can swap those out without any changes to how you write rust. In my testing, it's not only faster than async rust but also much easier to write. I hope it convinces people like you, who are hell-bent on defending the current state of async rust, that there are better paradigms and we don't have to be locked in to shitty, verbose concurrent code.
You’re moving the goalposts, and seem to have a vested interest in this that I, frankly, don’t.
Send/Sync/static is not needed on tokio’s local runtime, which doesn’t require any adjustments to your libraries.
Passing data between threads requires Send/Sync/static, except for certain cases like scoped threads, so making OS threads faster doesn’t seem to solve that issue like using a local runtime would.
Many async libraries (though certainly not all) are runtime-independent. If your library doesn’t have to spawn, it is easy to write runtime-independent code. I would like to see some spawn traits brought into std to make it easier to write libraries that have to spawn, though.
I’ll always try new ways of doing things, but you are making the assumption that the way you feel is the way everyone feels, and totally dismissing the opinions of those who don’t. It puts me off of whatever solution you might be proposing, since you clearly don’t have the empathy to understand the full range of positions of the people whose problems you’re ostensibly trying to solve.
I’m not trying to convince you the way you feel is wrong, but you are wrong that everyone thinks writing async code is miserable. There are times when it’s hard, or when the compiler emits confusing messages about async closures being not generic enough, but on the whole I enjoy writing async rust, so shoot me.
I haven’t moved any goalposts - I’ve coded async rust and it is miserable compared to normal rust with threads. That has been my point, which is why I started down this project.
My entire goal is to show that the same server coded with pre-async hyper is nicer and more performant than the async rust version built on post-async hyper. I hope to show you in just a few days.
I have some test code that runs a comparison of Hyper pre-async (aka thread-per-request) vs async (via Tokio), and the pre-async version is able to process more requests per second in every scenario (I/O, CPU-heavy tasks, shared memory).
I'll publish my results shortly. I did these as baselines because I'm testing/finishing the User Managed Concurrency Groups (UMCG) proposal to the Linux kernel, which is an extension that provides faster kernel threads (and beats both of them).
The UMCG implementation allows kernel thread context switches to happen in 150-200 nanoseconds, compared to 1500-2000 nanoseconds for normal kernel thread context switches. My goal is to show that if UMCG were merged into the Linux kernel, it would be competitive with async rust without the headache.
I'll have to check my work computer on Monday. It was an 8-CPU virtual machine on an M1 Mac. The UMCG and normal-thread versions were set to 1024 threads on the server; the Tokio version was 2 threads per core. Just off the top of my head - the I/O-bound requests topped out around 40k/second for the Tokio version, 60k/second for the normal hyper version, and 80k/second for the UMCG hyper version.
I'm pretty close to being done - I'm hoping to publish the entire GitHub repository with tests for the community to validate by next week.
UMCG is essentially an open-source version of Google Fibers, which is their internal extension to the Linux kernel for "lightweight" threads. It requires you to build a user-space scheduler, but that allows you to create different types of schedulers. I can't remember which scheduler showed the results above, but I have at least 6 different UMCG schedulers I was testing.
So essentially you get the benefits of something like tokio - different types of schedulers optimized for different use cases - but with the power of kernel threads, which means easy cancellation and easy programming (at least in rust). It's still a Linux thread with an entire 8MB(?) stack size, but from my testing it's far faster than what Tokio can provide, without the headache of async/await programming.
Java 21 is pretty damn nice, 25 will be even nicer.
For your own application code, you don't have to use exceptions; you can write custom Result objects and force callers to pattern match on the types (and you can always wrap library/std exceptions in that result type).
Structured Concurrency looks like a banger of a feature - it's what CompletableFuture should've been.
Virtual threads still need a few more years for most production cases imo, but once they're there, I truly don't see a point in choosing Go over Java for backend web services.
And Java has a non-trivial advantage over Go of being arch-independent. So one can run and debug on an ARM Mac the same deployment artifact that runs on an x86 server.
Plus these days Java's GC has addressed most of the problems that plagued Java on the backend for years. The memory usage is still higher than with Go, simply because more dynamic allocation happens due to the nature of the language, but GC pauses are no longer a significant problem. And if they were, switching to Go would not help; one would need a non-GC language then.
If you're building tools that need to be deployed to machines, Go/Rust with their static binaries make a lot of sense. But for backend web services, it's hard not to go with Java imo.
fwiw - My favorite language is Rust, but Async Rust has ruined it for me.
Yeah, async Rust is needlessly difficult. I can't quite put my finger on it, but having to sift through the docs of 10+ crates definitely left a very sour taste when I had to modernize one tokio 0.1 app to a 1.x one.
I do love Rust a lot as well but most of the time I am finding myself using either Elixir or Golang.
There's an attempt to make Linux kernel thread context switching much faster (150-200ns vs 2500-3000ns), and if that happens, I really hope the rust community pivots from async to normal threading for backend development. If it does, I'll happily use Rust like I used to.
My company is trying to force Kotlin as the default, but I just prefer modern Java tbh. Kotlin is a very nice language, and I'd be fine with writing it, but modern Java just seems like it has "caught up" and even surpassed Kotlin in some features lately.
I'm a researcher in this area if anyone wants to have a proper conversation. I'm 95% convinced that the major powers of the world have recovered crashed/landed non-human intelligence craft.
Before anyone responds to this comment, I would urge you to watch this video of Majority Leader Schumer and rising Republican leader Mike Rounds holding a colloquy on the senate floor to try to pass their UAP Disclosure Act, and ask yourself - why would two senate leaders put their credibility on the line to try to pass a bill that references non-human intelligence 21 times?
> NON-HUMAN INTELLIGENCE.—The term "non-human intelligence" means any sentient intelligent non-human lifeform regardless of nature or ultimate origin that may be presumed responsible for unidentified anomalous phenomena or of which
What part of the passage confirms GP's statement? Again, I can't paste the entire 64-page bill in a hacker news comment. You have to put in some effort and read the bill to figure out what they're talking about.
My guess is they found a room-temperature superconductor that can store incredible amounts of electrical energy, and the quantum drive from https://ivolimited.us/ actually works.
You really think the Senate Majority Leader would risk his reputation on a 60-page bill that references "non-human intelligence" 20+ times as a misdirection? The only misdirection I see is your ignorance.
I'm not going to stop what I'm doing because some closed-minded person told me to.
These politicians are not experts, but the witnesses that have testified under oath are.
Lue Elizondo - a GS-15 officer at the DIA, whose last assignment was running a Special Access Program for the National Security Council.
David Grusch - a GS-15 officer at the NRO and then the NGA, who handled the Presidential Daily Briefing for the NGA (meaning he was cleared into thousands of SAPs to consolidate information and brief the president).
I don't disagree that congress gets lost in conspiracy theories, but almost never do the people peddling those testify under oath in public, or privately to the intelligence committees or ICIG. You should open your mind and take a deeper look than the headlines.
The universe is 13.8 billion years old. The JWST has found galaxies formed just 400 million years after the Big Bang. It took the earth 4.6 billion years to form, and 1 billion years to create life.
Our best theory of the universe, General Relativity, has solutions that allow for faster-than-light travel via some Alcubierre type drive (some [without negative mass](https://arxiv.org/abs/2405.02709)), and even wormholes.
Even with Newtonian-mechanics-style solutions, it's estimated that it would only take Von Neumann self-replicating probes about 100k years to traverse our own galaxy.
Is it really that unlikely that some non-human intelligence potentially *billions* of years more advanced than us found our planet and uses it to study us or for whatever other purposes to them? IMO no.
And if you would do any research beyond just the headlines, you might come to the realization that our government has probably recovered crashed/landed NHI craft.
I'm happy to point you to some not-so-light reading if you want to have a real conversation about this.
A civilisation billions of years more advanced than us should surely be putting out some sort of signal, intentionally or not, that they exist, right? And yet we've never seen any kind of signal like that. Space is incomprehensibly large and empty, and while I also absolutely believe that there is some other kind of life in the universe, the fact that we've never detected any means they've probably never detected us either - so there's no reason they would be coming here.
> the fact that we've never detected any means they've probably never detected us either
I do not think that your conclusion necessarily follows. They could easily be much more advanced than us, just as we are much more advanced than we were 500 years ago.
I am just saying that there could be something much more advanced - whatever you may call it - that could detect us while we cannot detect them.
> Is it really that unlikely that some non-human intelligence potentially billions of years more advanced than us
I think there's a fine-tuning problem with that kind of argument. You have to believe all the steps that make intelligent space-faring life elsewhere come into being, and yet only one other manages to do so. If there were more than one, then it seems to me the same argument about government conspiracies would apply: every single one of them would have to work very hard to hide any evidence of their existence - not just from our mobile phones, but our most advanced telescopes.
I think if there were others close enough to matter, we'd have been colonised by an errant Von Neumann machine by now.
> You have to believe all the steps that make intelligent space-faring life elsewhere come into being
I would argue that it's very self-centered to think we would be the only intelligent life in the universe, let alone our own galaxy. If you're throwing out the entire argument based on that presumption, then this conversation is pointless.
Science is about the optimism to learn what we don't know. There are things in the sky that we can't identify, as Obama has said https://www.youtube.com/watch?v=u1hNYs55sqs We should be open to the idea of all possibilities, rather than pouring cold water on it.
> I think if there were others close enough to matter, we'd have been colonised by an errant Von Neumann machine by now.
As long as we're exploring wild theories, how about this one: aliens smarter than us have colonized earth and are manipulating our information sources so we are not aware. Further, every once in a while they "give" us discoveries to advance our technology in a controlled manner. Perhaps we're captives in what is equivalent to a zoo.
I find that more pleasant to imagine than aggressive/hostile aliens that could probably destroy our civilization in about 17 seconds, though that would have the benefit of solving all of our problems in one fell swoop.
I already thought of that one - bacteria and fungi are their compute substrate for their immaterial cities. (ref: diaspora by Greg Egan, and surface detail by Ian M. Banks).
>it seems to me the same argument about government conspiracies would apply: every single one of them would have to work very hard to hide any evidence of their existence - not just from our mobile phones, but our most advanced telescopes.
I'm posting this because it's crazy to me how the link regarding Jimmy Carter's UFO encounter gets upvoted on HackerNews, but not David Grusch's claims. Grusch is a former NRO/NGA intel officer. He testified under oath to the Intelligence Community Inspector General, both the Senate and House Intel Committees, and a public House Oversight Committee hearing that the USG has recovered crashed/landed non-human intelligence craft. The ICIG referred Grusch's claims to the intel committees as being "credible" and "urgent".
David Grusch is not some random whistleblower we should be ignoring. He was a GS-15 intel officer read into over 2000 special access programs. He handled the presidential daily briefing, which they do not give to just anyone. Take a look at his resume to see how highly cleared he was https://docs.house.gov/meetings/GO/GO06/20230726/116282/HHRG....
And let me just clarify - this is not just one person's claims. His work in the UAP Task Force had 40 people with direct, first hand knowledge of the programs. Some of whom worked on the non-human intelligence (NHI) craft. He had these people testify to the ICIG, providing documentation, imagery, and other evidence.
Listen, I know this sounds insane to most folks. The meat of his claims, beyond the craft, are that factions of the USG have not been properly giving congress(and even some presidents) oversight of these alleged Special Access Programs. The ICIG has most likely referred this case to the justice department, and his claims have started a congressional UAP Caucus in the house.
Take a look at this interview with Marco Rubio, the ranking member of the Senate Intel Committee, as he's talking about the 40 whistleblower's claims https://www.youtube.com/watch?v=m4hmaflNoKU
If you haven't watched the HOC hearing, or any of his other interviews, I highly recommend you do so. Or at the very least, read his opening statement he was giving to the HOC
I'm not saying the claims are true. I'm just saying, the allegations are worth investigating and should not be dismissed outright.
One final note. After Grusch's claims were made public, Senate Majority Leader Chuck Schumer introduced an amendment to the NDAA 2024 titled the "UAP Disclosure Act", which would've set up a presidential panel to declassify and release information that the USG has on the subject. It references the term "non-human intelligence" 27 times. The amendment was gutted by certain House members and unfortunately did not make it through in its initial form, but it's worth the read. https://www.democrats.senate.gov/imo/media/doc/uap_amendment....
There are strict protocols for aircraft flying near the president's aircraft. There's no way in hell the airbase would allow a takeoff while the president is in the air, and he would have been told about it.
I couldn’t even take a video of an enormous search & rescue flare on a chute less than 1 km away at night and have it show up. If that’s legit, those lights must have been blinding
This shape specifically (black triangle with white lights in each corner and an independent round red ball in the middle) has been reported numerous times over the decades, most prominently presented in the Belgian UFO wave 1989-1991 [1][2], including F-16 chases.