- Caught multiple memory safety issues in a nice deterministic way, so designing the object model was easier than it would have been otherwise.
- C++ with accurate GC is a really great programming model. I feel like it speeds me up by 1.5x relative to normal C++, and maybe 1.2x relative to other GC’d languages (because C++’s APIs are so rich and its lambdas, templates, and class system are so mature).
But I’m biased in multiple ways:
- I made Fil-C++
- I’ve been programming in C++ for like 35ish years now
Are you using malloc + GC in preference to smart pointers, and if so, why? I thought Fil-C was just C, not C++?
It doesn't seem like that is necessarily a performance win, especially since you could always use a smart pointer's raw pointer (preferably const) in a performance-critical path.
I’m curious. Given the overheads of Fil-C++, does it actually make sense to use it for greenfield projects? I like that Fil-C fills a gap in securing old legacy codebases, I’m just not sure I understand it for greenfield projects like this other than you happen to know C++ really well.
It made sense because I was able to move very quickly, and once perf became a problem I could move to Yolo-C++ without a full rewrite.
> happen to know C++ really well
That’s my bias, yeah. But C++ is good for more than just perf. If you need access to low-level APIs, or libraries that happen to be exposed as C/C++ APIs, or you need good support for dynamic linking and separate compilation, then C++ (or C) is a great choice.
Hmmm… I did 20+ years of C++ coding, and since I’ve been doing Rust I haven’t seen any of these issues. It has trivial integration with C/C++ libraries (often with wrappers already written), often better native libraries to substitute those C++ deps wholesale, and separate compilation out of the box. It has dynamic linking if you really need it, via the C ABI or even rlib, though I’ll grant the latter is not as mature.
The syntax and ownership rules can take some getting used to, but after doing it I start to wonder how I ever enjoyed the masochism of the rule-of-5 magic incantation that no one else ever followed, and of writing the class definition twice. Plus the language gaining complexity constantly without ever paying back tech debt or solving real problems.
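For readers who haven't run into it, the "rule of 5" being complained about is boilerplate like the following, a deliberately minimal sketch of a class that owns a raw resource:

```cpp
#include <cstddef>
#include <cstring>
#include <utility>

// Once a class owns a raw resource, C++ expects you to hand-write (or
// explicitly delete) all five special members, or the compiler-generated
// defaults will double-free.
class Buffer {
    char* data_ = nullptr;
    std::size_t size_ = 0;
public:
    explicit Buffer(std::size_t n) : data_(new char[n]()), size_(n) {}
    ~Buffer() { delete[] data_; }                        // 1. destructor
    Buffer(const Buffer& o) : Buffer(o.size_) {          // 2. copy constructor
        std::memcpy(data_, o.data_, size_);
    }
    Buffer(Buffer&& o) noexcept { swap(*this, o); }      // 3. move constructor
    Buffer& operator=(Buffer o) noexcept {               // 4+5. copy and move
        swap(*this, o);                                  //      assignment via
        return *this;                                    //      copy-and-swap
    }
    friend void swap(Buffer& a, Buffer& b) noexcept {
        std::swap(a.data_, b.data_);
        std::swap(a.size_, b.size_);
    }
    std::size_t size() const { return size_; }
};
```

In Rust, moves and drops come from the language and cloning is a one-line derive, which is the contrast being drawn.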
I'm not trying to convince you to stay (I work for neither anymore!), just wanted to note that you can technically request a waiver. I'm not sure how this works in practice though. Like, if you want to leave Athena and move to something on-premise, is that enough to get a waiver for just that workload? Maybe!
Edit: I also didn't follow this at the time, but the AWS wording suggests that the "EU Data Act" is also involved.
This doesn't actually work as advertised. I attempted free data egress from AWS in December. It took them 31 days to respond to my initial ticket, at which point they gave me a multi-page questionnaire to determine eligibility and told me I could not begin DTO until 60 days had passed from approval of the questionnaire.
By the time I was allowed "free egress," my cumulative S3 storage charges over the prior 100 days would have roughly matched what egress would have cost if I had just paid for it at the start.
I'm in the US so the EU Data Act protections don't apply.
Have you tried to use the DTO? I did. They make you fill in a form saying you'll migrate all services (despite the blog post saying that isn't necessary), and then they take up to 12 weeks to make a decision. In my case they rejected it on a formality after 2 weeks and said to try again (the timer starts again).
So in my case that would have been 14 weeks plus the time to migrate away. The egress costs are equivalent to around 17 weeks of storage cost. So you save around 1c/GB if they don't find some reason to reject it.
The EU Data Act forbids cloud switching charges; that's why they made these changes (while presenting them as if they cared about customers being charged for switching away):
Yeah, but that missing context is super important.
If they want it for local dev work, that's pretty different from wanting a high-performance air gapped object store without rewriting clients.
They seem to know what they're doing (having complained about a methodology problem in MinIO), and yet don't personally want to throw their hat in the ring, nor maybe pay anyone...
"Designing AI for Disruptive Science" is a bit market-ey, but "AI Risks 'Hypernormal' Science" is just a trimmed section heading "Current AI Training Risks Hypernormal Science".
Ooh, that's a worthy challenge. Of course, I can imagine getting enough data on all of those cities and deciding to launch everywhere else but not Boston "because your roads are garbage and you all drive like you're impaired 24/7" :-)
That's not how you should measure "worth". In that world, you'd have a P/E ratio of 1. Comparing to a bond, it would be like expecting to get paid the face amount in a single year. Many people are quite happy with 5-10% interest as a risky benchmark, so 10-20 P/E isn't wild. That puts the market cap for tech itself at 10-20T as a reasonable baseline.