
After yesterday's outage they admitted that their Elasticsearch index for issues/PRs lost data.

They seem to have changed the primary data source for the Issues and Pull Requests tabs (without filters applied) from the underlying database to the Elasticsearch index. A side effect is a noticeable delay between a state change of an issue/PR and the update in the UI. And as seen today, the two can get out of sync; apparently they even had data loss in the index.
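The delay is inherent to that design: writes land in the database first and the search index is updated asynchronously, so any view served from the index trails the source of truth. A minimal sketch of the general pattern (my assumption about the shape of the pipeline, not GitHub's actual code):

```python
import queue

db = {}                  # source of truth, updated synchronously
search_index = {}        # read path for the Issues/PRs tabs, updated asynchronously
pending = queue.Queue()  # indexing pipeline

def close_issue(issue_id):
    db[issue_id] = "closed"  # the database knows immediately
    pending.put(issue_id)    # the index only finds out later

def index_worker():
    while True:
        issue_id = pending.get()               # until this runs, reads from the
        search_index[issue_id] = db[issue_id]  # index show stale state; dropped
                                               # items mean permanent divergence
```

Any hiccup in that pipeline shows up as exactly the stale-state and data-loss symptoms people are describing.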

I would really like to know their reasoning for making that change. I can totally imagine that they wanted to "simplify" so the UI uses only a single data source instead of two.

As a user it's incredibly annoying to have a delay between issue/pr state changes and the search index picking it up.


Yeah, I have been noticing weird things with Issues and PRs, including outdated state, for months now.

When the outage happened yesterday, I sort of figured it was whatever I had been noticing finally coming to a head.


Is "migration to azure" or "microsoft acquisition" a cause or a symptom?

I'm wondering to what extent the natural life cycle of SaaS products comes down to this: the company grows, the old guard with good technical taste moves on, bad technical decisions get made, quality declines, users move on.


I was surprised to find that this sentence

> Plan prices aren’t changing

did not continue with an em-dash followed by something profound that is changing.

Plan prices aren't changing -- the value you get out of them is.


Besides TPUs, they're also planning for 960,000 Rubin GPUs [1], each of which can do 33 teraflops of FP64, so over 30 classical exaflops; with emulation it could be more than 100 exaflops.

[1] https://blogs.nvidia.com/blog/google-cloud-agentic-physical-...
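(Taking the quoted per-GPU figure at face value, the arithmetic is: 960,000 GPUs × 33 FP64 teraflops ≈ 31.7 FP64 exaflops, hence "over 30".)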


Another one I noticed is "or maybe I hallucinated that" instead of "or maybe I dreamed that". Researchers will be horrified to learn that even talk about LLMs affects people's vocabulary.


Oh no, LLMs threaten our individuality ⸻ what will we do?!


Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".


I've started letting some run-on sentences remain because it feels closer to how humans think and usually write. Letting typos go seems silly though.


I definitely think it is. It will be glorious. We will focus more on content than on mere aesthetics as people try to signal that they are not LLMs.


I feel like having to signal that you're a human detracts from the content side of things. Proper spelling and grammar, good style, etc. are there to help you convey your ideas more accurately. Resorting to a stream-of-consciousness style of unrefined writing makes it apparent that you're a human, but the downside is that your text is bad.


Style is entirely subjective, and not every text is looking for a refined reader.


Oh no, I have had enough of people with quirky (i.e. cringey) writing on the internet. It started with those who refused to use their shift key and it's quickly devolving into something that makes you shiver when you read it. (Not to mention how easy it is to use a system prompt to make an AI write in whatever style you like.)


I see loads of LLM articles where the model has been prompted to never capitalise, avoid full stops, pepper in spelling mistakes, etc. It sucks.


Flaws become aesthetics all the time. People wore fake butt bandages to follow the Sun King's fashion. Ugly as sin, but still an aesthetic.


lol


Kurzgesagt typically does STEM-focused videos... they've got a new channel, "After Dark", which focuses on history and historical figures. Their first one: "Kurzgesagt After Dark: The Final Days of Louis XIV" - https://youtu.be/bIwX4QuL90k?si=9WLbzKqxo08KCDum&t=564

> And though the operation was done in secret, a new fashion sweeps the court: Bandages wrapped around everyone’s buttocks.


When writing letters of recommendation now, I write in a more human tone to avoid sounding like a bot that opens with a line of explanation. Not an error in the sense you mean, but an error in tone for a letter of recommendation, certainly.


I don't know but capitalisation seems to have gone down the shitter.


Maybe it is.

Just like handmade items are popular for their imperfections.


An awful lot of the stuff in the "handmade" aesthetic is made by machine and factory too, and I suspect a similar thing will happen to any popular writing aesthetic that attempts to avoid being automated away.

Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proofread my writing before posting.

It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.

But don't be mistaken in thinking that those mistakes make it better; they just make it mine.



I mean, yes? I am more likely to read and trust something that is not written or co-written by AI.

I want real humans giving real human opinions, not an AI giving its best guess at the most "rewarding" weighted opinion.


My experience with RISC-V so far is that the chips are not much faster than QEMU emulation. In other words, it's very slow.


That has been the case so far but is changing this year.

The SpacemiT K3 is faster than QEMU. Much faster chips are expected to be released over the next few months.

I mean things like the Milk-V Pioneer were already faster but expensive.

One thing that has been frustrating about RISC-V is that several companies close to releasing decent chips have been bought, and then those chips never appeared (Ventana, Rivos, etc.). That, and US sanctions (e.g. the Sophgo SG2380).


One thing I observed is that RVV code is usually slower in QEMU.


Of course it is. Emulating parallel operations on 4 or 8 or 16 or 32 elements one at a time using scalar instructions is expected to be slow.
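To make that concrete, here is roughly what a guest vector add costs the emulator, sketched in Python purely for illustration (QEMU's TCG does this in C, but the element-at-a-time shape is the same when it can't map guest vectors onto host vectors):

```python
def emulate_vadd_vv(vd, vs1, vs2, vl):
    """One guest RVV add becomes vl interpreted scalar operations."""
    for i in range(vl):
        vd[i] = (vs1[i] + vs2[i]) & 0xFFFFFFFF  # 32-bit lane wrap-around

# Real hardware retires all vl lanes in a single instruction; the emulator
# pays per-element interpretation overhead for every guest vector instruction.
```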


I've added it to one of my repos, and yes, it's slower than using emulation.

Particularly for my use case, Go cross compilation, QEMU and binfmt work really well together.

Still, for some things, it's nice to test on actual hardware.

Here's a workflow so you can see both approaches working: https://github.com/ncruces/wasm2go/blob/main/.github/workflo...


The arrival of the first RVA23 chips, which is expected next month, will change the status quo.

Besides RVA23 compliance, these are dramatically faster than earlier chips, enough for most people's everyday computing needs, e.g. web browsing, video decoding and such. The K3 gets close to Raspberry Pi 5 per-core performance, but with more cores, better peripherals, and up to 32GB of RAM, although unfortunately current RAM prices are no good.

And it'll only get better from there, as other, much faster, RVA23 chips like Tenstorrent Alastor ship later this year.


s/Alastor/Atlantis/g.

Alastor is something else: a core from Tenstorrent that is considerably smaller than Ascalon.


Oftentimes slow is fine, when the work is parallel and the hardware is cheap.


RISC-V microcontrollers are inexpensive but “application” processors will be expensive until volumes increase.

Performance will get “good enough” over the next 2 years. Prices will drop after that.


I should have replied differently.

“Good enough” here was meant to mean good enough to sell more, and therefore to drop prices.

That is already happening. It just needs to happen more. And I think it will. If you don’t find the RISC-V boards of 24 months from now “good enough”, that is ok with me. I just want them to get cheaper.

The other thing that is happening on that front is that microcontrollers are getting more powerful and staying inexpensive. You can get RISC-V microcontrollers today with similar performance to the original Raspberry Pi and with things like WiFi, Bluetooth, and USB. They are crazy cheap and there are many projects for which they are now “good enough”. And, of course, they keep getting better.


That the "good enough" SoCs will be arriving "over the next 2 years" is what the RISC-V advocates have told us for quite a few years now.


Well, part of "good enough" is features. The RVA23 profile was ratified a few months ago and the first chips are appearing now. That brings RISC-V to feature parity with x86-64 and ARM, including things like vector instructions and virtualization. Ubuntu 26.04 is compiled to require RVA23. So, the RISC-V advocates got that part right. Of course, the other side of "good enough" is performance.

The SpacemiT K3 has the multi-core performance of a 2019 MacBook Air and higher AI performance than an M4. That is better multi-core than an RK3588. If it were less expensive, the K3 would already be good enough for many people.

Alibaba has the C930 which is faster than the K3. We will see if it gets released to the rest of us.

Tenstorrent will release a chip in a few months that is twice as fast as the K3.

The recently announced C950 is supposed to be even faster, but it is a year or more away.

Of course, “good enough” is subjective but my statement was based on the above.

But you are right that there have been some false starts.

The SG2380 was just as fast as the K3 and was ready to go two years ago. TSMC refused to manufacture it over US sanctions.

Ventana was about to release a very fast RISC-V chip but Qualcomm bought them.

Rivos was very close to releasing a RISC-V GPU but Meta bought them.

But even without these high-end chips, RISC-V is enjoying great success. It is taking over the microcontroller space. And billions of RISC-V cores are shipping.


> The RVA23 profile was ratified a few months ago

If you're like me, you're suffering the typical time dilation that comes with getting old.

For everybody else, this was 18 months ago.


which, sadly, isn't the case right now


It is the case for embedded microcontrollers. An ESP32-C-series chip is about as cheap as you can get a WiFi controller, and it includes one or more RISC-V cores that can run custom software. The Raspberry Pi Pico and Milk-V Duo are both a few dollars and include both ARM and RISC-V cores, with all but the cheapest Duo able to run Linux.


All Duos run Linux.


Some of that could be related to the ISA, but I'm hoping that it's just the fact that the current implementations aren't mature yet.

The vast majority of the ecosystem was focused on uCs until very recently. So it'll take time for the application processors to be competitive.


The RISC-V ISA can be fast.

Tenstorrent Ascalon, expected later this year, is expected to reach AMD Ryzen 5 speeds. Tenstorrent hopes to achieve Apple Silicon speeds in a few years.

The SpacemiT K3 is about half as fast as Ascalon and available in April. K3 is 3-4 times faster than the K1 (previous generation).

This should give you an idea about how fast RISC-V is improving.


I'd be pretty surprised if Ascalon actually hits Zen 5 perf (I'm guessing more like Zen 2/3 for most real-world workloads). CPU design is really hard, and no one makes a perfect CPU in their first real generation with customers. Tenstorrent has a good team, but even the "simple" things like compilers won't be ready to give them peak performance for a few years.


>I'd be pretty surprised if Ascalon actually hits Zen 5 perf

Certainly not in the Atlantis SoC, due to the older fab node used. Zen 2-3 territory IPC is the expectation, with lower clocks than those chips actually shipped with.

By the time they have the necessary scale to use the best fabs, they'll be tapping out something newer than the Ascalon that went into Atlantis.

Tenstorrent expects to reach parity with the best x86 and arm chips by 2028.


All RISC ISAs are basically the same thing as far as compiler optimisation is concerned, and there are 40 years of work invested in that already.

I can't see any reason why the father of Zen and the designer of the M1 couldn't make a core for the simpler RISC-V ISA with basically the same (or a better) µarch as the M1.


Assuming AMD, Intel, ARM, and Apple haven't released new CPUs in a few years; otherwise the gap stays the same as today.


Same experience here.

At least for SBCs: I've bought a few Orange Pi RV2s and R2s to use as builder nodes, and in some cases they are slower than the same thing running in QEMU with buildx, or just plain QEMU.


You're glossing over the fact that mathematics uses only one token per variable (`x = ...`), whereas software engineering best practices demand an excessive number of tokens per variable for clarity.
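A contrived example of the gap (the engineering-side names are hypothetical):

```python
# Math notation: one short token per variable.
m = 9.109e-31    # electron rest mass, kg
c = 299_792_458  # speed of light, m/s
E = m * c**2

# "Best practice" naming: the same computation, several tokens per identifier.
ELECTRON_REST_MASS_KG = 9.109e-31
SPEED_OF_LIGHT_M_PER_S = 299_792_458
rest_mass_energy_joules = ELECTRON_REST_MASS_KG * SPEED_OF_LIGHT_M_PER_S**2
```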


It's also a pretty silly thing to equate difficulty with tokens. We all know line counts don't tell you much, and it shows in their own example.

Even if you did have math-like tokenisation, refactoring a thousand lines of "X=..." to "Y=..." isn't a difficult problem, even though it touches at least a thousand tokens. And if you could come up with E=mc^2 in a thousand tokens, that would not make the two tasks remotely comparable in difficulty.


The other day someone commented on this site that in the age of agentic coding "maintaining a fork is really not that serious of an endeavor anymore", and that's probably the case. I'm sure continuously rebasing "revert birthday field" can be fully automated.

Then the only thing remaining is convincing a critical mass that development now happens over at `Jeffrey-Sardina/systemd` on GitHub.
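A minimal sketch of that automation (the branch and remote names are hypothetical, and it assumes the revert keeps rebasing cleanly):

```python
import subprocess

def rebase_revert(branch="revert-birthday-field", upstream="upstream/main"):
    """Keep the fork's revert rebased on top of upstream; run from cron or CI."""
    def git(*args):
        subprocess.run(["git", *args], check=True)

    git("fetch", "upstream")
    git("checkout", branch)
    git("rebase", upstream)  # raises on conflict, leaving it to a human (or an agent)
    git("push", "--force-with-lease", "origin", branch)
```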


IMO, the benefits aren't from getting mass adoption of this fork but, at least ostensibly, the opposite: if it were to become "the" systemd, it would then face scrutiny and potential legal threat. This way, the maintainers can be in compliance, the legislators (if any are paying attention) can be superficially satisfied, and people can still avoid the antipattern. It's the "brown paper bag" speech from The Wire, basically.


At some point people will realize that not having an optional data field might not be worth the effort of indefinitely rebasing a revert and recompiling, since they could achieve the same for their user account by simply never setting the field.

