
> Enabling new business models

This is true, but only for the bigger players. The nature of hardware still fundamentally favors scale and centralization. Every hyperscaler eventually reaches a size where developing in-house CPU talent is just straight-up better (Qcom and Ventana + Nuvia, Meta and Rivos, Google's been building its own team, Nvidia and Vera-Rubin; God help Microsoft though). This does not bode well for RISC-V companies, who are just being used as a stepping stone. See Anthropic, which currently licenses but is rumored to be developing its own in-house talent [1].

> Extensibility powers technology innovation

>> While this flexibility could cause problems for the software ecosystem...

"While" is doing some incredibly heavy lifting. It is not enough to be able to run Ubuntu, as may suffice for embedded applications; the code also has to be fast. Thus, there are many software optimizations hardcoded for a specific CPU, let alone for ARM or x86 in general. For RISC-V? Good luck coding up every permutation of extensions that exists, and even if it's all lumped under RVA23, good luck parsing through 100 different "performance optimization manuals" from 100 different companies.
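To make the permutation problem concrete, here is a small sketch (the extension list is an illustrative assumption, not an official profile): with k independent optional extensions, a tuned library can in principle face up to 2^k distinct hardware feature profiles.

```python
from itertools import combinations

# Hypothetical set of optional RISC-V extensions a tuned kernel might
# care about; the exact names here are illustrative.
exts = ["V", "Zba", "Zbb", "Zbs", "Zicond", "Zvbb"]

# Every subset of extensions is a distinct hardware profile a library
# could encounter in the wild.
profiles = [set(c) for r in range(len(exts) + 1)
                   for c in combinations(exts, r)]
print(len(profiles))  # 2**6 = 64 profiles for just six extensions
```

Profiles like RVA23 collapse many of these subsets into one baseline, but they don't remove per-vendor microarchitectural differences within that baseline.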

> How mature is the software ecosystem?

10 years ago, when RISC-V was invented, the founders said 20 years. 10 years later, I say 30 years.

The nature of hardware, too, is that the competition (ARM) is not standing still either. The reason for ARM's dominance now is the failure of Intel and the strong-arming by Apple.

I have worked in and on RISC-V chips for a number of years, and while I still believe it is the theoretical end state, my estimates just keep getting longer and longer.

[1]: https://www.reuters.com/business/anthropic-weighs-building-i...




> good luck parsing through 100 different "performance optimization manuals" from 100 different companies.

Imo this is pretty misguided. If you're writing above the assembly level, you can read Intel's performance optimization manual, and that code will also be really fast on AMD (or even Apple/Graviton). At the assembly level, compilers need to know a bit more, but those are mostly small details, and if they get roughly the right metrics, the code they produce is pretty good.


> good luck parsing through 100 different "performance optimization manuals" from 100 different companies

This would be a problem for any ISA with multiple/many vendors.



