Hacker News: userbinator's comments

One who has a true mastery of programming should be able to write any program in any language, or at least see how to do so, because one thinks in terms more abstract than language-specific constructs yet is able to map them to any language.

Relatedly, here's TLS 1.3 in VB6: https://news.ycombinator.com/item?id=35882985


What's interesting is precisely that, despite the equivalence (Turing machines and all that), the difficulty of doing one thing in one language versus another is what makes it prohibitively hard.

If one is "just" translating something that is idiomatic in one language to another where it is not, it might take 100x the lines of code, in a way that is terribly hard to understand.


I've had the unfortunate exercise of copying an API from pure Android Java to TypeScript because "they are similar". It goes against best practices and creates all kinds of weirdness, resulting in a finished product that is almost the same, but far enough away that you could just as well have designed a better API from scratch in a few hours (fortunately the real complexity is in the backend of the code, not the interface). But requirements must be met.

Funnily enough, the conclusion is precisely why the whole idea of "just" giving specs and letting <insert the trendiest marketing term for whatever tries to replace actual experts, developers or otherwise> do the work is risky. If nobody challenges requirements from a manager or client who "just wants to get it done", you end up with actual work done... that is totally inefficient, potentially dangerous, or simply irrelevant.

PS: the <> used here could be the next HN thread I opened, https://news.ycombinator.com/item?id=48047826


But you still need to know the language specific constructs.

I can't magically speak German because I know a load of abstract language theory.

And this can work the opposite way. If I know Smalltalk, and then read that C++ can do OO code, I might think about writing X in C++ with OO as the best model. But then I hit problems when the limitations of OO in C++ become apparent.

In my experience, programming in a language is finding what works in that language, not trying to make the language fit what is in your head.


That's true, but the point the GP was getting at is: if you want to make a program that does xyz, you can do that in any Turing-complete language, provided you have the mathematical reasoning to figure out how to make it happen, whether you do that in an object-oriented manner because the language you're using makes that easy, or with some other paradigm.

Programming in a language is finding what works in that language, you're right, but programming at all is trying to get what's in your head into the computer.


I suppose it depends on how we define 'true mastery'.

Yes, you have to understand the problem and state the solution, and that is a prerequisite to 'mastery', but I don't think that's everything that's required.

And this is just talking about the algorithm level. Is Ed equivalent to Emacs? They both do the same thing. Is a complicated solution that demonstrates your mastery better than a simple solution?


Who else expected the "certain elements in the shot to be out of focus" link to lead to https://en.wikipedia.org/wiki/Bokeh ?

If you don't want any frame stacking, you'd need to use a dedicated camera instead of a smartphone, because a smartphone without HDR isn't viable.

I have an old Android with a 13MP camera (Sony IMX214, 1/3.06") that leaves HDR off by default. I haven't had a need to turn HDR on except if I'm trying to photograph something with regions of extreme contrast.


It wasn't until fairly recently

By "recently" you mean Win95? MSVCRT.DLL has been there for at least that long.



I believe that it technically belongs to Visual C++, not the operating system, but it needs to ship with the OS because the user space binaries are compiled with MSVC.

It's both. Originally Visual C++ binaries built for DLL-based C runtime relied on MSVCRT.DLL and that was installed by the redist. Starting with Visual Studio .NET 2002, separate CRT DLLs starting with MSVCR70.DLL were used. MSVCRT.DLL is now part of Windows to support parts of the OS itself and for compatibility with programs that still use it. I think some versions of MinGW also use MSVCRT.

Current versions of the OS ship with functions in MSVCRT.DLL that weren't in the last VC6 version, such as the updated C++ exception handler (__CxxFrameHandler4). AFAIK, there is no redistributable version of it, it's unique to the OS.


That is for backwards compatibility. The now finally official C standard library distributed with the OS, since Windows 10, is the UCRT (Universal C Runtime).

It was there, but it was mystery meat compared to whatever specific version your binary might need.

They are backwards-compatible. I've written many tiny (few KB) utilities that work from Win95 through Win11, and of course WINE.

Your only option is to sway the "dumb majority" in the other direction.

I'd rather have no ID verification at all. Give them an inch and they'll take a mile.

Same, I've never seen any app or website where an ID registration would make sense. No thanks.

What accounts for the premium price/TB of these extremely high capacity enterprise-targeted drives?

The word "enterprise".


QLC NAND

The datasheet shows 3GB/s sequential write, which for 245.76TB means writing the whole drive takes around 22h45m. Odd that the endurance is specified as "1.0 DWPD", which is almost meaningless given that writing the drive once at full speed takes roughly a full day anyway.
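The arithmetic behind that claim can be checked quickly (figures assumed from the comment: 245.76 TB capacity, 3 GB/s sequential write, decimal units throughout):

```python
# Back-of-the-envelope check: how long does one full-drive write take?
CAPACITY_TB = 245.76
WRITE_GBPS = 3.0

seconds = CAPACITY_TB * 1000 / WRITE_GBPS  # TB -> GB
hours = seconds / 3600
print(f"Full-drive write: {hours:.2f} h")  # ~22.76 h, i.e. about 22h45m
```

At ~22.76 hours per full write, a 1.0 drive-writes-per-day rating sits right at the physical limit of what the interface can sustain, which is the commenter's point.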

At scale, 1.9 times more energy is required for an HDD deployment

...but those HDDs are going to hold data for far more than twice as long. It's especially infuriating to see such secrecy and vagueness around the real endurance/retention characteristics for SSDs as expensive as these.

On the other hand, 60TB of SLC for the same price would probably be a great deal.


Perhaps their usual buyers just care less about retention?

Those drives aren't going to be used for cold storage, and it is basically a guarantee that there will be checksums and some form of redundancy. Who cares whether the data is retained for 10 or for 15 years after writing when you can do a low-priority background scrub of the entire drive once a month, and when there are already mechanisms in place to account for full-drive failure?
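The monthly-scrub argument is easy to sanity-check numerically, assuming the 245.76 TB capacity and the 13 GB/s sequential read figure quoted elsewhere in the thread:

```python
# Rough feasibility check: what fraction of a month does a full scrub cost?
CAPACITY_TB = 245.76
READ_GBPS = 13.0

scrub_hours = CAPACITY_TB * 1000 / READ_GBPS / 3600
month_hours = 30 * 24
print(f"Full scrub: {scrub_hours:.1f} h "
      f"({100 * scrub_hours / month_hours:.1f}% of a 30-day month)")
```

A full-surface read costing on the order of five hours a month is negligible even at low background priority, which supports the argument that multi-year unpowered retention is irrelevant for this class of drive.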


QLC retention is reported to be around 1 year in an unpowered state. I would assume that the drive does a background refresh, though; no idea what effect that has on total drive lifetime. It still means that if you use it for cold storage, it has to be powered.

A drive's write endurance rating is derived at least in part from the JEDEC standard data retention requirements: 1 year at 30C for consumer drives, 3 months at 40C for enterprise drives, IIRC. Thus, a drive that has reached the end of its rated write endurance can be expected to have those retention characteristics. A drive that hasn't been subjected to that much wear will have significantly longer retention.
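The relationship described above can be sketched as a simple lookup; the numbers are the commenter's recollection of the JEDEC figures (offered "IIRC"), so treat them as assumptions rather than the spec text:

```python
# JEDEC-derived end-of-rated-endurance retention, per drive class,
# as recalled in the comment (not verified against the standard).
JEDEC_RETENTION = {
    "consumer":   {"months": 12, "temp_c": 30},
    "enterprise": {"months": 3,  "temp_c": 40},
}

def retention_at_end_of_life(drive_class):
    r = JEDEC_RETENTION[drive_class]
    return f"{r['months']} months at {r['temp_c']}C"

print(retention_at_end_of_life("enterprise"))  # 3 months at 40C
```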

Why is that a problem? Why would you want to use a technology that is unsuitable for cold storage for cold storage? You won't even get the power / IOPS benefit if all the drive does is an infrequent replication of data before being switched off.

What kind of usage do you envision for a 245TB drive with a read speed of 3GB/sec?

I believe it has read speeds of 13GB/s, not 3 (unless you are referring to an equivalent array of 10 HDD). It will almost certainly be used to store training datasets and model weights. Which I assume are good use cases for fast sequential reads.

Right, I misread the spec.

This is an ad, not a spec sheet. The vast majority of the people buying this understand the endurance and retention characteristics of this type of device. It isn't going on Amazon or Best Buy, and the target market knows how to ask the right questions.

You're right. Facebook / Meta has advocated for QLC SSDs https://engineering.fb.com/2025/03/04/data-center-engineerin...

You can trivially modulate flash endurance by tweaking the reported space - the less space you report, the more spares you have.
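The trade-off described above can be sketched with a simplified model: the NAND's total program/erase budget is fixed, so shrinking the reported capacity raises the drive-writes-per-day rating proportionally. All numbers here are illustrative assumptions, and the model ignores the write-amplification improvements that extra spare area also brings:

```python
# Simplified endurance model: fixed raw NAND budget, variable reported size.
RAW_TB = 256.0     # assumed physical NAND on board
PE_CYCLES = 1500   # assumed QLC program/erase budget
YEARS = 5          # assumed warranty period

def dwpd(reported_tb):
    total_writes_tb = RAW_TB * PE_CYCLES
    return total_writes_tb / (reported_tb * YEARS * 365)

for reported in (245.76, 200.0, 128.0):
    print(f"{reported:7.2f} TB reported -> {dwpd(reported):.2f} DWPD")
```

Under these assumptions, halving the reported capacity roughly doubles the DWPD rating, which is exactly the knob the comment describes.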

Conspiracy theory: making the browser bigger makes it harder to run large quantities of headless versions, for all the useful (but anti-Google) things that enables. I suspect this is directly tied to the ongoing ascent of verification laws and other pieces of the drive towards authoritarian dystopia. They're basically DDoS'ing providers of browser-VM services with this.

Not too long ago, someone submitted an AI demo to HN that resulted in a 3.1GB download upon visiting the page: https://news.ycombinator.com/item?id=47823460

It reminds me of the "dialup warnings" common two decades ago on huge pages (often containing many images). Yes, bandwidth and storage have gotten cheaper, but the unwanted waste should still be called out. I'm not even anti-AI, having waited several hours recently to get some local models to experiment with, but that's because I wanted to and made the decision to use that bandwidth.
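To put the dialup comparison in perspective, here is the 3.1 GB page from the linked thread at a few historical connection speeds (speeds in bits per second; decimal GB assumed):

```python
# Download time for a 3.1 GB page at various link speeds.
SIZE_GB = 3.1
speeds_bps = {"56k dialup": 56_000,
              "1 Mbit DSL": 1_000_000,
              "100 Mbit": 100_000_000}

for name, bps in speeds_bps.items():
    seconds = SIZE_GB * 1e9 * 8 / bps
    if seconds > 86400:
        print(f"{name:>11}: {seconds / 86400:.2f} days")
    else:
        print(f"{name:>11}: {seconds / 3600:.2f} hours")
```

On 56k dialup the page would take roughly five days to load, which is why such warnings existed in the first place.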


Or tune the engine correctly. It probably has an off-the-shelf "performance" carb that's set much richer than it should be, and a "full race" cam that only makes sense for a track car, giving horrible fuel economy and actually less low-end power.

My daily driver is roughly as old, has a 400 V8 with a 4-barrel, idles so quietly I've had passengers surprised that the engine was running, and gets around 20-25mpg if I resist the urge to open it up all the way.

