jmalicki's comments | Hacker News

Same for SciPy (At least the last time I dove into it around 10 years ago).

A lot of the C code you see for numerics is a straight-up f2c run, checked in as-is.


SQS is particularly hard; there are other options, but it has fewer direct competitors than S3.

Agreed - I use AWS at work and try to keep the services we use to a minimum. S3 and DynamoDB are ones that somewhat lock us in, but the way we use them, they are replaceable (we're not relying on any niche features). The different queue services would definitely be harder to swap out, though.

To some degree, IMO, "big data" is still a mindset when a normal SQL query might take a day to process your data. Some tech doesn't scale to the data size for all use cases, and you need different solutions.

> How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.

"Good judgement comes from experience. Experience comes from bad judgement."

AI just shortens the cycle without needing to type out syntax, so you get even more iterations, faster, and learn the lessons more quickly.

Some do not learn from that experience. They were never going to learn without AI either.


> It's not by writing syntax that you get there.

Writing syntax is still an important part of the experience. It is valuable because it requires you to spend time immersed in the nuts and bolts that hold software together. I'd compare it to cooking: if you have an assistant or a machine do everything and you never actually touch a knife or stir a pot, you'll lose your touch. But there is also something valuable about covering more ground and the additional experience that brings.


Totally! I mean, the same could be said of painstakingly hand-coding assembly language - that today's developers haven't done so is what leads us to bloated Electron apps, so there is something lost!

But the larger-scale system design is stronger than ever. Today's distributed systems, version control (including branching, stacked PRs, etc.), VMs/containers, idempotency, multi-master ACID databases - all of these things were probably never achievable in a world where the best devs had to spend their time poring over assembly language every day. Losing that skill gave them more time to build other ones!


> It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.

I’m not sure there’s any data to support this statement. Maybe if you hedge with “not by syntax alone”.

I am aware of studies showing that the act of physically writing letters improves retention.

I would not be surprised if this extended to physically typing out your curly braces, naming functions, etc.


Why are you focusing on syntax so much? There's more to writing code than that.

That's why students learn how to write pseudo-code before picking up a programming language. Learning how to think through implementing a solution to a problem is extremely important. It's exactly this experience that helps engineers grow their scope and understand bigger, more complex systems.

There's also the tactical components of using programming languages. The only way to know when to use one type of data structure over another, or to debug tricky language-specific behavior is _to actually have used that language._

And it's exactly this knowledge that's being threatened by LLMs given how they are implemented today.


Data structures are not tactical components of programming languages.

E.g. when I am writing SQL, I need to be thinking about the underlying data structures too - even though I am not specifying the execution path.
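To make the point concrete, here's a minimal sketch using Python's built-in sqlite3 (the table and index names are made up for illustration): the same query gets a different execution plan depending on whether an index-backed structure exists, so the underlying data structures matter even though SQL never mentions them.

```python
import sqlite3

# In-memory DB; table/index names are hypothetical, for illustration only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")

# Without an index, the planner has no choice but a full table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT qty FROM trades WHERE symbol = 'AAPL'"
).fetchall()
print(plan[0][3])  # e.g. "SCAN trades"

# Add a B-tree index; the same query now becomes an index search.
con.execute("CREATE INDEX idx_symbol ON trades(symbol)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT qty FROM trades WHERE symbol = 'AAPL'"
).fetchall()
print(plan[0][3])  # e.g. "SEARCH trades USING INDEX idx_symbol (symbol=?)"
```

The exact plan strings vary by SQLite version, but the scan-vs-search distinction is the point: the query text didn't change, only the data structure underneath it.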


You can lead a horse to water, but you can't make it drink.

> Sure, there's a very gradual, strictly limited, tightly controlled rollout

By city and ODD (operational design domain), but if you are in the area and your source and destination for the route are both within the ODD, they are just as available as an Uber.


At the very least you should give it a non-prod copy of the database, not direct access to the DB actively powering production right now.

I've done work for a hedge fund where the DB ran directly on the manager's desktop. I worked with my local copy and sent an update script, and he had a second copy he ran on to verify.

Even with humans you shouldn't be working directly against the prod DB in these cases!


Yes, I just think there's a sane way to do things that is not "never let LLM agents do things".

For dev/prod staging though, there's that other story on HN right now of an LLM agent that maneuvered its way to prod credentials and destroyed prod. And the backups went along with it. I'm paranoid enough to think backups in this use case mean out-of-band, uncorrelated storage.


There is literally no excuse. The fact that there is any resistance to this let alone from multiple people terrifies me.

I just think there's more nuance to it. Some things have an implicit RTO/RPO/SLA of say a day. Risk is also correlated to recovery and rollback. And there's levels of LLMs out there.

Surely in the Venn diagram of things, there's a slot where it's okay to let a Claude Opus agent run on a process with good backups/recovery? Where taking the risk of a 1-hour restore job is worth the LLM agent's velocity?

For extra paranoia, surely even Opus/Mythos can't figure out how to destroy log level backups to immutable storage.


The only nuance I can see is: does the data matter at all? If it does, you shouldn't do this. If it doesn't, then who cares - and why even put it in a database?

If the cyclist was doored by an exiting passenger, wouldn't that imply it should further block the bike lane to increase safety, since it is not safe for a bike to pass while a passenger is exiting? If the car door opening is what injures the cyclist, it wasn't really in the bike lane very far.

So it is equal to what neuroscientists and psychologists have proven about human beings!

How was it proven?

It does have access to its thoughts. This is literally what thinking models do. They write out thoughts to a scratch pad (which you can see!) and use that as part of the prompt.
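Mechanically, the loop looks something like this toy sketch. `generate` here is a hypothetical stand-in for a model call, not a real API; the point is just that the emitted "thoughts" are concatenated back into the context that produces the final answer, so the final completion attends to them like any other input tokens.

```python
# Toy sketch of a "thinking" loop. generate() is a hypothetical stand-in
# for an LLM call; here it just returns canned text so the flow is visible.

def generate(prompt: str) -> str:
    # Stand-in for an actual model call.
    if "<scratchpad>" in prompt and "</scratchpad>" in prompt:
        return "final answer (conditioned on the scratchpad above)"
    return "step 1: restate the problem\nstep 2: work through cases"

def answer_with_thinking(question: str) -> str:
    # First pass: elicit visible reasoning text.
    thoughts = generate(f"Think step by step:\n{question}")
    # The scratchpad is appended to the context, so the second pass
    # literally has the thought tokens in its input.
    prompt = f"{question}\n<scratchpad>\n{thoughts}\n</scratchpad>\nAnswer:"
    return generate(prompt)

result = answer_with_thinking("Is 17 prime?")
```

Whether those visible thoughts faithfully reflect the model's internal computation is a separate question (taken up in the replies below), but structurally they are part of the prompt.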

It's important to be aware that while those "thoughts" can be a useful aid for human understanding they don't seem to reliably reflect what's going on under the hood. There are various academic papers on the matter or you can closely inspect the traces of a more logically oriented question for yourself and spot impossible inconsistencies.

It doesn’t mean that these “thoughts” influenced their final decision the way they would in humans. An LLM will tell you a lot of things it “considered” and its final output might still be completely independent of that.

Its output quite literally is not independent, as the "thinking tokens" are attended to by the attention mechanism.

They do not in fact do that. The ‘thoughts’ are not a chain of logic.

You have a fundamental misunderstanding of what the model is doing. It's not your fault, though; you're buying into the advertising of how it works.

Those are a fancy progress bar generated by a micro model; it's just UI.

Luckily since I met this guy named Claude most of that complexity has gone away.

A while back, when agents got hyped and I was looking into the whole "give it a VM / Docker container" approach, I realized the safest and simplest option was just to give it its own machine.

Then I realized giving it root on a $3 VPS is functionally equivalent. If it blows it up, you just reset the VM.

It sounds bad but I can't see an actual difference.

