Hacker News | skydhash's comments

I’m still very into tech, just not into tech products from the big companies. I use OpenBSD as a daily driver and I like reading its code when I want to understand how something works. This has led to a renewed interest in electronics, both digital and embedded. I also took some time to read “Operating Systems: Three Easy Pieces”; it’s quite nice to know that, except for performance reasons, I could use decades-old computers and be OK.

I don’t know where the fear of breaking changes in deps comes from, but most good projects try to keep their API stable. Even fast-evolving platforms like the Android and iOS SDKs do.

It comes from trying to use Python apps you found on GitHub before uv tool install was a thing

In the Python ecosystem, making software with reproducibility in mind was a thing before the advent of uv. Some earlier options include Pipenv and Poetry. I was already using Pipenv some six years ago to achieve that, and later switched to Poetry.
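For instance, Poetry pins versions through a spec like this (a hypothetical pyproject.toml fragment; the project name and constraints are made up), from which `poetry lock` generates an exact-version poetry.lock that `poetry install` reproduces on any machine:

```toml
[tool.poetry]
name = "myapp"
version = "0.1.0"
description = ""
authors = ["Someone <someone@example.org>"]

[tool.poetry.dependencies]
python = "^3.8"
requests = "^2.25"
```

Pipenv does the same dance with a Pipfile and Pipfile.lock; the point is that the lock file, not requirements.txt, is what makes the install reproducible.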

I think devs who didn't care back then also won't care in the future, and will still be running around with a requirements.txt file in 10 years.


There’s also the “prepaid” alternative, especially if you’re skittish about budgets. You top up your account with $10, and when you run low (maybe by setting an alert at around $8), you can add an extra $5 to make it to the end without interruption.

Would you say that a display and a printer are a perfect painter because they can render images? And a speaker is a very good musician because they can produce sound?

The LLM’s task is to produce a string of words according to an internal model trained on texts written by humans (and now generated by other LLMs). This is not intelligence.


Okay, but why isn't it "intelligence"? What part of the definition does it fail? What would convince you that you're wrong?

I wouldn’t say it’s a general definition, but the consensus (in my opinion) is that intelligence is being able to define problems (not just experience them), discern the root cause, and then solve them.

Where it fails is generally the first step. It’s kinda like the old saying “you have to ask the right question”. In all problem-solving matters, defining the problem is the first step. It may not be the hardest (we have problems that are well defined but unresolved), but not being able to do it is often a clear indication of not being able to do the rest.

> What would convince you that you're wrong?

Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the problem) and they can either go solve it or provide a sound plan to make the issue disappear. Issue here refers to an unpleasant or frustrating situation.

Until then, I see them as tools. Often to speed up my writing pace (generic code and generic presentation), or as a weird database where what goes in has a high probability of coming back out.


> Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the problem) and they can either go solve it or provide a sound plan to make the issue disappear.

I don't know which LLMs you are using, but frontier models do this regularly for me in programming.


Without prodding it along and giving it “hints”? And monitoring it like a baby trying their first steps? If yes, please give me the name of the model so I can try it too.

Yes, mostly without those things. I regularly use Claude Opus 4.6/4.7, Gemini 3.1 Pro and GPT-5.4/5.5. For diagnosing and planning, I always use the highest thinking setting, perhaps with the exception of GPT, where xHigh is pretty costly and slow, so I tend to use High unless the problem is really hard. After the plan is done, for implementation I often use cheaper models, like Sonnet 4.6.

There’s still a lot of room for both, because they are just two perspectives on the same thing: solving a problem for someone.

Product is when you see things as the one who has the problem and design the solution in a way that is usable. Technical is when you shift to how the solution can be implemented and then balance tradeoffs (mostly costs in time and money).

While the code is valuable (it is the solution), building it is quite easy once you have good knowledge of both sides.

The issue with AI is not in its capabilities, but in people rushing to accept the first version while there are still unknowns in the project. And then changes cost almost as much as redoing the project properly.


Most good programmers are good at writing. If you’re capable of simultaneously writing instructions for a dumb abstract machine and making those instructions understandable to humans, you’re clearly good at expressing at least technical ideas.

Yeah, never had a problem with explaining to AI what I want from it. That doesn't mean AI always follows what I tell it to do ...

Which AI are you using?

> I'm still holding out hope for distributed and federated git forges.

Do you know that you can just send a patch via email (assuming you're not using the Gmail web client)? You can even save the diff on some hosting site and send the link via any text medium.
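Concretely, the flow looks like this (a throwaway demo; the addresses are made up, and `git send-email` needs git's SMTP settings configured, or you can just attach/upload the file and share a link):

```shell
# Throwaway demo of the email patch flow.
cd "$(mktemp -d)"
git init -q src && cd src
git config user.email you@example.org && git config user.name you
echo base > f && git add f && git commit -qm "base"
git clone -q . ../dst                        # stand-in for the other machine
echo change >> f && git add f && git commit -qm "add change"
git format-patch -1 HEAD -o ../patches       # one mbox-style .patch per commit
# git send-email --to=maintainer@example.org ../patches/*.patch
# On the receiving side, apply it with authorship and message intact:
cd ../dst
git config user.email them@example.org && git config user.name them
git am ../patches/0001-add-change.patch
```

`git am` keeps the original author and commit message, which is what makes the emailed patch a first-class contribution rather than a pasted diff.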


I say this as someone who actually ran mailservers for about 25 years, who can telnet to port 25 and type SMTP to send an email, and who is hugely fond of plaintext: I'd rather quit coding than move to that workflow. I loathe every bit of the pipeline of getting a clean patch from machine A to machine B, where I control at most one of them, and having it come out the other side with the same SHA256 digest. I don't look down on people who prefer it: to each their own! But I'll never in a million years understand it. Say what you will about the GitHub-style PR process, and there's plenty to say about it!, but there's a reason that devs outside LKML and the *BSD mailing lists pretty much immediately leapt onto GitHub the moment it became widely known. It was a revelation.

In this workflow, I don't think it's meant to have the same SHA1 digest. It's a workflow that's very much designed for a handful of core contributors (who have direct repository access) and gatekept one-off patches.

Some other accepted git workflows, like rebasing onto master, or even adding a "committed-by" or "signed-off-by", don't preserve SHA1 hashes either, and it seems you don't really need that property outside of closely collaborating cliques.
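That non-preservation is easy to see in a throwaway repo: rewriting any commit metadata, such as adding a Signed-off-by trailer, produces a new commit hash even when the tree is untouched (a minimal sketch):

```shell
# Throwaway demo: rewriting commit metadata changes the commit hash
# even though the tree (the actual content) stays identical.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email dev@example.org && git config user.name dev
echo hello > file && git add file && git commit -qm "add file"
before=$(git rev-parse HEAD)
git commit -q --amend --no-edit --signoff    # append a Signed-off-by trailer
test "$before" != "$(git rev-parse HEAD)"                                  # new hash...
test "$(git rev-parse "$before^{tree}")" = "$(git rev-parse "HEAD^{tree}")"  # ...same tree
```

The hash covers the message, author, committer, and parents, not just the content, so any of these workflows necessarily rewrites it.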


I get your point and maybe my tone was snarky (not a native speaker). But why would you want an exact reproduction on the other side? The diff format is human-readable for a reason, so slight errors can be fixed quite easily (if they do happen). Extracting patches from a well-configured MUA can be done quickly too.

I've been impacted once: an action that failed to start (a PR check), then the merge button on that PR having no effect. Thankfully there was no urgency. It's a bit distressing because GitHub is kinda the engineering hub of the company. We do have copies of the codebase on our computers and can launch builds from there, but we have a process for a reason, and bypassing it is hacky.

Maybe, but for some of us, the peace of mind comes from stability and minimal friction with our tools.

Whenever I touch my config, it's because I got frustrated with one operation and am trying to see if it can be done faster. If you use your computer like a toaster, then you wouldn't care that much about power usage. But for me it's a creative lab, and I don't want a generic cubicle.


If you want collaboration for your team, then a small VM with Forgejo (if you need PRs) is enough. It can be behind a VPN if you do not want to bother with securing it against the whole internet.

If you want to make your repos public, you could use cgit and the like.
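For the cgit route, the server-side setup is mostly a small config file; a minimal sketch might look like this (hypothetical paths and repo name; see cgitrc(5) for the actual key list):

```
# /etc/cgitrc
virtual-root=/
repo.url=myrepo
repo.path=/srv/git/myrepo.git
repo.desc=Example repository
```

Pair it with any web server that can run the cgit CGI, and clones still go over plain git/HTTP(S), so there is nothing to keep in sync with a database.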

