Hacker News | chrishill89's comments

> It's very widely remarked that the Git CLI is pretty miserable, and as soon as a better (so I hear) alternative comes along they suddenly realise and start improving it... This happens all the time in software.

This command was implemented by a single (but prolific) contributor. It was his own idea.


Yeah it’s a direct inspiration.

The git-restore(1) implementation looks to be about 35 lines of code, plus a little more complexity for some common functions that apparently needed to be factored out.

For a dedicated "restore" it's worth it to me... (who will not maintain it)


At the hidden cost of having to educate millions of users on how git actually operates once they find they can't restore a file

Neither of these two commands reflects how Git "actually operates" any more than the other.

How do you figure? Are you discarding the semantics of how people invoke git? If so why advocate for "restore" to begin with?

I don’t know what "the semantics of how people invoke git" means.

These two commands operate on the same level of abstraction. And they should be equally powerful, which means that whichever you choose to learn will be able to serve all of your restore-content needs. That's what I mean.

Of course there is always the pedagogic cost of having two similar commands.
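To make the "same level of abstraction" point concrete, here is a throwaway-repo sketch (file and repo names are made up for the demo) showing that `git restore <file>` and the older `git checkout -- <file>` discard worktree changes in exactly the same way:

```shell
set -eu
dir=$(mktemp -d)
cd "$dir"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name You
echo original > notes.txt
git add notes.txt
git commit -qm 'add notes'

echo scribble > notes.txt   # clobber the worktree copy
git restore notes.txt       # new-style: restore from the index
cat notes.txt               # back to "original"

echo scribble > notes.txt
git checkout -- notes.txt   # old-style: same effect
cat notes.txt               # back to "original" again
```

Whichever spelling you learn, the underlying operation is the same.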


Well, surely people who use git and who also know English will think of restoring something. You want to restore what, from when? Git offers no concept of time, and in fact believing that it does will hamstring your efforts to use it. That's the cost. Why cater to this concept of before and after when it undermines usage?

Didn't Go propose opt-out telemetry but then the community said no?

Compilers and whatnot seem to suffer from the same problem that programs like git(1) do. Once you've put it out there in the world you have no idea if someone will still use some corner of it thirty years from now.


Git relatively recently got an `--i-still-use-this` option for two deprecated commands that you have to pass if you want to use them. The error you get tells you about the option, and that you should "please email us here" if you really are unable to figure out an alternative.

I guess that's the price of regular and non-invasive software.


Let's say that this is obsolete for professional programming. Can't us hobbyists be left alone to our hobby by the LLM demoralization patrol?

Not all software is developed by one software organization.

Programs to manage “stacks of patches” go back decades. There might be hundreds of patches that have accumulated over the years, all rebased on the upstream repository. The upstream maintainer might be someone you barely know, or someone you haven’t managed to get a response from. But you have your changes in your fork and you need to maintain them yourself until upstream accepts them (if they ever call back).
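The day-to-day loop of carrying such a stack is simple even without a dedicated tool. A minimal sketch with two local repos standing in for "upstream" and "your fork" (all repo, file, and branch names here are hypothetical):

```shell
set -eu
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the upstream project.
git init -q -b main upstream
(cd upstream &&
  git config user.email up@example.com && git config user.name Up &&
  echo v1 > core.c && git add core.c && git commit -qm 'core v1')

# Your fork, carrying a local patch on top.
git clone -q upstream fork && cd fork
git config user.email me@example.com
git config user.name Me
echo fix > mypatch.c && git add mypatch.c && git commit -qm 'my patch 1'

# Upstream moves on without you...
(cd ../upstream && echo v2 > core.c && git commit -qam 'core v2')

# ...so you rebase your stack onto their new tip.
git fetch -q origin
git rebase -q origin/main
```

After the rebase, `core.c` has upstream's latest content and your patch still sits on top; repeat on every upstream release until (or unless) your patches are accepted.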

I’m pretty sure that the Git for Windows project is managed as patches on top of Git. And I’ve seen the maintainer post patches to the Git mailing list saying something like: okay, we’ve been using this for months now and I think it’s time that it was incorporated into Git.[1]

I’ve seen patches posted to the Git mailing list where they talk about how this new thing (like a command) was originally developed by someone on GitHub (say) but now someone on GitLab (say) took it over and wants to upstream it. Maybe years after it was started.

Almost all changes to the Git project need to incubate for a week in an integration branch called `next` before they are merged to `master`.[1] Beyond slow testing for the Git project itself, this means that downstream projects can use `next` in their automated testing to catch regressions before they hit `master`.

† 1: Which is kind of like a “megamerge”
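The `next`-then-`master` flow can be simulated with plain branches in a throwaway repo (branch and topic names below are made up; the real Git project uses dedicated maintainer tooling on top of this):

```shell
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master repo && cd repo
git config user.email m@example.com
git config user.name Maint
echo base > base.txt && git add base.txt && git commit -qm 'base'

git branch next                      # the integration branch

# A topic graduates to next first...
git checkout -qb topic/feature
echo feature > feature.txt && git add feature.txt && git commit -qm 'feature'
git checkout -q next
git merge -q --no-ff -m 'Merge topic/feature into next' topic/feature

# ...downstream CI can test next now; after the topic has soaked,
# the same topic branch is merged to master.
git checkout -q master
git merge -q --no-ff -m 'Merge topic/feature' topic/feature
```

The key property: the very same topic commits end up reachable from both `next` and `master`, so anything caught while testing `next` is caught before `master` moves.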


Makes total sense! But what you described is maybe less than 5% of the use case here. Right tool for the right job and all that; what doesn't make sense is having this insanity in a "normal" software engineering setup where a single company owns and maintains the codebase, which is the vast majority of cases.

> incorporated in Git.[1]

Dangling footnote. I decided against adding one and forgot to remove it.


Sometimes I have several pull requests to review and none of them have any meaningful overlap (e.g. they touch code in different places, so no apparent risk of conflicts). So I've started making integration branches to test all of them in one go. But then I sometimes find things to improve upon, and I might make a few commits on top of that. Later I have to manually move those commits to the correct branch. I might also remove them from the integration branch, but git-rebase(1) is likely to just drop them as already-applied.

My mind was a little blown when I read about the megamerge strategy in Steve Klabnik's tutorial.[1]

Yes, Jujutsu's approach of autorebasing changes is very nice. Now all I have to do is to try it myself.

† 1: https://steveklabnik.github.io/jujutsu-tutorial/advanced/sim...


With jj I sometimes run `jj new a b c` to work on top of multiple changes. Then as I tweak things, `jj absorb` automatically patches the right changes.

Insanely easy and effective.


> Do I want every PR to be a long ugly list of trivial half-done commits with messages like “fix typo” or “partial checkpoint of encabulator functionality”? No. Does everything need to be hammered down into a single wad? Also no. There is a middle ground and people should be trusted to find it, and even to have their own opinion on where it is.

IMO `git merge --squash` was perhaps a cultural mistake. Few people use that command, but its cultural impact seems to have been great (if that is where the convention came from).

GitHub's squash merge turns all the commit messages into one bullet list. That's just inviting a ball-of-mud changelist.[1] But because of the all too convenient "squash" shorthand, we now have to explain every time: yes, there is a middle ground between having one monolithic commit and keeping every little trivial checkpoint and correction.

[1]: The change: the diff. The list: try to correlate each bullet point with something in the diff. (Where that even applies; some things will have been done and then undone, and won’t show up in the diff at all.)
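For reference, this is what the command itself does, sketched in a throwaway repo (branch names and messages are made up): `--squash` stages the combined diff but deliberately does not commit, so the whole feature lands as one commit with one message.

```shell
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo && cd repo
git config user.email s@example.com
git config user.name Sq
echo base > base.txt && git add base.txt && git commit -qm 'base'

git checkout -qb feature
echo one > a.txt && git add a.txt && git commit -qm 'step 1'
echo two > b.txt && git add b.txt && git commit -qm 'fix typo'

git checkout -q main
git merge --squash -q feature            # stages the combined diff, no commit yet
git commit -qm 'add encabulator support' # one commit, one message

git log --oneline   # main has exactly two commits: base + the squash
```

Both files from the feature branch arrive in a single commit; the "step 1" / "fix typo" history is flattened away, which is exactly the trade-off under discussion.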


> Might as well squash as nobody wants to deal with untested in-between crap.

I would rather deal with nice and tidy but untested commits than a very well-tested but overly monolithic squash. If you test the eventual merge, then you have done no less testing than in the squash case.

Old commits are read more than they are run.

