hackrmn's comments | Hacker News

I feel like your sentiment mirrors my thoughts exactly on this.

Since this isn't the Reddit comment section (I hear people here prefer a bit more elaboration and argumentative nuance with their $BEVERAGE), I feel compelled to add some of my own personal experience.

I don't think Windows can be fixed anymore. I think the choices Microsoft have been making for _decades_ now, with only the _mechanisms_ coming and going, have become endemic to Windows, a part of its identity. Copilot, for example, is just another gadget Microsoft simply cannot not put in. In '95 it was Clippy, but the deliveries never stop, and frankly I feel like an old man who finally decided to kick a bad habit, because I truly see now all the empty talk from Microsoft I've heard countless times before, wrapped in different packaging, and I see that Windows is like it is _by design_ and that it's bad for my health (in a different way than Linux can ever be, I feel).

Ever since Windows '95 the addition of slop has been accelerating. Admittedly, Microsoft _were_ much different then, but it's the _curve_ I am referring to, not that they were always _as bad_. Frankly, the "churn" is insane now; there's an adage I can't quite recall, something to the effect that the operating system expands to fill the available resources, and Microsoft are there to prove it.

The problem is also they are experimenting on their users to no end. I don't mind being part of the "user experiment" for "user experience" but how many decades do they need to arrive at the same fundamental conclusions -- that people prefer less bloat, and fewer interruptions in their face? Occam's Razor tells me it's rather that Microsoft is pretending to care but their agenda is their own alone (surprise).

Just the other day I had to spend 2 hours trying to "fix" some very-background OneDrive update, because I suppose I am sucker enough to use OneDrive -- one of the least liked Microsoft products I've had the misfortune to use -- with Windows using my laptop as a Bitcoin farm, wasting cycles in some infinite loop produced by what evokes comparisons to those monkeys with typewriters. Half a dozen PowerShell commands and 3-4 reboots later, the `wsappx.exe` process was finally healthy enough to idle. These things happen constantly to people everywhere, and there's little Microsoft can -- or wants to -- do about it. It's a cost they're willing to have their users pay.

To stop rambling, one of these days -- summer vacation perhaps -- I will remove the blasted thing finally (after decades of using both Windows and Linux) and grit my teeth through Linux, which I have tried avoiding only because I am on a Thinkpad and there's always another tweak that's needed for the whole thing to work as well as Windows does on a _good_ day. To be clear, I prefer Linux by and large, it's just that I want to avoid spending weekends configuring sleep, power states, Trackpoint, full-disk encryption, the docking station, etc.

The fact I am going to do it anyway, just to rid myself of the Windows experience that's just been getting worse and worse, says it all really.


Windows isn't fixable because Microsoft isn't fixable.

Microsoft's biggest and most consistent product is contempt for its users - consumers especially, but also business users.

When you understand that all of Microsoft's offerings are vectors for that contempt, the rest falls into place.

A user-centric Microsoft is an oxymoron. The company is literally incapable of it.


You're probably correct. Windows can be fixed, but it's stuck in the hands of MS, who never will fix it, so ideas on how to fix it are little more than intellectual exercises.

> I don't think Windows can be fixed anymore. I think the choices Microsoft have been making for _decades_ now, with only the _mechanisms_ coming and going, have become endemic to Windows, a part of its identity.

I'm not so sure that Windows is unfixable. It could probably be fixed, but doing that would require rebuilding every burned bridge back to its old standard, and probably then some, and that's something the relevant people aren't going to agree to do (since they were the ones who burned them).

Mandatory updates? Now they aren't any more.

OneDrive stole your files and deleted them? Now OneDrive is enabled/disabled on first setup.

Shitty start menu? Now you can pick which one you want, all the way back to the Windows 7 one.

Shitty right click menu? Now the old one is back.

All AI? Now there's a toggle on install to enable/disable it all.

Settings menu sucks? Now the old Control Panel is back as standard.

Telemetry? How about no?

If MS did all of these things (and probably more), their trust level would rise sky-high, since they'd be doing tangible things to fix the pain points we've all talked about. So far they've hit one point out of probably 50+, and many of the remaining ones are much harder to fix than updates being forced.


Someone tries to scam you, steal from you, beat you up. They then make efforts to stop doing that, and their trust would rise sky-high? What does someone have to do to earn that kind of loyalty? Would you apply this to anything else?

If they've given all the money back that they've scammed and otherwise made all the people they've hurt whole again, and are then continuing to provide a service people find use in, then yes. I'd probably need some time to be convinced that that's what's happened, and that they've truly changed. MS obviously isn't there, but there are theoretical worlds where this can happen.

Obviously, Microsoft can't give people back their deleted OneDrive files, but they can make good on a promise that it will never happen again (given that their efforts are founded in reality and not marketing speak), and hide behind a shield of 'that wasn't our intention'. The same goes for most other things you could complain about in Windows.

If you have no reason to believe that Windows will screw you over, since MS has course-corrected on all major points of contention, then why not stick around? (The answer is that MS may change course again, but for those who haven't jumped ship, I'm sure this will provide good enough reason to stick around. It's not like the ship isn't providing them any utility. They've stuck around this long for a reason.)


Yes, at some point broken trust dictates that no amount of fixing will ever restore it.

> These machines are roughly the size of double-decker buses. To ship one requires 40 freight containers, three cargo planes, and 20 trucks. They are the world’s most complex objects. Each contains over one hundred thousand components, all of which have to be perfectly calibrated for the machine to produce light consistently at the right wavelength.

As a software engineer by trade, the above parable communicates to me a few very important things and little else by comparison: that the machines are ultimately fragile and nowhere near "optimised", since the complexity is by the article's own admission substantial, to put it mildly; that the machine is not exactly a commodity, as one of the million pieces breaking subtly likely renders it inoperable; that its cost is proportional to its complexity (read: astronomic); and that the mere fact it's a focal point of geopolitics only supports the rest of the argument -- it's a machine of our current stone age, much like siege engines were at some point the closely guarded secret win-or-lose multipliers of feudal culture.

I mean it's certainly interesting to read about the complexity, but reducing the complexity and commoditising the whole thing is what's really going to be impressive I think :-)

I am probably speaking out against the nerd in us, and none of what I said should detract from enjoying the article or the subject, it's just that I think complexity here is the giveaway of us not having conquered UVL exactly, not quite yet :-) Or maybe we lack the right materials which would allow us to reduce the machine or make it less complex or prone to calibration related errors.


Complexity doesn't necessarily mean it's suboptimal. Lithography and nanofab usually involve a whole range of disparate and wildly exotic processes with extreme vacuum, plasmas, electron guns; any number of crazy and dangerous process gases like H2, HF, or silane; and occasionally raw materials like iridium and rhodium. And that's all without the actual lithography. When your margin for error is measured in single atoms and the number of features per die outnumbers the planet's population 2:1, physical laws start to stand in the way of simplification.

The one 'machine' encompasses more disciplines than most universities offer. It's really a whole bleeding edge factory compressed into a room.


> reducing the complexity and commoditising the whole thing is what's really going to be impressive I think

What do you think "cutting edge" is, or Moore's law has been?

At one point you could have written a similar article about, say, 165nm, which is now going to the scrapyard. In the past these things have always gradually got more available and easier, with higher yields - but a new, better one appears.

But at some point we're going to reach an equilibrium with physics itself. Where, even with all the complexity we can muster, it's not possible to make it easier or get smaller.


Indeed, all this reminds me of the marvel that is mechanical timekeeping - incredibly complex engineering that would ultimately be surpassed by dirt cheap electronics.

What is the corresponding revolution in chip production? I imagine something like FPGAs for lithography - a wafer that can somehow work on another wafer in a sandwich-like configuration. Such a process could potentially improve on each iteration and thus get very good, very fast.


Re-factoring code is a _panacea_ -- more likely, the factors that contributed to the code needing re-factoring in the first place are still very much in place to make the same condition repeat eventually, and another round you go. The factors that produce the causes of re-factoring usually border on psychological causes embedded deeply within the brains of the developer or developers who own the code. Habits, beliefs, convictions, even "professional traumas". Related here is Conway's Law, where the team, for all its individual capacity and capability, cannot but build software that mimics the structure of the developers' ultimate (larger) organisation, thus tying the success of the former to the success of the latter. Re-factoring will largely just repeat the outcome if the organisation hasn't changed.

The exception is obviously a team approaching someone else's codebase -- including that of their predecessor, if they can factor in Conway's Law -- to re-factor it.

But the same person or persons announcing re-factoring? I always try to walk away from those discussions, knowing very well they're just going to build a better mouse trap. For themselves.

Don't get me wrong, iterating on your own brain's earlier product is all well and good, but it takes _more_ to escape the carousel. It takes sitting down, noting the primary factors driving the poor architecture, and taking a long hard look in the mirror. Not everything is subjective or equivalent, as much as many a developer would like to believe. It's very attractive to stick to "as long as we're careful and diligent, even a sub-optimal design can be implemented well". No, it won't be -- this is a poster-child exception to that rule if there ever was one -- your _design_ is the root, and from it and it alone springs the tree that you'll need to accept or cut down; trimming it only does so much.


Did you mean to say placebo?

A panacea is a cure-all. So if code refactoring is a panacea then we should refactor code often.


I meant to write "not a panacea", my bad. It's not the universal cure people think it is. And people _do_ think that re-factoring will magically solve problems, while it doesn't do all that much in practice, even less so when you factor in the cost of the re-factoring itself.

I find multiple "strange" flaws in the article, even allowing for my appreciation of Ada _and_ of the article as an essay:

* The article claims only Ada has true separation of implementation vs specification (the interface), but as far as I am able to reason, e.g. JavaScript is also perfectly able to define "private" elements (not exported by an ES6 module) while they remain usable in the module that declares them -- if this isn't the "syntactical" (and semantic) separation ascribed to Ada, what difference is the article trying to point out?

* Similarly, Java is mentioned where `private` apparently (according to the article) makes the declaration "visible to inheritance, to reflection, and to the compiler itself when it checks subclass compatibility" -- all of which is false if I remember my Java correctly -- a private declaration is _not_ visible to inheritance, and consequently the compiler can ignore it / fast-track it in a subclass, since it works much the same as it did in the superclass, making the "compatibility" a guarantee by much the same consequence

I am still reading the article, but having discovered the above points, they detract from my taking it as seriously as I set out to -- wanting to identify value in Ada that we "may have missed", a view the article very much wants to front.


> The article claims only Ada has true separation of implementation vs specification (the interface), but as far as I am able to reason, e.g. JavaScript is also perfectly able to define "private" elements (not exported by an ES6 module) while they remain usable in the module that declares them -- if this isn't the "syntactical" (and semantic) separation ascribed to Ada, what difference is the article trying to point out?

This is false. For example in Ada you can write:

    package Foo is
        type Bar is private;                       -- clients see only an opaque type
        procedure Initialize (Item : in out Bar);
    private
        type Bar is record                         -- full definition, hidden from clients
            Baz : Integer;
            Qux : Float;
        end record;
    end Foo;
Users of the Foo package know there is an opaque type called Bar. They can declare variables of the type and use the defined API to operate on it, but they cannot reference the implementation-defined private members (Baz, Qux) without compile errors. Yes, Ada does give you the power and tools to cast it, in a very blatant and obviously unsafe way, as another type or as an array of bytes or whatever, but if you're doing stuff like that you have already given up.

In JavaScript there are no such protections. For example if you have a module with private class Bar and you export some functions that manipulate it:

    class Bar {
        constructor() {
            this.Baz = 420;
            this.Qux = 1337.69;
        }
    }
    export function Initialize() {
        return new Bar();
    }
In client code you have no issue inspecting and using the private values of that class:

    import { Initialize } from 'module';
    let myBar = Initialize();

    myBar.Baz = 42069; // works just fine
    Object.keys(myBar).forEach(console.log); // you can enumerate its properties
    myBar.Quux = 'Corge'; // add new properties
    delete myBar.Baz; // I hope no functions rely on this...
Using the private parts of Bar should 100% be a compilation error and even the most broken languages would have it at least be a runtime error. Lmao JS.


It might interest you to know JS has real private fields, formally introduced in ES2022. [1]

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Who wants to bet that GP never reads that link and proceeds to continue to complain about the same outdated JavaScript issues for the next two decades.

Not to mention the fact that GP's issue only matters if you're using classes. You can define module-level variables and simply not export them and they are 100% private. Or, they can just define the variable inside of a function and protect it by a closure. I can't imagine writing multiple paragraphs of complaints about a language that I don't actually understand how to use.


I've read it and I agree private properties on classes satisfy the requirement. It does allow you to hide implementation details and I was unaware it was added; though with anything JavaScript you can usually find a way around it.

However I don't really want to talk to you. You are rude.


The reflection part is true. Private members are accessible to reflection in Java. You can call setAccessible(true) and then modify the contents of a String, for example.


LLMs are weaponized Gell-Mann amnesia when it comes to writing for humans.


Unfortunately, you're right. It is LLM-written: https://www.pangram.com/history/8b17aa57-ce1f-4f46-85f4-4db0...


I don’t need a tool to tell me that, and if it was a well-written, interesting, accurate essay, I wouldn’t care.

But it’s none of those three things.

It is, however, the result of a model trained very effectively to give humans—including hn readers—what they want.


These tools are no more trustworthy than any other LLM slop-extruder.


The article states, quoting:

"JavaScript's module system — introduced in 2015, thirty-two years after Ada's — provides import and export but no mechanism for a type to have a specification whose representation is hidden from importers."

Then:

"in Ada, the implementation of a private type is not merely inaccessible, it is syntactically absent from the client's view of the world."

Am I missing something -- a JavaScript module is perfectly able to declare a private element by simply not exporting it, accomplishing what the author ascribes to Ada as being "not merely inaccessible, it is syntactically absent from the client's view of the world"? The same would go for some of the other languages the author somewhat carelessly lumps together with JavaScript.

I loved the article, and I have always been curious about Ada -- more than about some of the more modern languages, in fact -- but I just don't see where Ada separates interface from implementation in a manner that's distinctly better than or different from e.g. JavaScript modules.


Assuming we’re talking about TypeScript here, because JavaScript doesn’t have exportable types… Any instance in JavaScript, whether or not its type is exported, is just an object like any other, that any other module is free to enumerate and mess with once it receives it. In Ada there are no operations on an instance of a private type except the ones provided by the source module.

In other words, if module X returns a value x of unexported type T to module Y, code in module Y is free to do x.foo = 42.


* only if `x` is _an object_ (read: has methods)

To preempt the obvious: yes, I know nearly _everything_ in JavaScript is an object, but a module exporting a `Function` can expect the caller to use the function, not enumerate it for methods. And the function can use a declaration in the module that wasn't exported, with the caller none the wiser about it.
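
For illustration, a minimal sketch of what I mean (the module name and values here are made up):

    // counter.mjs
    let count = 0;                   // module-level, never exported
    export function next() {         // the only thing importers can reach
        return ++count;
    }
An importer can call `next()` all day long but has no handle on `count` itself -- though, as pointed out elsewhere in this thread, that only protects module-level bindings, not the innards of whatever objects the module hands out.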


I think you're confusing values with types. JS modules can certainly keep a value private, but there's no way for them to expose an opaque type, because that concept simply doesn't exist in JS. The language only has a few types, and you don't get to make more of them. TypeScript adds a lot of type mechanism on top, but because it's restricted to being strippable from the actual JS code, it doesn't fundamentally change that.


Here's an opaque type wrapping numbers, in JavaScript:

    class Age {
        #value;
        constructor(value) {
            if(typeof value != "number") throw new Error("Not a number");
            this.#value = value;
        }
    }
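For what it's worth, a quick usage sketch (the value 30 is made up) -- reaching for `#value` from outside the class body is a hard error, and the field doesn't even enumerate:

    const a = new Age(30);
    // console.log(a.#value);      // SyntaxError: private fields are only accessible inside the class
    console.log(Object.keys(a));   // [] -- nothing of Age's internals shows up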


That field is opaque, but the entire type isn’t, no matter what you do. E.g.,

    let x = new Age(30);          // the constructor needs a number or it throws
    x.notSoOpaque = 42;           // nothing stops us bolting on new properties
    console.log(x.notSoOpaque);
We can all agree to layer conventions on top of the language so we just don’t do stuff that violates the opacity. But the same is true of assembly language.

Assigning to `notSoOpaque` (or any other) property on an object in this case doesn't modify its behaviour, because the property isn't structurally part of the interface -- there's no code defined by the creator / owner of the object (e.g. through the class) that uses it. So it doesn't violate the contract. Private fields are inaccessible, everything else is accessible and is thus part of the interface. I am not saying (and never did) this is the same level as Ada, but your example looks contrived to me -- I don't get the relevance.

Expecting something is different from it being impossible.


Can't argue with that.

But in defence of JavaScript -- since it enjoys routine bashing, not always undeserved -- it now has true runtime-enforced private members (the syntax is prefixing the name with `#`, strictly inside a class declaration), but yeah -- this doesn't invalidate the statement that it "kind of got there 32 years after Ada, stumbling over itself".


JavaScript has supported real data hiding since the beginning using closures. You define your object in a function. The function's local variables act as the private members of the object. They are accessible to all the methods but completely inaccessible to consumers of the object.
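
A minimal sketch of that pattern (the names are made up):

    // factory function: the object's state lives only in the closure
    function makeCounter() {
        let count = 0;                          // effectively a private member
        return {
            increment() { return ++count; },
            current() { return count; }
        };
    }

    const c = makeCounter();
    c.increment();
    console.log(c.current());   // 1
    console.log(c.count);       // undefined -- the state is not reachable as a property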


I completely forgot about closures. Frankly, they're still my go-to method for encapsulation, in part because the Java-isation of JavaScript -- private class members and the onslaught of the "Alan Kay's ideas meet Simula" OOP flavour -- is relatively new, and I am still unsure whether it's a critical thing to have in JavaScript.


See my comment here for an example: https://news.ycombinator.com/item?id=47810686


Hey, $DEITY did its absolute best with the constraints and the requirements. But hey, can't please everyone apparently. Be happy you can relieve yourself well past the intended warranty period. The parts were designed to be easily _aftermarket_ replaceable with sufficient advances in technology, retaining the fundamental design without changes.


First, taking the opportunity this discussion presents, I'd like to state for the record, AGAIN, that I have long appreciated the Win32 API and still do -- not because it's great in and of itself necessarily, it certainly has more warts than your average toad native to the Amazon, but because it de facto worked for a long while through simple iteration (which grew warts too), _and_ while it didn't demand that Microsoft have everything for _everyone_, it kept Win32 development stable "at the bottom", as the "assembly" layer of Windows development, which everything else was free to build on, _in peace_.

Ironically -- looking at the volume of APIs and SDKs Microsoft is churning out today by comparison, through sheer mass and velocity -- they've proven utterly unable to be the sole guardians of their own operating system. There's a plethora of articles shared on Hacker News about this inability of theirs to converge on some subset of software that a Windows developer can use to just get a window or two of their own onto the screen. Win32 _gave you exactly that_. And even a `CreateWindow2` export would have worked beyond what `CreateWindow` or `CreateWindowEx` couldn't provide, because you could count on someone who loved it more to abstract it with a _thin_ layer like wxWidgets etc. Things _worked_.

Now there's internal strife between the .NET and "C++ or bust" teams at Microsoft, and the downstream developers are everything between confused and irritated. This is entirely self-inflicted, Microsoft. It's also a sign of bloat -- if the company could split these groups into subsidiaries, they could compete on actual value delivered, but under the Microsoft umbrella the result is entirely different.

Second -- and this is a different point entirely -- not two weeks ago there were at least _two_ articles shared here, which I read with a mix of mild amusement and sober agreement, about the _opposite_ of what the author of the article linked above advocates for -- _idiomatic_ design (usually one that's internally consistent):

* https://news.ycombinator.com/item?id=47738827 ("Bring back Idiomatic Design")

* https://news.ycombinator.com/item?id=47547009 ("Make macOS consistently bad unironically")

What I am getting at is that these are clearly different people vocally preferring different -- _opposite_ -- user experiences. From my brief stint with graphic design, I know there's no silver bullet there either -- consistency is on some level in a locked-horns conflict with creativity (which in part suggests _defiance_) -- but it's just funny that, with the above, we now have examples of both, to which I should add:

> This is why we can't have nice things!

Also, while we "peasants" argue about which way good design should lean -- someone likes their WinAmp-like alpha-blended non-uniform windows and someone else maintains anything that's not defined by the OS is sheer heresy -- the market for one or the other is kept well fueled and another round on the carousel we all go (money happily changing hands).

For my part I wish we'd settle, as much as settling can be done. The APIs should support both, but the user should get to decide, not the developer. Which is incidentally what CSS was _ideally_ kind of supposed to give us, but we're not really there with that, and I am digressing.


The ugly truth indeed. It sucks to die for a world you won't get to enjoy, but sometimes it's the only viable solution. Much of our progress has been about minimising casualties and human suffering in order to sustain the world most can agree is better (than the alternatives), but it seems the wave now just hits its troughs farther apart -- and when it does hit one, it's like taking a breath before the water swallows you, and without training it's quite the panic and suffering (and prospect of death). We know it's in our bones, but we want to forget, because our bodies are made to interpret pain in the most direct and literal sense -- re-conditioning is always painful too. Strong people create weak people who create strong people, etc.

So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.


I don't want to stir up the hornet's nest here, but in my humble opinion the entire problem rests on the unabated and unchecked modern, "late-stage" capitalism model, championed by the U.S. and since exported everywhere else, where it has taken good root -- even in Europe, where it as of yet faces a few more checks and balances (which unsurprisingly draws a lot of ire from its acolytes and priests across the Atlantic).

The Soviet Union lost due to an inferior societal model, but the winning model, too, has drifted far from what was once a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more to end up among those who have "made it". I could go on about the irony of wanting to escape the pit while not wanting to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks or Trumps, or their hordes of peripheral elites.

Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".

The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to effect change on the "system". I hope I am wrong, but for all their crimes, anarchy and, in a twist of irony, inhumane treatment of opponents, the October revolutionaries in Russia -- yes, the Bolsheviks -- were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them to the degree our gadgets allow the "governments" of today, nor were they aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put into action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.

Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding that people become civilised and refrain from throwing eggs (or Molotovs) at celebrities who are about to swing _entire governments_ is not seeing the forest for the trees.

There's also no precedent, in a way -- the historical cataclysms we have created ourselves have been on a smaller scale, so we're spiralling outwards, and not all of the tools we think we have are going to have the effect required to enact the change we want. In the worst case, of course.


Which part of the societal model do you find inferior? I thought it was mostly the economics and bureaucracy.



I started using Git around 2008, if memory serves. I have made myself more than familiar with the data model and the "plumbing" layer, as they call it, but it was only a year ago -- after more than a decade and a half of using Git, in retrospect -- that a realisation started dawning on me that most folks probably have a much easier time with Git than I do, _due_ to them not caring as much about how it works, _or_ because they just trust the porcelain layer and ignore how "the sausage is made". For me it was always an either-or situation -- I still don't trust the high-level switches I discover trawling Git's manpages unless I understand what the effect is on the _data_ (_my_ data). Conversely, I am very surgical with Git, treating it as a RISC processor -- most often at the cost of development velocity, for that reason.

It's started to bug me really badly, because in my latest employment I am expected to commit things throughout the day, but my way of working just doesn't seem to align with that. I frequently switch context between features or even projects (unrelated to one another as far as Git is concerned), and when someone looks at me waiting for an answer as to why it takes half a day to create 5 commits, I look back at them with the same puzzled look they give me. Neither of us is satisfied. I spend most of the development time _designing_ a feature, then I implement it, and occasionally it proves to be a dead end, so everything needs to be scrapped or stashed "for parts"; rinse, repeat. At the end of the road developing a feature I often end up with a bunch of unrelated changes -- especially if it's a neglected code base, which isn't out of the ordinary in my place of work, unfortunately. The unrelated changes must be dealt with, so I am sitting there with diff hunks trying to decide which ones to include, occasionally even resorting to hunk _editing_. There's a lot of stashing, too. Rebasing is the least of my problems, incidentally (someone said rebasing is hard on Git users), because I know what it is supposed to do (for me), so I deal with it head on and just reduce the whole thing to a series of simpler merge-conflict resolution problems.

But even with all the Git tooling under my belt, I seem to have all but concluded that Git's simplicity is its biggest strength but also no small weakness. I wish I didn't have to account for the fact that Git stores snapshots (trees), after all -- _not_ the patch files or differences between snapshots that it shows. Rebasing creates copies or near-copies, and it's impossible to isolate features from the timeline their development intertwines with. Changes in Git aren't commutative, so when my human brain naively thinks I could "pick" features A, B, and C for my next release, ideally with bugfixes D, E and F too, Git just wants to hand me a single merge commit, except that the features and/or bugfixes may not all neatly lie along a single shared ancestral stem, so either the merging is non-trivial (divergence of content compounded over time) or I solve it by assembling the tree _manually_ and using `git commit-tree`, just to not have to deal with the more esoteric merge strategies. All these things _do_ tell me there is something "beyond Git", but it's just intuition, so maybe I am just stupid (or too stupid for Git)?

I started looking at Pijul (https://pijul.org/) a while ago, but I feel like a weirdo who found a weird thing no one is ever going to adopt because it's, well, weird. I thought relying on a "theory of patches" was more aligned with how I imagined a VCS might represent a software project in time, but I also haven't gotten far with Pijul yet. It's just that somewhere between Git and Pijul lies the better VCS [than Git] I desire to find, and I suspect I am not the only one -- hence the point of the article, I guess.

