Hacker News | dapperdrake's comments

Have had to do this live in a production MtG card management application. It worked well. The owner kept their MtG card money. Lisp saved the day.

Unix and POSIX are fractally a booby trap.

Alpine Linux has a better shot at acceptable compile times.

Some FOSS software seemed to maximize kernel IO, last time I ran Gentoo.


> well behaved readers.

Around and around we go.


GNU coreutils is known for adding command line options.

One of the big philosophical differences from the BSDs.

For a human being, it sucks both ways.


Welcome to building something new.

New things can be made optional and tested outside production, and should not be rolled out in an LTS edition.

Isn't this how Kernighan and the late Ritchie (K&R) ended up with unix and C?

Honestly, brilliant guys.

When C got its own standards committee they even rejected Ritchie's proposal to add fat pointers to C before it was too late to add them. Instead, we got the C abstract machine.


Filesystem access is mostly treated by users as serialized ACID transactions on "files in directories."

"Managing this resource centrally" is where unix syscalls came from. An OS kernel can be used like a specialized library for ACID transactions on hardware singletons.

People then got fancy with virtual memory, interrupts, signals, time-slicing, re-entrancy, thread-safety, and injectivity.

It doesn’t matter whether you call the "kernel library" from C, C++, Fortran, BASIC, Golang, bash, Rust, etc.
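A quick illustration of that point in Python, whose standard `os` module is just one of many front-ends to the same kernel calls (a sketch; the temporary directory is only there to make it self-contained):

```python
import os
import tempfile

# Python's os module is a thin wrapper over the same kernel "library":
# os.chdir() wraps chdir(2) and os.getcwd() wraps the getcwd syscall.
workdir = tempfile.mkdtemp()
os.chdir(workdir)            # one syscall, same as from C or Rust
print(os.getcwd())
```

The language runtime changes; the syscall boundary does not.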


Unix embodies this, as well.

When K&R created unix and C, there was still the option of moving functionality that was better off in the "kernel" into the kernel.

Now we have "standards" that even cause headaches between Linux and the BSDs.

Linux back-propagates stuff like mmap, io_uring, etc. to where it belongs. In this way it is like the original unix. And deservedly running on most servers out there.


Facetious reply:

> However, GNU software tends to work very hard to avoid arbitrary limits [1].


Yes? The quote says "tends to", and you can still cd into that directory, albeit not in a single invocation. Windows has similar limitations [0]; it's just that their MAX_PATH is 260, so the limit is more noticeable... and IIRC the hard limit of 32 K for paths is non-negotiable.

[0] https://learn.microsoft.com/en-us/windows/win32/fileio/maxim...


Isn’t "cd" a unix syscall, because it changes the process's working directory? There was something written somewhere that it cannot be a unix utility for this very reason, but has to be a shell built-in. The syscall is a "single operation" from the point of view of a single-threaded process.

What did I get wrong there?

Side note: Missing:

  bash$ man 1 cd ;

Useful output:

  bash$ help cd ;

Yes, it’s a shell builtin that makes the shell execute a chdir() syscall. Therefore it isn’t subject to argument length limits imposed by the kernel when executing processes. But it is still subject to path length limits imposed by the kernel’s implementation of chdir() itself. While the shell may be a GNU project (bash), the kernel generally is not (unless you are running Hurd), so this isn’t GNU’s fault per se.
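A sketch of why the exec-time limit bites external commands but never builtins, assuming a Linux kernel (there, MAX_ARG_STRLEN caps a single execve() argument at 128 KiB; `/bin/echo` is just a stand-in external command):

```python
import errno
import subprocess

# A builtin like cd never goes through execve(), so argument-length
# limits at exec time do not apply to it. An external command does:
# on Linux, a single execve() argument may not exceed MAX_ARG_STRLEN
# (32 pages, i.e. 131072 bytes with 4 KiB pages).
huge = "x" * 200_000  # comfortably over MAX_ARG_STRLEN
try:
    subprocess.run(["/bin/echo", huge], capture_output=True)
    print("no limit hit")
except OSError as e:
    print(errno.errorcode[e.errno])  # E2BIG on Linux
```

The kernel rejects the exec before `/bin/echo` ever runs; a builtin receiving the same string is limited only by the shell's own memory.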

However, the shell could theoretically chunk long cd arguments into multiple calls to chdir(), splitting on slashes. I believe this would be fully semantically correct: you are not losing any atomicity guarantees because the kernel doesn’t provide such guarantees in the first place for lookups involving multiple path components. I’m not surprised that bash doesn’t bother implementing this, and I don’t know if I’d call that an “arbitrary limitation” on bash’s part (as opposed to a lack of workaround for another component’s arbitrary limitation). But it would be possible.
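A minimal sketch of that chunking idea in Python (the helper name `chunked_chdir` and the `max_chunk` threshold are hypothetical; bash does nothing like this):

```python
import os

def chunked_chdir(path, max_chunk=4096):
    """Reach `path` via several chdir() calls, so that no single call's
    argument is longer than max_chunk bytes. (Hypothetical helper.)
    A single component longer than NAME_MAX still fails: that kernel
    limit cannot be split away by chunking."""
    if path.startswith("/"):
        os.chdir("/")                  # anchor absolute paths first
        path = path.lstrip("/")
    chunk, size = [], 0
    for comp in (c for c in path.split("/") if c):
        if chunk and size + len(comp) + 1 > max_chunk:
            os.chdir("/".join(chunk))  # descend one chunk at a time
            chunk, size = [], 0
        chunk.append(comp)
        size += len(comp) + 1
    if chunk:
        os.chdir("/".join(chunk))
```

As the comment argues, no atomicity is lost: multi-component lookups are not atomic in the kernel to begin with.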


> What did I get wrong there?

Nothing; you just missed some other considerations. For instance, Linux generally follows POSIX. That's what the 2004 version has to say about chdir's errors:

    ERRORS
    The chdir() function shall fail if:

    ...
    [ENAMETOOLONG]
        The length of the path argument exceeds {PATH_MAX} or a pathname component is longer than {NAME_MAX}.
    ...

    The chdir() function may fail if:
    ...
    [ENAMETOOLONG]
        As a result of encountering a symbolic link in resolution of the path argument, the length of the substituted pathname string exceeded {PATH_MAX}.

However, subsequent versions of POSIX moved the "length of the path argument exceeds {PATH_MAX}" case into the optional ("may fail") part.
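The [ENAMETOOLONG] case is easy to trigger directly; a sketch, assuming Linux's PATH_MAX of 4096 bytes:

```python
import errno
import os

# A path argument longer than PATH_MAX (4096 bytes on Linux) makes
# chdir(2) fail with ENAMETOOLONG before any component is looked up.
too_long = "a/" * 3000   # ~6000 bytes, well over PATH_MAX
try:
    os.chdir(too_long)
except OSError as e:
    print(errno.errorcode[e.errno])  # ENAMETOOLONG on Linux
```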

Not any longer, unless you keep the default enabled for backwards compatibility with older Windows software.

