Hacker News | orlp's comments

What is your Python version? Splice was added in 3.10.

https://docs.python.org/3/library/os.html#os.splice


Edited with an update.

Vec::set_len is by no means deprecated. The lint you linked only covers a very specific unsound pattern using set_len.

Indeed, and it doesn't need to be deprecated, because it's an API explicitly designed to give you low-level control where you need it, and because it is appropriately defined as an `unsafe` function with documented safety invariants that must be manually upheld in order for usage to be memory-safe. The documentation also suggests several other (safe) functions that should be used instead when possible, and provides correct usage examples: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.set... .

> and because it is appropriately defined as an `unsafe` function with documented safety invariants that must be manually upheld in order for usage to be memory-safe.

Didn't we learn from C, and isn't the entire raison d'être for Rust, that coders cannot be trusted to follow rules like this?

If coders could "(document) safety invariants that must be manually upheld in order for usage to be memory-safe," there'd be no need for Rust.

This is the tautology underlying Rust, as I see it.


No, this is mistaken. Rust provides `unsafe` functions for operations where memory-safety invariants must be manually upheld, forces callers to use `unsafe` blocks in order to call those functions, and then provides tooling for auditing unsafe blocks. Want to keep unsafe code out of your codebase? Add `#![forbid(unsafe_code)]` to your crate root, and all unsafe code becomes a compiler error. Or add a check in your CI that prevents anyone from merging code that touches an unsafe block without sign-off from a senior maintainer. And/or add unit tests for any code that uses unsafe blocks and then run those tests under Miri, which will loudly complain if you perform any memory-unsafe operations. You can also enable Clippy's `undocumented_unsafe_blocks` lint so that you'll never forget to document an unsafe block.

Rust's culture is that unsafe blocks should be reserved for leaf nodes in the call graph, wrapped in safe APIs whose usage does not impose manual invariant management on downstream callers. Internally, those APIs represent a relatively minuscule portion of the codebase upon which all your verification can be focused. So you don't need to "trust" that coders will remember not to call unsafe functions needlessly, because the tooling is there to have your back.
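As a minimal sketch of that "wrapped in safe APIs" point (illustrative only; `get_middle` is a made-up helper, and the standard library's slice indexing already does this kind of thing internally):

```rust
/// A safe wrapper around an unsafe operation: the bounds check makes the
/// `get_unchecked` call sound, so callers never need `unsafe` themselves.
fn get_middle(data: &[u8]) -> Option<u8> {
    if data.is_empty() {
        None
    } else {
        // SAFETY: data.len() / 2 < data.len() because data is non-empty.
        Some(unsafe { *data.get_unchecked(data.len() / 2) })
    }
}

fn main() {
    assert_eq!(get_middle(&[1, 2, 3]), Some(2));
    assert_eq!(get_middle(&[]), None);
}
```

The unsafe block is a leaf: everything above it in the call graph can be written, reviewed, and tested as ordinary safe code.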

> Want to keep unsafe code out of your codebase?

And how is this feasible for a systems language? Rust becomes too impotent for its main use case if you only use safe Rust.

My original point still stands... Coders historically cannot be trusted to manually manage memory, unless they're Rust coders, apparently.

> So you don't need to "trust" that coders will remember not to call unsafe functions needlessly, because the tooling is there to have your back.

By definition, it isn't possible for a tool to reason about unsafe code, otherwise the Rust compiler would do it


> And how is this feasible for a systems language? Rust becomes too impotent for its main use case if you only use safe Rust.

No, this is completely incorrect, and one of the most interesting and surprising results of Rust as an experiment in language design. An enormous proportion of Rust codebases need not have any unsafe code of their own whatsoever, and even those that do tend to confine unsafe blocks to an extreme minority of files. Rust's hypothesis that unsafe code can be successfully encapsulated behind safe APIs suitable for the vast majority of uses has been borne out in practice. Ironically, the average unsafe block in practice exists to call a function written in C, which is a symptom of not yet having enough alternatives written in Rust. I have worked on both freestanding OSes and embedded applications written in Rust--both domains where you would expect copious usage of unsafe--where I estimate that less than 5% of the files actually contained unsafe blocks, meaning a 20x reduction in the effort needed to verify them (in Fred Brooks units, that's two silver bullets' worth).

> Coders historically cannot be trusted to manually manage memory, unless they're Rust coders, apparently

Most Rust coders are not manually managing memory on the regular, or doing anything else that requires unsafe code. I'm not exaggerating when I say that it's entirely possible to have spent your entire career writing Rust code without ever having been forced to write an `unsafe` block, in the same way that Java programmers can go their entire career without using JNI.

> By definition, it isn't possible for a tool to reason about unsafe code, otherwise the Rust compiler would do it

Of course it is. The Rust compiler reasons about unsafe code all the time. What it can't do is definitively prove many properties of unsafe code, which is why the compiler conservatively requires the annotation. But there are dozens of built-in warnings and Clippy lints that analyze unsafe blocks and attempt to flag issues early. In addition, Miri provides an interpreter for running unsafe code, giving you dynamic rather than static analysis.


> No, this is completely incorrect,

Show me system-level Rust code that only uses safe Rust, then... You can't, because it's impossible. It doesn't matter that it's a minority of files (!), the simple fact is you can't program systems without using unsafe. Rewrite the C dependencies in Rust and the amount of unsafe code increases massively.

> Most Rust coders are not manually managing memory on the regular

Another sidestep. If coders in general cannot be trusted to manage memory, why can a Rust coder be trusted all of a sudden?

> But there are dozens of built-in warnings and Clippy lints that analyze unsafe blocks and attempt to flag issues early.

We already had that, it wasn't enough, hence... Rust, remember?


You are missing the forest for the trees here. The goal of Rust's `unsafe` isn't to prevent you from writing unsafe code. It's to prevent you from writing unsafe code by accident. That was always the goal. If you reread the comments through that lens, I'm sure they'll make more sense.

I think you’re deliberately being obtuse here, and if you don’t see why, you should probably reflect on your reasoning.

I’ve been using Rust for about 12 years now, and the only times I’ve had to reach for `unsafe` was to do FFI stuff. That’s it. Maybe others might have more unsafe code and for good reasons, but from my perspective, I don’t know wtf you’re talking about.


> Maybe others might have more unsafe code and for good reasons, but from my perspective, I don’t know wtf you’re talking about

"well I don't need to use unsafe that much so I don't know what your point is" sounds like you don't really have an answer.


Rust has never been about outright eliminating unsafe code, it's about encapsulating that unsafe code within a safe externally usable API.

When creating a dynamically sized array type, it's much simpler to reason about its invariants when you can assume only its public methods have access to its length and capacity fields, rather than trusting the user to remember to update those fields themselves.

The above is an analogy, which is obviously fixed by using opaque accessor functions, but Rust takes it further by encapsulating raw pointer usage itself.

The whole ethos of unsafe Rust is that you encapsulate usages of things like raw pointers and mutable static variables in smaller, more easily verifiable modules rather than having everyone deal with them directly.


The issue with C is that every single use of a pointer needs to come with safety invariants (at its most basic: when you pass a pointer to my function, do I take ownership of it or not?). You cannot legitimately expect people to be that alert 100% of the time.

Inversely, you can write whole applications in rust without ever touching `unsafe` directly, so that keyword by itself signals the need for attention (both to the programmer and the reviewer or auditor). An unsafe block without a safety comment next to it is a very easy red flag to catch.
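To make the contrast concrete, here's a hedged sketch (the function names are made up): in Rust, whether a function takes ownership is part of its signature, so the question from the C example above is answered by the type system rather than by documentation.

```rust
// Ownership transfer is visible in the signature itself.
fn consumes(s: String) -> usize {
    s.len() // `s` is dropped (freed) when this function returns
}

// A borrow leaves ownership with the caller.
fn borrows(s: &str) -> usize {
    s.len()
}

fn main() {
    let owned = String::from("hello");
    assert_eq!(borrows(&owned), 5); // fine: only borrowed
    assert_eq!(consumes(owned), 5); // `owned` is moved away here
    // println!("{owned}");         // would be a compile error: value moved
}
```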


> when you pass a pointer to my function, do I take ownership of your pointer or not?

It's honestly frustrating how prevalent this is in C. The docs often don't even tell you, and if you guess that the function takes ownership, hand it a copy, and you were wrong, you've just leaked memory; if you guessed the other way, you now risk a double free, a use-after-free, or having the data mutated behind your back.


The specific use case the GNU maintainer listed followed this exact pattern.

I've flagged the post; the title is editorialized. The title on the blog post is "MiniMax M2.7: The Agentic Model That Helped Build Itself" (at least at the time of writing this).


"Helped build itself" is arguably also a stretch as noted in another comment.


In university? No, absolutely not straight away.

The point of a CS degree is to know the fundamentals of computing, not the latest best practices in programming that abstract the fundamentals.


My university also taught best practices alongside that, every time. I am very grateful for that.


Absolutely the wrong take. You can teach CS with just pencil and paper, but that doesn’t advance the technology, it might only benefit academia in a narrow sense. CS students should be actively engineering software in addition to doing science.


Why do you find it interesting that someone chose something mainstream? Isn't that the definition of mainstream, that it's a common choice?


Anecdotally, I'm a fan and generally pick the rarer ones. E.g. I had a pair of socks and a few stickers made for my kid featuring rarer Pokémon.

Since OP got a tattoo of Pikachu, it was interesting to me that he picked the most mainstream Pokémon (I assumed he was a huge fan).

But from the response, my sample of 1 assumption checks out: "When I was planning it was obvious there'd be a Pokémon due to his (continuing) interest in those and to me as a non-fan that's the most immediately-recognizable one."


In modern usage (e.g. in gaming communities) "carries" has become not only ambitransitive but also a noun.

If something "carries" or is "a carry", it means it is so strong it metaphorically carries the rest of the setup with it. For example:

> This card carries.

> These two are the carries of the team.


> N tokens looking at N tokens is quadratic

Convolving two arrays can be done perfectly accurately in O(n log n), despite every element being combined with every other element.

Or consider the even more basic sum of products a[i] * b[j] for all possible i, j:

    total = 0
    for i in range(len(a)):
        for j in range(len(b)):
            total += a[i] * b[j]

This can be computed in linear time as sum(a) * sum(b).

Your logic that 'the result contains terms of all pairs, therefore the algorithm must be quadratic' simply doesn't hold.
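The same equivalence is easy to check mechanically; a quick sketch (arbitrary sample values) showing the quadratic double loop and the factored linear form agree:

```rust
// Quadratic: every pair (i, j) is combined explicitly.
fn pairwise_sum(a: &[f64], b: &[f64]) -> f64 {
    let mut total = 0.0;
    for &x in a {
        for &y in b {
            total += x * y;
        }
    }
    total
}

// Linear: the same quantity computed as sum(a) * sum(b).
fn factored_sum(a: &[f64], b: &[f64]) -> f64 {
    a.iter().sum::<f64>() * b.iter().sum::<f64>()
}

fn main() {
    let (a, b) = ([1.0, 2.0, 3.0], [4.0, 5.0]);
    assert_eq!(pairwise_sum(&a, &b), factored_sum(&a, &b)); // both 54.0
}
```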


One of my favorite bits of my PhD dissertation was factoring an intractable 3-dimensional integral

\iiint f(x, y, z) dx dy dz = \int [\int g(x, y) dx]*[\int h(y, z) dz] dy

which greatly accelerated numerical integration (O(n^2) rather than O(n^3)).

My advisor was not particularly impressed and objectively I could have skipped it and let the simulations take a bit longer (quite a bit longer--this integration was done millions of times for different function parameters in an inner loop). But it was clever and all mine and I was proud of it.
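For readers wondering when such a factoring is possible: it works when the integrand separates as a product sharing only the middle variable. A sketch of the general pattern (not the dissertation's actual integrand):

```latex
% If f(x, y, z) = g(x, y) h(y, z), then for each fixed y the x- and
% z-integrals decouple:
\iiint g(x, y)\, h(y, z)\, dx\, dy\, dz
  = \int \left[ \int g(x, y)\, dx \right] \left[ \int h(y, z)\, dz \right] dy
```

With n quadrature points per axis, each of the n outer y-values then needs only two O(n) inner sums, giving O(n^2) overall instead of O(n^3) for the naive triple sum.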


That's like saying sorting can be done in O(n) because radix sort exists. If you assume some structure, you lose generality, i.e. there'll be some problems it's no longer able to solve. It can no longer approximate any arbitrary function that needs perfect memory over the sequence.


This brings me back to DSP class. Man, learning about the FFT was eye-opening.


Convolution is a local operation.

Attention is a global operation.


An interesting follow-up question is, what is the smallest number unable to be encoded in 64 bits of binary lambda calculus?


BLC can output any literal 60-bit string x as the 64-bit (delimited) program 0010 x, so in that sense it would be some 61-bit number. But if we ask about just lambda calculus terms without the binary input, then I think it would be some small number of at most 10 bits. BBλ looks at the normal form size, so it cannot even reach the numbers 0, 1, 2, 3, and 5.


Bloom filters also become full.

As it fills up, the false positive rate goes up. Once the false positive rate reaches the threshold of unacceptability, the Bloom filter is full, and you can no longer insert into it.

That most interfaces still let you do something that looks like an insert is an interface failure, not a Bloom filter feature.

If you find this controversial and want to reply "I don't have a threshold of unacceptability", I'll counter that a false positive rate of 100% will be reached eventually. And if you still find that acceptable, you can trivially modify any probabilistic filter to "never become full" by replacing the "is full" error condition with setting a flag that makes all future queries return a (false) positive.
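A sketch of the "fills up" claim, using the standard false-positive estimate (1 - e^(-kn/m))^k for a filter with m bits and k hash functions after n insertions (an approximation of the expected rate, not an exact figure):

```rust
// Estimated false positive rate of a Bloom filter with m bits and
// k hash functions after n insertions (standard approximation).
fn false_positive_rate(m: f64, k: f64, n: f64) -> f64 {
    (1.0 - (-k * n / m).exp()).powf(k)
}

fn main() {
    let (m, k) = (1024.0, 4.0);
    // The estimate climbs monotonically as items are inserted...
    let mut prev = 0.0;
    for i in 0..=10 {
        let fpr = false_positive_rate(m, k, (i * 100) as f64);
        assert!(fpr >= prev);
        prev = fpr;
    }
    // ...and approaches 100% in the limit, which is the comment's point.
    assert!(false_positive_rate(m, k, 1e9) > 0.999);
}
```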


Same author.

