Hacker News | kippinsula's comments

the other side of this is instructive too. we've sold into mid-market accounts, and the decision usually isn't 'is this better' but 'what happens to me if this breaks'. the incumbent's main feature isn't functionality, it's someone else's neck on the line if it goes wrong. afaik the winning move for a small SaaS is to get a champion inside who's willing to own that risk personally, and to make sure they look very good when it works.

we've been running Renovate with `minimumReleaseAge: '7 days'` across all our repos for a while now, which does basically the same thing across npm, PyPI, and Cargo in one config. the tradeoff is you're always 7 days behind on patches, but for anything touching CI or secrets tooling that feels like a fair deal. the nasty part of this class of attack is that the window is usually sub-24h before the malicious version gets pulled, so even 3 days would have caught this one.
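for reference, the shared config is tiny. a minimal renovate.json along these lines (base preset assumed, yours may differ):

    {
      "extends": ["config:recommended"],
      "minimumReleaseAge": "7 days"
    }

if you want a shorter window for low-risk deps, per-manager overrides can go in `packageRules`.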

the atomicity is the whole game. we burned time on a Postgres+SQS setup where the enqueue happened in a trigger that fired before the commit was visible to other connections. added retry logic, then polling on the worker side, then eventually moved the enqueue inside the transaction. at that point you're basically reinventing what Honker does, just with more moving parts. the 'notification sent, row not committed' class of bug is usually silent and timing-dependent, which makes it brutal to track down.
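for anyone who hasn't hit this: the fix is the classic transactional outbox, i.e. the 'enqueue' is just a row written in the same transaction as the data it refers to. a minimal sketch, assuming psycopg2 and illustrative table names (not Honker's actual API):

    # the outbox row commits or rolls back together with the order row,
    # so a worker can never see a job whose data isn't visible yet
    import json
    import psycopg2

    conn = psycopg2.connect("dbname=app")

    with conn:  # commits on success, rolls back on exception
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (customer_id, total) VALUES (%s, %s) RETURNING id",
                (42, 99.50),
            )
            order_id = cur.fetchone()[0]
            # enqueue inside the same transaction
            cur.execute(
                "INSERT INTO outbox (topic, payload) VALUES (%s, %s)",
                ("order.created", json.dumps({"order_id": order_id})),
            )

a separate worker then polls the outbox and forwards to SQS, deleting rows only after a successful publish.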

the business answer is boring: you don't sit on a browser zero-day that your own product depends on. if it leaks from somewhere else, the blog post writes itself and the trust you've built with every privacy researcher and enterprise buyer evaporates. honestly the hiring-page line alone, 'we found and reported X to Mozilla', is probably worth more than the fingerprinting edge they'd keep.

reproducible images are one of those features where the payoff is mostly emotional until the day it isn't. we had an incident where two supposedly identical images on two machines had a three-byte delta in a timestamp and it cost us an afternoon to bisect from the wrong end. boring win, but a real one.

How did a differing timestamp cause an incident in the first place? Curious.

My guess is it was the only obvious evidence of an attack.

Gill probably already knows this but for the uninitiated: something logged in, did a thing to potentially every container, and then deleted any sign of it doing the thing.

all that's left is a single timestamp of a log or something getting deleted


It's an interesting concept but unfortunately I think the comment is actually AI slop so there's no real story behind it. Check the account history.

we've done both. Hetzner dedicated was genuinely fine, until a disk started throwing SMART warnings on a Sunday morning and we remembered why we pay 10x elsewhere for some things. probably less about the raw cost and more about which weekends you want back.

Isn't this the nature of every dedicated server? You also take on the hardware management burden - that's why they can be insanely cheap.

But there is a middle ground in the form of a VPS, where the hardware is managed by the provider. It's still way, way cheaper than some cloud magic service.


A VPS comes at the cost of potential oversubscription - even from more reputable vendors. You never really know if you're actually getting what you're paying for.

They also offer dedicated VPS with guaranteed resource allocation.

Well, you gotta take all that into consideration before your build-out.

You can use block storage if data matters to you.

Many services do not need to care about data reliability, or can use multiple nodes, network storage, or other HA setups.


our rule for the last couple of projects has been: if the PR description doesn't explain why, it doesn't merge. code comments explaining why tend to rot, but PR descriptions are timestamped and tied to the diff forever. not perfect, but it's saved us more than a few times when someone asks 'why is this like this' three years later.
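if you want to enforce it mechanically, a rough sketch of a GitHub Actions gate (hypothetical workflow; the 'Why' heading is just a convention, not necessarily ours):

    name: require-why
    on:
      pull_request:
        types: [opened, edited, reopened, synchronize]
    jobs:
      check-description:
        runs-on: ubuntu-latest
        steps:
          - name: PR body must explain why
            env:
              BODY: ${{ github.event.pull_request.body }}
            run: |
              if ! printf '%s' "$BODY" | grep -qi '## why'; then
                echo "PR description needs a 'Why' section." >&2
                exit 1
              fi

passing the body through an env var instead of interpolating it into the script keeps PR authors from injecting shell.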

ran into this flavor once with a different tool, not gh. our deploy job was consistently about 8s longer than it should've been; turned out a fire-and-forget telemetry POST wasn't actually fire-and-forget when the endpoint got slow. NO_PROXY plus blackholing the host fixed it, but it's probably the kind of thing you shouldn't have to find via a flame graph.
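the workaround, roughly (hostname illustrative; the real one is whatever your tool phones home to):

    # exempt the host from any proxy that might stall the request
    export NO_PROXY="telemetry.example.com,$NO_PROXY"

    # blackhole it so connections fail fast instead of hanging
    echo "0.0.0.0 telemetry.example.com" | sudo tee -a /etc/hosts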

yeah, updates are where it falls over for us. inserts were fine, reads were great, but any workflow that needed to correct a small slice of rows after the fact got painful fast. we ended up keeping the row store for the hot path and rebuilding the columnar copy overnight. probably not elegant, but it stopped the bleeding.
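the rebuild itself was nothing fancy, roughly this shape (names illustrative, Postgres-flavored SQL; the actual columnar storage clause depends on your engine):

    -- build a fresh copy from the hot row store
    CREATE TABLE events_col_new AS
      SELECT * FROM events_row;

    -- swap it in; wrapping the renames in a transaction keeps readers happy
    BEGIN;
    ALTER TABLE events_col RENAME TO events_col_old;
    ALTER TABLE events_col_new RENAME TO events_col;
    COMMIT;

    DROP TABLE events_col_old;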

every time something like this surfaces I'm reminded how many privacy guarantees end at the app boundary. you can do all the e2e crypto you want, the OS layer is going to do whatever it does with your strings once they hit a render path. probably an unsolvable category of bug as long as notifications need to show readable text somewhere.

> probably an unsolvable category of bug as long as notifications need to show readable text somewhere.

Let screens always show garbled pixel vomit, decoded on device only by your private AR glasses


threat model just shifts to whoever has a camera pointed at your face, but probably still an improvement.

If you want security through obscurity you can revert to IPoAC (RFC 1149).

Speech-capable avians can spontaneously leak secrets.
