Hacker News | past | comments | ask | show | jobs | submit | afgrant's comments

Does Postgres not have online backup built in? All of the other major DBMSes do.

See the documentation: https://www.postgresql.org/docs/current/backup.html

All of these various third-party backup tools build on those interfaces. Mostly it's quality-of-life features that you get from a third-party tool. We use Barman, very happily: https://pgbarman.org/


Hopefully Barman has some longevity under EDB, assuming some hyperscaler doesn't gobble them up.

Barman has been around since 2011, released under the GPL. If it does get ruined by someone, I'll replace it in my stack or fork it for maintenance myself. I'm not very worried though.

EDB has been in private equity (PE) hands since 2019 without the owners managing to ruin it so far. The PE ownership seems to be pretty stable, so it doesn't look like the typical pump-and-dump mess you often see from crappy PE money.


Postgres is very "unix-y" in that everything is a separate tool. It has backup interfaces and commands but doesn't ship with a comprehensive backup management solution.

It’s the same as trying to tell everyone to stop buying gasoline for a day.


There are worse things, like switching off lights and not using appliances for an "Earth Hour" while forcing the power grid to frantically shed the excess capacity.

If you want to make a difference, do something small, but do it every day.


Once a week would be a good start, like practicing Jews do.


Yeah, I don't like these gestures that are tied to a specific day. It's blue tribe signaling with no real sacrifice necessary.

I believe the "minimalism" movement has been far more impactful in the long run despite having fewer adherents, simply by being a trend whose messaging capitalized on the benefits. Those who optimize for minimalism consume less, but do so constantly.


My phone is always with me when I drive and has a data connection that a car can tether to.


Everyone: Please stop using ‘#’ as an href placeholder. Don’t make your ‘<a>’ elements rely on events. If you want to replace their events with others, go for it. But please make them work at the base level.


If your MacBook battery is recent and lasting for less than 8 hours, you’re doing something very unusual.


When the encoding relies on differentiating between lowercase and capitalized characters, you’re hitting a high level of ambiguity. Hexadecimal UUIDs don’t have this problem.
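For illustration, hex decoding doesn't care about case, so a transcription that flips capitalization still parses to the same value. A quick standard-library sketch (the example UUID here is arbitrary):

```python
import uuid

# Hex treats 'A'..'F' and 'a'..'f' identically, so case errors
# introduced by reading or retyping an id are harmless.
a = uuid.UUID("6FA459EA-EE8A-3CA4-894E-DB77E160355E")
b = uuid.UUID("6fa459ea-ee8a-3ca4-894e-db77e160355e")
assert a == b
```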


128 bits versus 64 bits is a deal breaker?


Yes. Even 64 bits is too many, really: at 6 bits per human-recognizable token, that's more than 10 tokens, which is beyond what almost any human can keep in working memory. You can't even hold it in your head long enough to type it into a different window. 128 bits is completely beyond that; when confronted with a 128-bit value like a UUID, people just give up. Seriously, try actually typing in a UUID sometime. Even with the grouping it's incredibly difficult. Somewhere between 64 and 128 bits you discover that you absolutely need an automated way to transmit the data, and then how do you get it to that airgapped computer?
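The arithmetic above is easy to check (a sketch; the 6-bits-per-token figure is the comment's own assumption, roughly one base64-style character):

```python
import math

BITS_PER_TOKEN = 6  # one base64-style character carries 6 bits

for bits in (64, 128):
    tokens = math.ceil(bits / BITS_PER_TOKEN)
    print(f"{bits}-bit id -> {tokens} characters to read, remember, and retype")
# 64-bit id -> 11 characters; 128-bit id -> 22 characters
```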


The more important requirement is that the id should not be guessable and not easily brute-forced (example: YouTube private video listings). Now, what's the minimum entropy required to make it unguessable while keeping it short enough for people to tell the id over the phone or something?
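As a rough sketch of that trade-off (the 13-character length and the ambiguity-free alphabet here are assumptions for illustration, not a recommendation from the thread):

```python
import secrets

# Crockford-style alphabet: 32 symbols, no 'i', 'l', 'o', or 'u', so ids
# survive being read out loud or handwritten.
ALPHABET = "0123456789abcdefghjkmnpqrstvwxyz"  # 32 symbols = 5 bits each

def short_id(n_chars: int = 13) -> str:
    # 13 chars * 5 bits = 65 bits of entropy: far too large a space to
    # brute-force online, yet still short enough to say over the phone.
    return "".join(secrets.choice(ALPHABET) for _ in range(n_chars))

print(short_id())
```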


It’s just a 128-bit value that’s hexadecimal encoded. What can be controversial about that?


The hazard of base64 is its use of the '+', '/', and '=' characters, which most HTTP stacks treat as special.


Luckily the spec provides for a url safe variant (sometimes called base64url): https://datatracker.ietf.org/doc/html/rfc4648#section-5
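A quick standard-library illustration of the difference between the two variants:

```python
import base64
import secrets

raw = secrets.token_bytes(16)  # e.g. the 16 bytes of a UUID

std = base64.b64encode(raw).decode()                       # may contain '+' and '/'
url = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()  # RFC 4648 §5: '-' and '_'

# The url-safe form, with padding stripped, drops into a URL unescaped.
assert "+" not in url and "/" not in url and "=" not in url
print(std, url)
```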


It exists, but I wouldn’t call that “lucky”, or a selling point.


Are there modern databases that can’t readily index on a 128-bit value?


I don't think it's a matter of not being able to index on it, but that mostly random UUIDs (like v4) can lead to some interesting index fragmentation you have to stay on top of somehow.


If you use sequential ids, the IDs most in use are the higher ones. So your index is “hot at the top” and you can keep that part in memory.

This is only a problem with extremely big indexes.
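A toy model of why the insert pattern matters: real B-trees split pages rather than shifting a Python list, but the distribution of insert positions is the point (a sketch, not a benchmark):

```python
import bisect
import uuid

def insert_positions(keys):
    # Where in the sorted "index" does each new key land?
    index, positions = [], []
    for k in keys:
        pos = bisect.bisect(index, k)
        index.insert(pos, k)
        positions.append(pos)
    return positions

seq = insert_positions(range(1000))                            # sequential ids
rnd = insert_positions(uuid.uuid4().hex for _ in range(1000))  # random UUIDs

print("sequential: always at the end ->", seq[-5:])
print("random: scattered across the index ->", rnd[-5:])
```

Sequential keys only ever touch the rightmost leaf of the index; random UUIDs dirty pages across the whole tree.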


Sort orders are complicated for UUIDs because of the interesting defined structure to them, and of course endian issues.

One fun and useful reference: https://devblogs.microsoft.com/oldnewthing/20190426-00/?p=10...

It's particularly interesting that Microsoft SQL Server was designed to optimize indexing for UUIDv1 machine IDs. That makes a certain amount of sense for a database cluster if IDs are sorted by machine. Of course, developers don't let developers use UUIDv1 in 2023 because those machine IDs are not secure in a general sense and can be a privacy/data leak in the worst cases.

Other databases sort/index UUIDs differently. There's no real "standard" and optimizing the storage of a UUID key is a game of playing to the strengths of your specific database.

(On one project I put some work into matching the much-better-defined ULID sort order to MS SQL Server uniqueidentifier columns for better database locality.)


Random ids require random inserts and that is bad for performance. It isn't a deal-breaker in most situations but it is a real cost that you do pay.


You already got multiple answers explaining the performance issue.

Now, in many typical applications, you'd have to scale up quite significantly before this becomes a problem.

But if your application requires you to store small, new entries very quickly, you'll start to notice this even with moderate scale. Disk persistence is often the bottleneck already, and this might make it worse.

