SCdF's comments | Hacker News

Yes, many times. Roughly once a week this year my team or an associated team can't ship changes because PRs, GitHub Actions, or some other associated mechanism is down.

Not to be glib, but being dead lowers your night time heart rate more than exercise as well.

Is having a lower night time heart rate the core goal of exercise? Is it even a goal at all? Or is it just an indicator of other goals being reached? I'm genuinely curious; I wasn't aware the number itself mattered, as opposed to what it actually represents.


No, immediately lowering heart rate isn't a goal of exercise. The reason it's a meaningful measure at all is because a lower resting heart rate, not overnight in response to a stimulus, but a permanently lower resting heart rate, is a sign that your overall cardiorespiratory system has become more efficient in terms of how much blood it can deliver per beat, how much oxygen it can deliver per unit of blood, and how much energy can be generated per unit of oxygen in your mitochondria. When those efficiencies improve, fewer beats per minute results in the same level of work done in your cells. Thus, resting heart rate acts as a proxy measure of aerobic fitness, not a goal in and of itself. All of those are long-term adaptations.

Conversely, there are many ways to acutely lower heart rate that are clearly not healthy. Death, obviously, but taking opioids or many other kinds of depressants, not moving ever, sleeping 23 hours a day, will all lower your average heart rate immediately without making you fitter or healthier.

The goal is improving cardiovascular fitness, and a low heart rate means your cardiovascular system is operating efficiently.

It's just another measure. It's not ceteris paribus better to have a lower one.

From the author, "Strongest hypothesis: elevated parasympathetic tone from the post-sauna cooling phase carries into sleep"

AKA, they use it as a proxy to infer a deeper state of rest and improved recovery state. Says nothing about the fatigue generated from using a sauna.


Another way of phrasing this though, is that it's in the team's power to determine process (or the lack thereof).

Regardless of success or failure you can say to what degree this is true, and to me this is really the only part of "agile" that is worth locking in.


It's in the team's power to determine a working process, but it's the scrum master's responsibility.

If you use scrum, and if you have a scrum master. You categorically don't need to do either of those things though.

> Another way of phrasing this though, is that it's in the team's power to determine process (or the lack thereof).

Which they almost never have.


Ironically the only place I encounter this is using google news, where news sites seem to detect you're in google news (I don't think these same sites do it when I'm just browsing normally?), and try to upsell you their other stories before you go back to the main page.


After mucking around with various easy-to-use options, my lack of trust[1] pushed me into a more-complicated-but-at-least-under-my-control option: syncthing + restic + an s3-compatible cloud provider.

Basically it works like this:

- I have syncthing moving files between all my devices. The larger the device, the more stuff I move there[2]. My phone only has my keepass file and a few other docs, my gaming PC has that plus all of my photos and music, etc.

- All of this ends up on a raspberry pi with a connected USB harddrive, which has everything on it. Why yes, that is very shoddy and short term! The pi is mirrored on my gaming PC though, which is awake once every day or two, so if it completely breaks I still have everything locally.

- Nightly a restic job runs, which backs up everything on the pi to an s3 compatible cloud[3], and cleans out old snapshots (30 days, 52 weeks, 60 months, then yearly)

- Yearly I test restoring a random backup, both on the pi, and on another device, to make sure there is no required knowledge stuck on there.

This was somewhat of a pain to set up, but since the pi is never off it just ticks along, and I check it periodically to make sure nothing has broken.
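
Roughly, the nightly job has this shape (bucket name, paths, and the password file are placeholders; the keep flags mirror the retention above, with a big --keep-yearly standing in for "then yearly"):

    #!/usr/bin/env bash
    # Nightly restic run on the pi -- a minimal sketch, not the exact script
    set -euo pipefail

    export RESTIC_REPOSITORY="s3:s3.wasabisys.com/my-backup-bucket"   # placeholder bucket
    export RESTIC_PASSWORD_FILE="/home/pi/.restic-password"           # placeholder path

    # Back up everything syncthing has landed on the attached drive
    restic backup /mnt/usb/synced

    # Prune old snapshots: 30 daily, 52 weekly, 60 monthly, then yearly
    restic forget --prune \
      --keep-daily 30 --keep-weekly 52 --keep-monthly 60 --keep-yearly 100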

[1] there is always weirdness with these tools. They don't sync how you think, or when you actually want to restore it takes forever, or they are stuck in perpetual sync cycles

[2] I sync multiple directories, broadly "very small", "small", "dumping ground", and "media", from smallest to largest.

[3] Currently Wasabi, but it really doesn't matter. Restic encrypts client side, you just need to trust the provider enough that they don't completely collapse at the same time that you need backups.


I also have a lil script that rolls the dice on a restic snapshot, then lists its files and picks a random set to restore to /dev/null.

I still trust that restic checksums will actually check whether a restore is correct, but this way a random part of the storage gets tested every so often, in case some old pack file has been damaged.


That's a really good idea, imma steal it.
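
Something like this, presumably (a guess at the shape rather than the actual script; assumes jq and shuf, plus the usual RESTIC_REPOSITORY / RESTIC_PASSWORD_FILE env vars):

    #!/usr/bin/env bash
    set -euo pipefail

    # Pick one snapshot at random
    snap=$(restic snapshots --json | jq -r '.[].short_id' | shuf -n 1)

    # Pick a handful of regular files from it and force a full read of each,
    # discarding the bytes -- the dump only succeeds if the pack data is intact
    restic ls --json "$snap" | jq -r 'select(.type == "file") | .path' | shuf -n 5 |
      while read -r path; do
        echo "spot-checking $snap:$path"
        restic dump "$snap" "$path" > /dev/null
      done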


We need to talk about The Cone of Backups(tm), which you and I seem to have separately derived!

Props for getting this implemented and seemingly trusted... I wish there was an easier way to handle some of this stuff (eg: tiny secure key material => hot syncthing => "live" git files => warm docs and photos => cold bulk movies, isos, etc)... along with selective "on demand pass through browse/fetch/cache"

They all have different policy, size, cost, technical details, and overall SLA/quality tradeoffs.


Does syncthing work yet?

~ 5 years ago, I had a development flow that involved a large source tree (1-10K files, including build output) that was syncthing-ed over a residential network connection to some k8s stuff.

Desyncs/corruptions happened constantly, even though it was a one-way send.

I've never had similar issues with rsync or unison (well, I have in unison, but that's two-way sync, and it always prompted to ask for help by design).

Anyway, my decade-old synology is dying, so I'm setting up a replacement. For other reasons (mostly a decade of systemd / pulse audio finding novel ways to ruin my day, and not really understanding how to restore my synology backups), I've jumped ship over to FreeBSD. I've heard good things about using zfs to get:

sanoid + syncoid -> zfs send -> zfs recv -> restic

In the absence of ZFS, I'd do:

rsync -> restic

Or:

unison <-> unison -> restic.

So, similar to what you've landed on, but with one size tier. I have docker containers that the phone talks to for stuff like calendars, and just have the source of the backup flow host my git repos.
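
For what it's worth, the ZFS leg of that first flow looks roughly like this (dataset names and the backup host are placeholders; sanoid drives the snapshot schedule from its config, and syncoid wraps the send/recv):

    # Scheduled replication from the source box to the backup host
    syncoid tank/data root@backupbox:backup/data

    # ...which is roughly an incremental send/recv done for you:
    zfs send -i tank/data@prev tank/data@latest | ssh root@backupbox zfs recv backup/data

    # restic then runs over the received dataset's mountpoint on the backup host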

One thing to do no matter what:

Write at least 100,000 files to the source then restore from backup (/ on a linux VM is great for this). Run rsync in dry run / checksum mode on the two trees. Confirm the metadata + contents match on both sides. I haven't gotten around to this yet with the flow I just proposed. Almost all consumer backup tools fail this test. Comments here suggest backblaze's consumer offering fails it badly. I'm using B2, but I haven't scrubbed my backup sets in a while. I get the impression it has much higher consistency / durability.
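
Concretely, the comparison step is something like this (paths are placeholders; no output means the two trees match):

    # Dry-run (-n), checksum-based (-c) comparison; -a covers metadata,
    # -i itemizes differences, --delete flags files that only exist in the restore
    rsync -anci --delete /original/tree/ /restored/tree/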


I've personally had no major issues with syncthing; it just works in the background. The largest folder I have synced is ~6TB and 200k files, which mirrors a backup I have on a large external.

One particular issue I've encountered is that syncthing 2.x does not work well for systems w/o an SSD, due to the storage backend switching to sqlite, which doesn't perform as well as leveldb on HDDs; scans of the 6TB folder were taking an excessively long time to complete compared to 1.x using leveldb. I haven't encountered any issues with mixing the use of 1.x and 2.x in my setup. The only other issues I've encountered are usually related to filename incompatibilities between filesystems.


I will say I specifically don't sync git repos (they are just local and pushed to github, which I consider good enough for now), and I am aware that syncthing is one more of those tools that does not work well with git.

syncthing is not perfect, and can get into weird states if you add and remove devices from it for example, but for my case it is I think the best option.


Anecdotally, I've been managing a Syncthing network with a file count in the ~200k range, everything synced bidirectionally across a few dozen (Windows) computers, for 9 years now; I've never seen data loss where Syncthing was at fault.


Good to know. I wonder what the difference is. We were doing things like running go build inside the source directory. Maybe it can't handle write races well on Linux/MacOS?


This is very silly. It's just a combination of that Derren Brown astrology experiment [1] and madlibs.

[1] https://www.youtube.com/watch?v=haP7Ys9ocTk


If you're interested, Community Fibre is a yes from this website


The US didn't refill its own strategic oil reserve before it attacked and raised its own oil prices, there is no foreseeable exit strategy where Iran doesn't now effectively own and charge usage for the strait, and Russia (and Iran, but I digress) are now more able to sell their oil than before, bolstering their economies and helping them continue to attack Ukraine.


Yeah, I couldn't be bothered getting my accurate chest strap out, but my watch (which is generally very close to the strap) was anywhere from 10-20 off what it was reporting. This is sitting down, 30min after a run.


tbf they have been saying they started doing this in December, so we're only a few months in. And like most software it's an iceberg: 99% of the work is not observable by users, and in spotify's case listeners are only one of presumably dozens of different kinds of users. For all we know they are shipping massive improvements to eg billing

