
I'm not sure why this got flagged... I wonder if someone misunderstood what "My hope is some defense tech drone stuff blows up..." meant.

As for

> ...remote work hollowed it out...

there's a simple solution: make it so that ordinary folks can actually afford to raise a family in the city. If folks can afford to raise a family in your city, then they're there for more than just the big paycheck, and won't run screaming the moment their well-paying job stops chaining them to an absurdly expensive city.


> My hunch is the people are competing for housing are mostly new AI sector transplants...

I've absolutely nothing against transplants [0], but transplants that treat this place as the "mining town" that the major landowners and Supervisors have made it to be definitely suck the life and the weird out of the city.

There may well be a bunch of people driving up rents for the city's criminally-scarce -and frequently substandard- housing, but that doesn't mean that the city's not in deep shit. I roam around the city a whole lot on foot, and I see so, so many shuttered businesses and empty storefronts... even in places that were going gangbusters ten, fifteen years ago. The only places that seem to be doing quite well are the places that serve the poorest in the city -such as much of the Tenderloin- [1], which are places that these new transplants would probably never, ever set foot in.

[0] I'm sure that you don't, either... not in general, anyway.

[1] I can't explain why these places are still doing great. Perhaps it's because the landowners and business managers understand that there's absolutely no way that they can get anything other than a perfectly normal rate of return from these properties... so the batshit insane stuff we see in the fancier parts of town that keeps commercial spaces in fine locations empty for ten+ years and forces out healthy businesses that have been in their space for decades simply doesn't happen? [2]

[2] For folks who want to retort that I simply don't understand how any of this works: Remember that California has Property Tax Control (by way of 1978's "Proposition 13"), which means that a property's assessed value -and thus the landowner's property tax- can increase by only a very small percentage each year, rather than tracking the property's current market value. What this means is that as long as a property does not change hands, the property tax paid by that landowner pretty much never meaningfully increases... and there are ways to redirect the profits from -and effective control of- that property to a new human or corporate entity while ensuring that the property never legally changes hands.


I'd say that coverage is very, very substantial, but incomplete because some games use anti-cheat that is either extremely invasive and heavily relies on Windows internals, or is anti-cheat that the devs have configured to reject running in Proton.

Yes, it's very good. However, basically-every-current-multiplayer-shooter is a big missing category.

BTW as someone increasingly fed up with W11 and thus feeling homeless: how well does VR work?

> basically-every-current-multiplayer-shooter is a big missing category.

Weird. I've been playing many multiplayer shooters via Proton with my Windows-using friends. I suppose this is one of those "am I friends with people who pretty much only play CoD or Fortnite?" things.


You've been able to get Intel X520 NICs [0] -with transceivers included- for ~40 USD on Newegg for a long time. This is a little more than double the price of Newegg's cheapest single-port 10/100/1000 copper card, but even the cheapest available such card is three times your "chicken and egg"-solving price point.

I suspect that the combination of the absence of cheap-o all-in-one AP/router combo boxes with any SFP+ cages and fiber cabling's reputation for being extremely fragile has much more to do with its scarcity at the extremely low end of networking gear than anything else.

[0] This is a two-port SFP+ PCI Express card


You can get copper ones for $5.99 (quality may vary):

https://www.amazon.com/1000Mbps-Network-Performance-Gigabit-...

https://www.amazon.com/SALAN-Ethernet-Portable-Internet-Conv...

But it's not competing with those, it's competing with the copper port which is already built into most devices.

Another thing that would work is something like this (also $5.99), but with one of the ports as fibre:

https://www.amazon.com/Gigabit-Ethernet-Splitter-1000Mbps-In...

The point being you need some cheap way to plug in existing copper devices if you run fibre to the endpoints.

This plus $5 for a transceiver is pretty close at $15:

https://www.amazon.com/Gigabit-Ethernet-Converter-Auto-Negot...

But +$15 and an extra wall outlet per endpoint is still an inconvenience, and if a two-port device with its own power supply can be made for $15 then where is the PCIe/USB to fibre adapter for <$10?


> (quality may vary):

Yep. Good NICs last for approximately forever, life's way too short to deal with maybe-flaky NICs, and the price difference between the Amazon Special and something that's going to be reliable is -what- two big boxes of Cheerios? Two dozen eggs? Not. Worth it.

> But it's not competing with those, it's competing with the copper port which is already built into most devices.

Correct! That's part of why I was so very surprised to see you suggesting that extremely cheap PCI Express and USB adapters would "solve the chicken and egg problem".

> The point being you need some cheap way to plug in existing copper devices if you run fibre to the endpoints.

That's called a multi-port switch. Netgear sells five-port gigabit ones for like 20 USD. Switches that have two SFP+ cages and eight copper gigabit ports [0] are six times the price of a cheap-o Netgear switch, but are something that's going to last at least a decade. It's also pretty uncommon to find SOHO switches that have SFP+ cages and don't have at least one fixed copper port.

> This plus $5 for a transceiver is pretty close at $15:

If you're connecting a single device, why the hell would you use that when you could slap a copper SFP or SFP+ module in the switch's cage and run a cable? If you're connecting multiple devices, then either install multiple copper modules and run multiple cables, run multiple copper cables from fixed copper ports on the switch, or put a switch where the existing copper devices are.

[0] <https://mikrotik.com/product/css610_8g_2s_in>


> If you're connecting a single device, why the hell would you use that when you could slap a copper SFP or SFP+ module in the switch's cage and run a cable?

The problem to be solved is that you want to be able to put fibre inside the walls of the building instead of copper. Running a new cable to the switch closet is the thing to be prevented.

But if the wall jacks are fibre then you need some economical way of hooking them up to every printer and single-purpose device with a network port. If you have to buy another $100+ switch just to get from fibre to copper even when there is only one device near that jack, people aren't going to go for that.


Running a new cable is easy. You just use the old cable to pull the new cable. You can run composite cable if you desire copper, fiber and power.

Which is why people run only copper because that costs less than running multiple types of cable everywhere when most drops only have one device, and then pull fibre through using the existing copper cable in the rare instances where they find a need for 40Gbps or more.

But then the copper gets used for 10Gbps connections instead of fibre because it's what's already in the building.


> You can run composite cable if you desire copper, fiber and power.

Oooh. Cool.

By "power" do you mean 120/240VAC, or do you mean much lower voltage DC? I've found some Belden cabling that I think provides mains power and Ethernet, and I've found fiber cabling that I guess carries lower voltage DC, but am having a tough time finding a cable that combines fiber and copper data with mains power. Do you have an example of such a cable handy?

(Full disclosure: I'm refusing to spend more than like five minutes on the search... so I might have been able to dig up examples of such a cable.)


> The problem to be solved is that you want to be able to put fibre inside the walls of the building instead of copper. Running a new cable to the switch closet is the thing to be prevented.

...why would you ever not run copper alongside fiber for new construction? If nothing else, PoE is extremely useful, and nothing says that you actually have to connect all of that copper cable to your switch... you can connect it as-needed. I also can't imagine that most refits only have room for exactly one cable in their conduit. [0]

I'd expect to hear the sort of plan you propose from a PHB or Highly Paid Consultant, not someone who actually has had to use that sort of configuration.

Regardless, the scenario you're now proposing is one where no one other than a PHB would use that Amazon Special that you linked for media conversion.

[0] If there's no conduit and cables are all flopping around in the wall, then there's even more room for cabling.


The original problem was that everyone runs copper instead of fibre because there are too many existing devices that only have copper. Running both everywhere would require you to buy and terminate twice as much cable as you expect to use, which leads people to running only copper again.

If you choose PCs to begin with that come with fibre ethernet, or put quality cards in the ones that matter, then you could make fibre the default instead of copper. But then you have a number of devices like printers or VoIP phones or Raspberry Pis that have no need for 10Gbps or even 1Gbps connectivity; they just need a way to be plugged in at all. If you need to add $100+ in conversion expense to each of those devices, you're back to using copper by default.


> Running both everywhere would require you to buy and terminate twice as much cable as you expect to use...

Ah. Let's play with that logic a bit:

"Running Ethernet cabling everywhere would require you to buy and terminate far more than twice as much cable as you expect to use. Just run power cables and wire up one extra outlet for a HomePlug in each room."

Yeah, that checks out. "Powerline Ethernet" devices are actually pretty good these days, and are right around your magic price range... Amazon has them at ~13 USD per unit. [0] Why would anyone bother running a second cable to each room? Thirteen bucks per room has to be way less than the materials and labor cost for the cable run. Doing anything else is, like, really stupid. Don't you agree?

Anyway. You expect to use the cabling that you plan to install... plus some extra for screwups, man.

[0] <https://www.amazon.com/Linksys-PLEK500-Homeplug-AV2-Powerlin...>


> It would surely sell more if people would actually explain what the game is, without using niche words like "sokoban".

Sure, sure. Here's the annotated second paragraph from TFA:

  When game designer <Stephen Lavelle> [0] (Increpare Games) released Stephen's Sausage Roll back in April 2016, it was accompanied by <a trailer> [1] that showed almost nothing about the game, yet word still spread quickly. Puzzle developers and fans praised the game for its impeccable design, teasing out layers of deep puzzling and mind-expanding discoveries from so few puzzle elements. It was also renowned for its uncompromising, yet always fair, difficulty curve, with immensely challenging puzzles from the very start. These sentiments are still held to this day, as this beloved sausage-pushing sokoban continues to influence new generations of puzzle developers, <inspiring some of the best sokoban games> [3] ever made and introducing "Sausage-likes” to the puzzle vernacular.
Clicking link [3] leads us to a page that has this as its first paragraph:

  Sokoban games, also known as block-pushing or box-pushing games, are turn-based puzzle games in which you control a character pushing or moving objects around on a grid. The genre has origins in the 1982 game Sokoban, designed by Hiroyuki Imabayashi, in which you have to push boxes around a warehouse onto designated targets. The japanese word 倉庫番 (“sōkoban”) translates to “warehouse keeper”.
Incidentally, link [3] is repeated in the final paragraph of TFA, which I will also copy and annotate:

  Learn more about Stephen's Sausage Roll in <our database of thinky games> [4], where you can also find <similar games> [5] and some of the <best sokoban games ever made>. [3]
Link [4] is pretty clear about what the game is. Did you bother to click it in order to "[l]earn more about Stephen's Sausage Roll", as it invites you to do?

[0] <https://increpare.com/>

[1] <https://www.youtube.com/watch?v=lCNqYLGwqxU>

[3] <https://thinkygames.com/lists/best-sokoban-games/>

[4] <https://thinkygames.com/games/stephens-sausage-roll/>

[5] <https://thinkygames.com/games/stephens-sausage-roll/similar/>


That's true of any company in any country. If you can convince the government that your company is sufficiently important, you can get subsidized.

My point was that the AI companies in China have already convinced the government to subsidize them.

Tax breaks, fee reductions/waivers, direct monetary incentives, and shielding from "unfavorable" regulation -whether local, state, or national- are all subsidies. Hell, depending on the particulars, government contracts can be subsidies... there's more than one government engineering project out there that could be reasonably referred to as a "jobs program for PhDs", and still more that are corporate handouts.

Every business believed by its "home" government to be sufficiently important gets subsidies when it asks for them... regardless of what nation houses that government. If your claim is that the major players in the "AI" industry aren't getting subsidies from local, state, and national governments in the US, then my claim is that you are lying.


Good for them.

> Apparently i don't understand pain...

Speaking as someone who is not-infrequently in significant pain, I sincerely hope that you never have to.


> Other than ... lack of DNS integration...

I'm confused. What do you mean by this? Does dnsmasq not put the names of DHCPv6 clients into its hostname database? If ISC DHCPd is commanded to update DNS, does it only update for DHCP clients and not DHCPv6 clients?
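
For reference, the behavior I'd expect to Just Work is only a couple of lines of dnsmasq config. A sketch (the interface name, domain, and address range below are made up):

  # dnsmasq.conf sketch ('br0', 'lan', and the range are made up)
  domain=lan
  enable-ra                                   # have dnsmasq send Router Advertisements
  dhcp-range=::100,::1ff,constructor:br0,12h  # stateful DHCPv6 on br0's prefix
  # hostnames sent by DHCPv6 clients should then resolve as <name>.lan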


They probably mean that when using SLAAC - I guess the easiest way to get ipv6 connectivity - there is no equivalent to the way you can update DNS the way it would work with DHCPv4 or DHCPv6.

You pointed out one way - just use DHCPv6 - but that loses some of the nice SLAAC properties.

A different way is to run mdns and let the devices announce their own hostnames.local.

Different tradeoffs, but in practice not too difficult to get to work.

I guess one could even do both...


> You pointed out one way - just use DHCPv6 - but that loses some of the nice SLAAC properties.

Android refuses to implement DHCPv6. So (if you have any Android devices in play) at best you can use DHCPv6 for some of your devices while still needing to also have SLAAC. And yes, mDNS might work, but that's another service (or two, right? One to resolve other devices, another to advertise this device) to run on every device, and you'd better hope that every device can run the needed services. Which... actually brings us back to Android; AFAICT, Android can resolve mDNS but doesn't show up itself. As someone who can and does SSH to my phone (termux), this is kind of a sticking point.


> Android refuses to implement DHCPv6.

You should read [0]. It's... pretty amazing.

[0] <https://android-developers.googleblog.com/2025/09/simplifyin...>


> They probably mean that when using SLAAC...

If that's the case, then you've got to think of SLAAC as operating exactly like IPv4 address autoconfiguration (sometimes called "IPv4LL")... except that you usually get globally-routable IP addresses out of it.

If you want the management niceties that you often get when using DHCP, then you have to use DHCP.

Some very loud purists might say "SLAAC is the only way to use IPv6!". I completely ignore the convenience of LAN-side prefix delegation and say two things:

1) "Good luck with telling your IPv6 clients about things like your preferred NTP server."

2) "For ages, Router Advertisements have had entirely independent 'autoconfigure your addresses', 'use stateful configuration for your "other" configuration' [0], and 'use stateful configuration for your addresses' bits. It's legal to have any number of them enabled. This is a deliberate choice by the folks defining IPv6."
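
To make those bits concrete, here's roughly how they're spelled in a radvd config. A sketch (the interface name and prefix are made up):

  # /etc/radvd.conf sketch ('eth0' and the 2001:db8 prefix are made up)
  interface eth0 {
      AdvSendAdvert on;
      AdvManagedFlag on;        # M bit: get addresses via stateful DHCPv6
      AdvOtherConfigFlag on;    # O bit: get "other" config (DNS, NTP, ...) via DHCPv6
      prefix 2001:db8:1::/64 {
          AdvAutonomous on;     # A bit: SLAAC is also permitted on this prefix
      };
  };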

In general, the folks who scream about how IPv6 NAT and DHCPv6 should not exist and should never be used should be ignored... at least about that topic.

[0] Things like NTP and DNS and that other good stuff that DHCP can be used to tell hosts about.


I mostly meant that DHCPv6 was an afterthought, and was complaining about the length of IPv6 addresses when they are truly random/EUI64. As a network guy who has had to write down or quickly type IP addresses for troubleshooting thousands of times, v4 is much easier for humans to work with than a full v6 address.

(Oh and Android doesn't support DHCPv6 at all, but that doesn't matter much for server environments/DNS reachability).

In hindsight of EUI64 being shunned in favor of privacy addresses, plus how much of the IPv6 space is reserved for future use, I wonder if IPv6 could have achieved all of its goals with a 64 or 80 bit address instead of 128.


> I mostly meant that DHCPv6 was an afterthought...

I'd call DHCPv6 "a recognition that a complete break from how IPv4 networks have been historically managed simply is not practical for many network operators... especially the larger ones".

And it's true that the DHCPv6 RFC (3315) was published in 2003; five years after the SLAAC RFC (2462). But it's also true that 2003 was twenty-three years ago. Regardless of how you feel about the five-year gap between 2462 and 3315, DHCPv6 has been available for nearly a quarter century. DHCPv6-PD (Prefix Delegation) is how every ISP that I've had that provided IPv6 service to my home [0] has provided globally-routable address space to my LAN. I assume that it's how it's done by most ISPs who don't want to have their customers deal with manually splitting up a wider-than-64 prefix onto their LAN.

But -like I said- I've only had experience with two US ISPs. Perhaps everyone else does it differently?
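
(For the curious: the client side of DHCPv6-PD is just a few lines in dhcpcd. A sketch; the interface names, IAIDs, and SLA ID are made up:)

  # /etc/dhcpcd.conf sketch ('wan0' and 'lan0' are made up)
  interface wan0
      ipv6rs              # listen for Router Advertisements on the WAN
      ia_na 1             # request a normal (non-temporary) address
      ia_pd 2 lan0/0/64   # request a delegated prefix; put a /64 from it on lan0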

> Oh and Android doesn't support DHCPv6 at all...

Oh, but it does! And in the most useless way -for an ordinary end user- possible! It uses DHCPv6... but only for prefix delegation! [1]

It's almost as if the folks who make the decisions for the Android project have never used an Android device anywhere except for a Very Large Professionally-Managed Enterprise Network.

> In hindsight of EUI64 being shunned in favor of privacy addresses...

If by "privacy addresses", you mean the "periodically generate a new temporary address and use that for new outbound connections" thing, then I shun "privacy" addresses [2]... but I recognize that I may hold a minority opinion.

> ...plus how much of the IPv6 space is reserved for future use, I wonder if IPv6 could have achieved all of its goals with a 64 or 80 bit address instead of 128.

Sure, maybe. But -IMO- it's way better to have much too much address space than to have too little. Plus, if we ever manage to stop playing the "crab bucket" game and get our asses off of this rock, we might appreciate all the extra address space as we set up very long range networks connected by very high-latency links.

Somewhat related: I've read discussions from actually-informed folks who express the opinion that -given our quarter-century of hindsight- it's pretty clear that (de facto because of SLAAC) reserving 64 bits of the address for the host part was quite a bit of a waste of address space. I wonder if they would have made the addresses 32 bits shorter if they'd reduced the host part by 32 bits.

[0] Granted, that's only two ISPs -Comcast and Monkeybrains-, but

1) Those two ISPs span like (fuck me to death, I'm old) quite a bit more than twenty years of personal ISP history

2) Comcast is either the largest or one of the largest ISPs in the US. They also -notably- run an all-IPv6 infrastructure network. I don't claim that they obviously know the right way to manage IPv6 networks, but I will claim that they have a lot of experience with it.

[1] <https://android-developers.googleblog.com/2025/09/simplifyin...>

[2] in part because of the widespread use of the "use a user-configurable DUID along with the IAID" mechanism for generation of the host part of the address (rather than relying on the interface's MAC address), and in part because there are eleven-zillion ways to track a World Wide Web user that have absolutely nothing to do with that user's IP address. IMO, all the "privacy" addresses add is complication.


> Things like NTP and DNS and that other good stuff that DHCP can be used to tell hosts about.

Look up RFC 6106 (published 15 years ago). Router advertisements have carried DNS resolver info for a long time now.

Once again, the old adage “IPv6 haters don’t understand IPv6” applies.

As much as I would like hosts to use the local NTP server, most will ignore the NTP server you specify in DHCP anyway, so it’s kind of a moot point.

Edit: RFC 6106 actually supersedes RFC 5006 from 2007. That’s nearly two full decades we have had DNS info in RAs. That’s the year Itanium2 came out (any greybeards here old enough to remember that one?)


> ...the old adage “IPv6 haters don’t understand IPv6” applies.

I'm an IPv6 hater. Sure. [0][1][2]

> ...RFC 6106...

Yes. I'm quite aware of the RDNSS field in RAs. In past experience from ten-ish years ago, [3] I found that it is unreliably recognized... some systems would use the data in it, and others would simply ignore it. In contrast, DHCPv6 worked fine on everything I tested it on except for Android. Might this be because RFC 6106 was published in 2010, while RFC 3315 ("stateful" DHCPv6) was published in 2003, and RFC 3736 ("stateless" DHCPv6) was published in 2004? Maybe.
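
(For concreteness: advertising a resolver via RDNSS is only a few lines in radvd. A sketch; the interface name and resolver address are made up:)

  # radvd.conf fragment sketch ('eth0' and the resolver address are made up)
  interface eth0 {
      AdvSendAdvert on;
      RDNSS 2001:db8::53 {
          AdvRDNSSLifetime 600;   # seconds clients may keep using this resolver
      };
  };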

> ...RFC 6106 actually supersedes RFC 5006 from 2007.

An attentive reader notes that RFC 5006 is an experimental RFC. It took another four years for a non-experimental version of the standard to be published.

So, anyway. Yeah, I should have said

  Things like NTP and (sometimes) DNS and that other good stuff...
Whoops. But, my point stands... how do you communicate to clients the network's preferred NTP servers, or nearly all of the other stuff that DHCPv6 communicates, if one chooses to use only SLAAC?
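
With DHCPv6 it's a one-liner. A dnsmasq sketch (the NTP server address is made up; DHCPv6 option 56 is the NTP-server option from RFC 5908, and I believe dnsmasq accepts numeric option6 codes like this):

  # dnsmasq.conf fragment sketch (the NTP server address is made up)
  dhcp-option=option6:56,[2001:db8::123]   # DHCPv6 option 56: NTP server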

[0] <https://news.ycombinator.com/item?id=47565087>

[1] <https://news.ycombinator.com/item?id=47358229>

[2] <https://news.ycombinator.com/item?id=47101182>

[3] Perhaps things have gotten better in the intervening years? Should I find myself bored as hell one evening, I'll see what the state of device/OS support is.


> It does look a bit AI generated though

These days, when I hear a project owner/manager describe the project as a "clean room reimplementation", I expect that they got an LLM [0] to extrude it. This expectation will not always be correct, but it'll be correct more likely than not.

[0] ...whose "training" data almost certainly contains at least one implementation of whatever it is that it's being instructed to extrude...


If so, I wonder how good an LLM port from C++ to plain and simple C would look.

It seems there is a signal (here on HN) that coding LLMs would be really good at mass-porting C++ code to plain and simple C to remove the C++ kludge dependency.


> As far as LLM-produced correctness goes, it all comes down to the controls that have been put in place (how valid the tests are, does it have a microbenchmark suite, does it have memory leak detection, etc.)

There's much more to it than that. One unmentioned aspect is "Has the tooling actually tested the extruded code, or has it bypassed the tests and claimed compliance?". Another is "Has a human carefully gone over the extruded product to ensure that it's fit for purpose, contains no consequential bugs, and that the test suite tests all of the things that matter?".

There's also the matter of copyright laundering and the still-unsettled issue of license laundering, but I understand that a very vocal subset of programmers and tech management gives zero shit about those sorts of things. [0]

[0] I would argue that -most of the time- a program that you're not legally permitted to run (or distribute to others, if your intention was to distribute that program) is just as incorrect as one that produces the wrong output. If a program-extrusion tool intermittently produces programs that you're not permitted to distribute, then that tool is broken. [1]

[1] For those with sensitive knees: do note that I said "the still-unsettled issue of license laundering" in my last paragraph. Footnote zero is talking about a possible future where it is determined that the mere act of running gobs of code through an LLM does not mean that the output of that LLM is not a derived work of the code the tool was "trained" on. Perhaps license-washing will end up being legal, but I don't see Google, Microsoft, and other tech megacorps being very happy about the possibility of someone being totally free to run their cash cow codebases through an LLM, produce a good-enough "reimplementation", and stand up a competitor business on the cheap [2] by bypassing the squillions of dollars in R&D costs needed to produce those cash cow codebases.

[2] ...or simply release the code as Free Software...


E_NOREPRO

  user@ubuntu-server:~$ lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:    Ubuntu 25.10
  Release:        25.10
  Codename:       questing
  user@ubuntu-server:~$ uname -a
  Linux ubuntu-server 6.17.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Sat Oct 18 10:10:29 UTC 2025 x86_64 GNU/Linux
  user@ubuntu-server:~$ getent ahosts us.archive.ubuntu.com
  91.189.91.82    STREAM us.archive.ubuntu.com
  91.189.91.82    DGRAM  
  91.189.91.82    RAW    
  91.189.91.81    STREAM 
  91.189.91.81    DGRAM  
  91.189.91.81    RAW    
  91.189.91.83    STREAM 
  91.189.91.83    DGRAM  
  91.189.91.83    RAW    
  2620:2d:4002:1::102 STREAM 
  2620:2d:4002:1::102 DGRAM  
  2620:2d:4002:1::102 RAW    
  2620:2d:4002:1::101 STREAM 
  2620:2d:4002:1::101 DGRAM  
  2620:2d:4002:1::101 RAW    
  2620:2d:4002:1::103 STREAM 
  2620:2d:4002:1::103 DGRAM  
  2620:2d:4002:1::103 RAW    
  user@ubuntu-server:~$ ip --oneline link | grep -v lo: | awk '{ print $2 }'
  enp0s3:
  user@ubuntu-server:~$ ip addr | grep inet6
      inet6 ::1/128 scope host noprefixroute 
      inet6 fe80::5054:98ff:fe00:64a9/64 scope link proto kernel_ll 
  user@ubuntu-server:~$ fgrep -r -e us.archive /etc/apt/
  /etc/apt/sources.list.d/ubuntu.sources:URIs: http://us.archive.ubuntu.com/ubuntu/
  user@ubuntu-server:~$ sudo apt-get update
  Hit:1 http://us.archive.ubuntu.com/ubuntu questing InRelease                            
  Get:2 http://security.ubuntu.com/ubuntu questing-security InRelease [136 kB]            
  <snip>
  Get:43 http://security.ubuntu.com/ubuntu questing-security/multiverse amd64 c-n-f Metadata [252 B]
  Fetched 2,602 kB in 3s (968 kB/s) 
  Reading package lists... Done
I didn't think to wrap that in 'time', but it only took a few seconds to run... more than two and less than thirty. The IPv6 packet capture running during all that reveals that it never tried to reach out over v6 (but that my multicast group querier is happily running):

  user@ubuntu-server:~$ sudo tcpdump -i enp0s3 -s 0 -n 'ip6 or icmp6'
  tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
  listening on enp0s3, link-type EN10MB (Ethernet), snapshot length 262144 bytes
  22:16:44.327503 IP6 fe80::5054:98ff:fe00:64a9 > ff02::2: ICMP6, router solicitation, length 16
  22:17:35.823917 IP6 fe80::<REDACTED>          > ff02::1: HBH ICMP6, multicast listener query v2 [gaddr ::], length 28
  22:17:41.706930 IP6 fe80::5054:98ff:fe00:64a9 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
I even manually ran unattended-upgrade, which looks to have succeeded. Other than unanswered router solicitations and multicast group query membership chatter, there continued to be no IPv6 communication at all, and none of the messages you reported appeared either in /var/log/syslog or on the terminal.

  user@ubuntu-server:~$ sudo /usr/bin/unattended-upgrade
  user@ubuntu-server:~$ sudo grep -e 'Tried to start delayed item' /var/log/syslog
  user@ubuntu-server:~$ 
What am I doing wrong?

You aren't running it during an external transitive failure that happened on April 15th.

The problem isn't the happy path; the problem is when things fail, and that Linux in particular made IPv6 really hard to reliably disable. [0]

Once that hits someone's Vagrant or Ansible code, it tends to stick forever, because they don't see the value until they try to migrate, and then it causes a mess.

The last update on the original post link [1] explains this. The IPv4 host being down, not having a response, it being the third Tuesday while Aquarius is rising into whatever, etc., can invoke it. It causes pain and is complex and convoluted to disable when you aren't using it, so people are afraid to re-enable it.

[0] https://wiki.archlinux.org/title/IPv6#Disable_IPv6

[1] https://tailscale.com/blog/two-internets-both-flakey


> ...linux, in particular made it really hard to reliably disable

Section 10.1 of that Arch Wiki page says that adding 'ipv6.disable=1' to the kernel command line disables IPv6 entirely, and that 'ipv6.disable_ipv6=1' keeps the IPv6 stack running but doesn't assign any addresses to any interfaces. If you don't like editing your bootloader config files, you can also use sysctl to do what it looks like 'ipv6.disable_ipv6=1' does by setting the 'net.ipv6.conf.all.disable_ipv6' sysctl knob to '1'.
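
Spelled out (a sketch; where your distro keeps its kernel command line varies, and the sysctl lines need root):

  # At boot: disable IPv6 entirely via the kernel command line
  #   ipv6.disable=1
  # At runtime: keep the IPv6 stack, but stop assigning addresses
  sysctl -w net.ipv6.conf.all.disable_ipv6=1
  sysctl -w net.ipv6.conf.default.disable_ipv6=1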

> You aren't running it during an external transitive failure...

I'll assume you meant "transient". Given that I've already demonstrated that the only relevant traffic that is generated is IPv4 traffic, let's see what happens when we cut off that traffic on the machine we were using earlier, restored to its state prior to the updates.

We start off with empty firewall rules:

  root@ubuntu-server:~# iptables-save
  root@ubuntu-server:~# ip6tables-save
  root@ubuntu-server:~# nft list ruleset
  root@ubuntu-server:~# 
We prep to permit DNS queries and ICMP and reject all other IPv4 traffic:

  root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -p udp --dport 53 -j ACCEPT
  root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -p tcp --dport 53 -j ACCEPT
  root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -p icmp -j ACCEPT
  root@ubuntu-server:~# iptables -A INPUT  -i enp0s3 -p udp --sport 53 -j ACCEPT
  root@ubuntu-server:~# iptables -A INPUT  -i enp0s3 -p tcp --sport 53 -j ACCEPT
  root@ubuntu-server:~# iptables -A INPUT  -i enp0s3 -p icmp -j ACCEPT
  root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -j REJECT
  root@ubuntu-server:~# iptables -A INPUT  -i enp0s3 -j REJECT
  root@ubuntu-server:~#
And we do an apt-get update, which fails in less than ten seconds:

  root@ubuntu-server:~# apt-get update
  Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
  Ign:2 http://us.archive.ubuntu.com/ubuntu questing InRelease
  <snip>
  Could not connect to security.ubuntu.com:80 (91.189.92.23). - connect (111: Connection refused) Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
  <snip>
  W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/questing-security/InRelease  Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
  W: Some index files failed to download. They have been ignored, or old ones used instead.
  root@ubuntu-server:~# 
In this case, the IPv6 traffic I see is... an unanswered router solicitation, and the multicast querier chatter that I saw before. [0] What happens when we change those REJECTs into DROPs

  root@ubuntu-server:~# iptables -D OUTPUT -o enp0s3 -j REJECT
  root@ubuntu-server:~# iptables -D INPUT  -i enp0s3 -j REJECT
  root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -j DROP
  root@ubuntu-server:~# iptables -A INPUT  -i enp0s3 -j DROP
  root@ubuntu-server:~# 
...and then re-run 'apt-get update'?

  root@ubuntu-server:~# apt-get update
  Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
  Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
  Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
  Err:1 http://security.ubuntu.com/ubuntu questing-security InRelease
  Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4002:1::103). - connect (101: Network is unreachable) <v6 addrs snipped> Could not connect to security.ubuntu.com:80 (91.189.92.24), connection timed out <long line snipped>
  <redundant output snipped>
  W: Some index files failed to download. They have been ignored, or old ones used instead.
  root@ubuntu-server:~#
Exactly the same thing, except it takes like two minutes to fail, rather than ~ten seconds, and the error for IPv4 hosts is "connection timed out", rather than "Connection refused". Other than the usual RS and multicast querier traffic, absolutely no IPv6 traffic is generated.

However. The output of 'apt-get' sure makes it seem like an IPv6 connection is what's hanging, because the last thing that its "Connecting to..." line prints is the IPv6 address of the host that it's trying to contact... despite the fact that it immediately got a "Network is unreachable" back from the IPv6 stack.

To be certain that my tcpdump filter wasn't excluding IPv6 traffic of a type that I should have accounted for but did not, I re-ran tcpdump with no filter and kicked off another 'apt-get update'. I -again- got exactly zero IPv6 traffic other than unanswered router solicitations and multicast group membership querier chatter.

I'm pretty damn sure that what you were seeing was misleading output from apt-get, rather than IPv6 troubles. Why? When you combine these facts:

* REJECTing all non-DNS IPv4 traffic caused apt-get to fail within ten seconds

* DROPping all non-DNS IPv4 traffic caused apt-get to fail after like two minutes.

* In both cases, no relevant IPv6 traffic was generated.

the conclusion seems pretty clear.

But, did I miss something? If so, please do let me know.

[0] I can't tell you why the last line in the 'apt-get update' output is only IPv6 hosts. But everywhere there were IPv6 hosts, the reported error was "Network is unreachable" and for IPv4 the error was "Connection refused".


This part is exactly the problem I was talking about:

  root@ubuntu-server:~# apt-get update
  ...
  Could not connect to security.ubuntu.com:80 (91.189.92.23). - connect (111: Connection refused) Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
  <snip>
  W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/questing-security/InRelease  Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
  W: Some index files failed to download. They have been ignored, or old ones used instead.
Well... in this case the output does show the failure to connect to 91.189.92.23, but that looks like a different kind of message to the "W:" lines, so maybe it doesn't show up on all setups or didn't make it into the logs on disk, or got buried under other output.

If you look at just the W: lines, it mentions a v6 address but the machine doesn't have v6 and the actual problem is the Connection Refused to the v4 address. The output is understandably misleading but ultimately the problem here has nothing to do with v6.


> ...ultimately the problem here has nothing to do with v6.

I agree... more or less. The remainder of this message is a reply to nyrikki, but I'm sticking it under your comment because you might also appreciate just how weird this guy's setup looks.

nyrikki: The rest of this message is addressed directly to you:

============================

Actually, what's up with your link-local addresses? They have really odd flags on them.

The only way I can figure that you got into that configuration was to remove the kernel-generated link-local address and add a new one with the arguments 'scope link noprefixroute'. Even if a router on your network advertised a fe80::/64 prefix, that does nothing at all, as hosts are supposed to [0] ignore advertised prefixes that are link-local.

Yeah. After playing around with this for a bit, I can see that your network is either at least as misconfigured as it would be if -say- your DHCP server were handing out leases with an invalid default gateway, or it is very, very specially configured for very special reasons.

Starting with the ubuntu-server host in the "IPv4 traffic is REJECTed" configuration from my last comment, we do this on the host to delete the kernel-supplied link-local address and instruct the OS to create an address in the link-local address space that can be used for global addresses.

  root@ubuntu-server:~# ip addr del fe80::5054:98ff:fe00:64a9/64 dev enp0s3
  root@ubuntu-server:~# ip addr add fe80::5054:98ff:fe00:64aa/64 noprefixroute dev enp0s3
  root@ubuntu-server:~# 
We then configure our upstream router to either

* Send RAs on the local link without a prefix

or

* Send RAs on the local link with a link-local prefix (so they're ignored by the Ubuntu host)

or we hard-code the address of a next-hop router on our host. One (or more) of these three things sets up the host with a default route. If you do none of them, you don't get a default route, and global traffic goes nowhere.

Then -because either you or something running on the host deleted the kernel-provisioned link-local address, and then explicitly instructed the kernel to create a link-local address that can be used to reach global addresses- the local host starts emitting IPv6 traffic with a link-local source address and a global destination address.

When presented with this sort of traffic, my router immediately sends back an ICMPv6 "destination unreachable, beyond scope", which terminates the connection attempt on the host, so the behavior ends up being exactly the same as when the host didn't have a misconfigured link-local address. But. You claim to be having trouble.

So, there are one or more things that might be going on that explain your trouble.

1) You have a firewall on this host that is dropping important ICMPv6 traffic, causing it to miss the "this destination address is beyond your scope" message from the router. Do. Not. Do. This. ICMP is network-management traffic which tells you important things. Dropping important ICMP traffic is how you get mysterious and annoying failures.

2) Your router is configured to ignore link-local traffic with non-link-local destination addresses, rather than replying that the destination is out of scope. On the one hand, this seems stupid to me, but on the other hand, we got here through a misconfiguration that seems very unlikely to me to happen often, [1] so the router admin might not have thought about it when making "locked down" firewall rules.

3) There's some middlebox on the path to the router that's dropping your traffic because not all that many folks would expect to see link-local source and global destination, and middleboxes are widely known for dropping stuff that's even a little bit abnormal.

Investigating your misconfigured host (and maybe also its connected network) has been interesting. I'd love to try to figure out whether SystemD can be misconfigured to produce the host configuration that we're seeing (or whether this misconfiguration is 100% bespoke), but I hear a hot burrito calling my name. Maybe I'll get bored and do more investigation later.

Also, you might object to my conclusion with "But this couldn't happen on IPv4! Clearly IPv6 is too complicated!". I would reply with "What would happen if your host couldn't get a lease from a DHCPv4 server, autoconfigured an address in the IPv4 link-local (169.254.0.0/16) address range, and the network's upstream router was configured to silently drop traffic from that subnet? At least traffic sourced from the IPv6 link-local range is prohibited from leaving the local link [2], so the transmission attempt fails immediately."

[0] ...and Ubuntu questing does ignore such prefixes...

[1] ...that is, a link-local address that has been configured to handle global traffic...

[2] ...unless -as we've discovered- you specifically tell the OS otherwise...
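Incidentally, the two link-local ranges mentioned above can be told apart mechanically. Here's a throwaway shell sketch; it's pure glob matching on the textual form, so it's a demonstration of the ranges involved and not a real address parser:

```shell
# Classify an address literal as IPv4 link-local (169.254.0.0/16),
# IPv6 link-local (fe80::/10, i.e. first group fe80 through febf),
# or neither.
is_link_local() {
  case "$1" in
    169.254.*)            echo "IPv4 link-local" ;;
    [Ff][Ee][89aAbB]?:*)  echo "IPv6 link-local" ;;
    *)                    echo "not link-local" ;;
  esac
}

is_link_local 169.254.7.3                 # -> IPv4 link-local
is_link_local fe80::5054:98ff:fe00:64a9   # -> IPv6 link-local
is_link_local 91.189.92.23                # -> not link-local
```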


> Actually, what's up with your link-local addresses? They have really odd flags on them.

They were probably configured by one of the fancy network config daemons (systemd-networkd, dhcpcd or similar). They like to take over RA processing, and they add IPs with "noprefixroute" so they can add the route themselves separately.

RAs have nothing to do with link-locals, but I bet one or the other of those daemons also takes over configuring link-local addresses and does the same thing there. If you looked in the routing table, there'll be a prefix route for fe80::/64 that was added by the daemon.

This wouldn't affect how DNS replies are sorted though. On machines without non-link-local v6, AAAA records aren't handled by trying them first and then expecting them to quickly fail. They're handled by pushing them to the bottom of the list so that the A records are tried first.
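If anyone wants to see that sorting locally: `getent ahosts` resolves through getaddrinfo(3), so it prints addresses in the post-RFC-6724-sorted order that programs actually receive. A quick sketch (output obviously varies by host and by /etc/gai.conf):

```shell
# getent ahosts uses getaddrinfo(3), so the order it prints is the
# order after RFC 6724 destination-address sorting. On a host with
# no usable global IPv6 source address, AAAA results sort below A
# results rather than "failing fast".
getent ahosts localhost
```

On a dual-stack host you'd typically see ::1 first; on a v4-only host, 127.0.0.1 leads.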


> They were probably configured by one of the fancy network config daemons (systemd-networkd, dhcpcd or similar). They like to take over RA processing, and they add IPs with "noprefixroute" so they can add the route themselves separately.

Makes sense, yeah.

While I don't see a way to do this with dhcpcd, I have no clue what Lovecraftian horrors systemd-networkd generates, so maybe it's the culprit. Whatever is doing it, this behavior is not configured by default on Ubuntu Server Questing. Out of the box, I get regular kernel-assigned link-local addresses.

But I don't understand why you'd want to do this for link-local addresses... not automatically, anyway. It looks like doing this has the disadvantage that it erases the baked-in "This shouldn't be used for global-scope transmissions. Send back 'Network is unreachable' in those cases." rule that you get for free with the kernel-generated address. Sheesh. I wonder if there's some additional logic in a stupid daemon somewhere that manages a firewall rule that restores the "Network is unreachable" ICMPv6 response to outbound global-scope packets that come from the link-local address... just to add more moving parts that can get out-of-sync.

> This wouldn't affect how DNS replies are sorted though.

Yeah.

It's a pity that I don't work with OP. I'd rather like to take a look at this system and the network it's hooked to.


> It looks like doing this has the disadvantage that it erases the baked-in "This shouldn't be used for global-scope transmissions.

I tried with the kernel-generated LL and my kernel does attempt to use a link-local source when connecting to GUA addresses if it has no other address to connect from. And it works:

  # ssh 2001:db8::1 env | grep CLIENT
  SSH_CLIENT=fe80::f0b3:20ff:fe3d:d4cf%eth0 54456 22
(...so long as the destination is on the local network. In this case I assigned 2001:db8::1 to the router, but the router will issue an ICMPv6 redirect for other IPs on the network, which is awkward for me to test but should also work.)

I note that you didn't run `ip route add fe80::/64 dev enp0s3` after adding the LL with noprefixroute, which... seems to break surprisingly little? Because the packet gets sent to the router, which does still have a route for fe80::/64 to the same network, so it issues an ICMPv6 redirect and the client ends up doing NDP anyway.


> So, there are one or more things that might be going on that explain your trouble.

Ah, there's secret option #4:

4) This rather weird configuration was deliberately set up by the sysadmin who manages this system and network and ordinarily works fine, but the "external transitive failure that happened on April 15th." affected both IPv4 and IPv6 traffic (which, duh, happens frequently)... and because it was an intermittent failure, unrelated changes made by OP caused him to come to the wrong conclusions and point the blame cannon at the wrong part of the system.

Okay. Burrito time!

