0.03 ns corresponds to a frequency of 33 GHz. The chip doesn't actually clock that fast. What I think you're seeing is the front end detecting the idiom, directing the renamer to zero that register, and removing that instruction from the stream hitting the execution resources.
SUB does not have higher latency than XOR on any Intel CPU, when those operations are really performed, e.g. when their operands are distinct registers.
The weird values among those you listed, i.e. those where the latency is less than 1 clock cycle, occur when the operations have not actually been executed.
There are various special cases that are detected and such operations are not executed in an ALU. For instance, when the operands of XOR/SUB are the same the operation is not done and a null result is produced. On certain CPUs, the cases when one operand is a small constant are also detected and that operation is done by special circuits at the register renamer stage, so such operations do not reach the schedulers for the execution units.
To understand the meaning of the values, we must see the actual loop that has been used for measuring the latency.
In reality, the latency measured between truly dependent instructions cannot be less than 1 clock cycle. If a latency-measuring loop provides a time that when divided by the number of instructions is less than 1, that is because some of those instructions have been skipped. So that XOR-latency measuring loop must have included XORs between identical operands, which were bypassed.
> This is a bit like saying stop using Ubuntu, use Debian instead.
Not really, because Ubuntu has always acknowledged Debian and explicitly documented the dependency:
> Debian is the rock on which Ubuntu is built.
> Ubuntu builds on the Debian architecture and infrastructure and collaborates widely with Debian developers, but there are important differences. Ubuntu has a distinctive user interface, a separate developer community (though many developers participate in both projects) and a different release process.
I suspect a lot of tools will try to fetch the URL without explicit user action (e.g. messengers do that kind of crap for link previews). It must be hard to keep a leaked key unrevoked, which is a nice side effect.
I suppose there could be two checksums, or two hashes: the public spec that can be used by API key scanners on the client side to detect leaks, and an internal hash with a secret nonce that is used to validate that the API key is potentially valid before needing to look it up in the database.
That lets clients detect leaks, but malicious clients can't generate lots of valid-looking keys to spam your API endpoint and generate database load just from API-key lookups.
The "Nvidia on Linux compatibility" issues are something I wonder if I have side-stepped somehow either by lucky choice of GPUs, or lucky choice of Linux distros.
Was/is this a distro thing, or an actual issue?
Every Nvidia card I've used [1] has worked perfectly, from the change from XFree86 to Xorg, through the Compiz wobbly-window desktop craze, to the introduction of GPGPU APIs like CUDA/OpenCL and, recently, Vulkan.
I do recall once helping a friend set up a Debian and an Ubuntu machine with Nvidia (which I had never used before), and it took some figuring out how to install the non-free drivers, so maybe my choice of Gentoo and Arch (not being as conservative towards non-free licenses as Debian/Ubuntu) always made it a non-issue?
I've also never had any trouble with NVIDIA on the desktop. I think most issues people have are on laptops, which have odd hybrid/dual GPU setups, and which exercise suspend/hibernate much more aggressively.
That's a good point that I hadn't considered. I've never had a laptop with Nvidia, I probably subconsciously avoided those dual GPU setups as they sounded hacky and I never really needed fast 3D on a laptop.
FWIW I have an Asus Zephyrus G14 and the dual graphics cards work pretty well in Linux in hybrid mode. It's pretty cool: certain things (games) run on the dedicated Nvidia GPU, and everything else runs on the built-in AMD GPU.
I'm guessing it's because the laptops are popular enough that there's a dedicated group of people that make it work [0].
I'm still on X11, dunno what the story is like with Wayland though.
If you have sufficiently old Nvidia GPUs, eventually drivers and supporting software stop shipping with distros. I have a bunch of older laptops that were supported in Ubuntu until around 10 years ago, but the drivers stopped being updated and Ubuntu dropped them from its repos.
We've had open-source AMD drivers for... 20ish years now? Meanwhile Nvidia begrudgingly added open-source driver support in the last year or two. So maybe some recency bias.
> The "Nvidia on Linux compatibility" issues are something I wonder if I have side-stepped somehow either by lucky choice of GPUs, or lucky choice of Linux distros.
It could also be lucky consequence of what games you play and what else you do with your computer.
I was a long-time Nvidia user, and had plenty of problems with their drivers. They ranged from minor annoyances when switching between virtual consoles (which some people never do) to total system freezes when playing a particular game (which some people never play). It would have been easy for someone else to never encounter these problems.
Since switching to AMD a couple years ago, I have been much happier.
nvidia x11 support has been pretty good for quite some time. It's nvidia wayland support that has been less than stellar. That has gotten better in the last year to year and a half now.
Now, I think it's no big issue so long as you are using a distro that supports up to date drivers. That should be about everyone now as I think even debian stable currently has decent drivers.
I know that the Nvidia driver is integrated into the kernel and that Wayland talks to Nvidia through the kernel. I also know that for accelerated rendering, Wayland talks directly to the Nvidia drivers (bypassing the kernel? IDK).
But I also know that in the nvidia release notes, they've mentioned changes to improve support and functionality of wayland.
It has more to do with how you're using the cards. I don't see you mention gaming at all, that's where the biggest performance penalty and lack of support is apparent.
I just migrated to Linux (Bazzite) in March; I have an RTX 3080. The only issue I ran into was that Display Stream Compression is not supported on Linux, so I can't run 1440p at 165 Hz with HDR on because my monitor doesn't support HDMI 2.1. Either I need to turn off HDR or lower the refresh rate to 120 Hz.
Sure. And lots of people need all that I/O. But my point is that it's not like the Mac Studio has no I/O. The outgoing Mac Pro only has 24 total lanes of PCIe 4.0 going to the switch chip that's connected to all the PCIe slots. The advent of externally routed PCIe in the last few years may have factored into the change in form factor.
Latency (L) and throughput (T) measurements from the InstLatx64 project (https://github.com/InstLatx64/InstLatx64):
I couldn't find any AMD chips where the same is true.