"The Firefox 150 data suggests a tool that is genuinely useful for defensive security work, especially at scale, but the public record does not justify the strongest claims people want to make from it. The headline number is impressive, yet it bundles together bugs of very different significance and does not publicly resolve into a clean accounting."
I mean: obviously. It doesn't matter how good or bad a product is; the current meta is to over-hype it to achieve maximum "news penetration".
Anthropic seems to have something real. However, since outsiders have no way to calculate the metrics that actually matter (false-positive rate, cost per issue found in tokens and developer hours for setup and review, etc.), there is no real way to put the hype on any kind of scale.
If anything breaks on the Framework, you get all the docs you need to attempt a repair yourself, spare parts are available, and you can upgrade the SSD, RAM, mainboard, and ports.
Apple: every repair is a mainboard replacement and costs 70% of the notebook's used value. Upgrades are impossible. Have a nice day!
"Bonus bonus chatter: The xor trick doesn’t work for Itanium because mathematical operations don’t reset the NaT bit. Fortunately, Itanium also has a dedicated zero register, so you don’t need this trick. You can just move zero into your desired destination."
Will remember for the next time I write asm for Itanium!
Yep. The XOR trick - relying on a special use of an opcode rather than a special register - is probably related to the limited number of general-purpose registers in typical '70s-era CPU designs (8080, 6502, Z80, 8086).
Unfortunately, the 6502 can't XOR the accumulator with itself. I don't recall whether the Z80 can, and loading an immediate 0 would be the most efficient approach on those anyway.
XOR A absolutely works on the Z80, and it's of course faster and shorter than loading a zero value with LD A,0.
LD A,0 is encoded in 2 bytes, while XOR A is a single-byte opcode.
XOR A has the additional benefit of clearing the carry and N flags (it does set Z, since the result is zero). SUB A will also clear the accumulator, but it always sets the N flag on the Z80.
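For reference, a quick sketch of the size and timing comparison (opcode bytes and T-state counts taken from the standard Z80 instruction set tables):

```python
# Three ways to clear A on the Z80, with their encodings.
# (opcode bytes and T-states per the standard Z80 instruction tables)
ZERO_A = {
    "XOR A":  bytes([0xAF]),        # 1 byte, 4 T-states; clears C and N, sets Z
    "SUB A":  bytes([0x97]),        # 1 byte, 4 T-states; also zero, but sets N
    "LD A,0": bytes([0x3E, 0x00]),  # 2 bytes, 7 T-states; leaves flags untouched
}

for mnemonic, encoding in ZERO_A.items():
    print(f"{mnemonic:7s} -> {encoding.hex(' ')} ({len(encoding)} byte(s))")
```

So XOR A wins on both size and speed whenever the flag side effects are acceptable.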
Yeah, the article seems to have missed what is likely the biggest reason this became the popular x86 idiom: it was already the popular 8080/Z80 idiom from the CP/M era, and there's a direct line. (A bunch of early 8086 DOS applications were mechanically translated assembly code, so while the architectures are "different", they're still solidly related.)
The 6502 gets by with an immediate load: 2 clock cycles, 2 bytes (frequently followed by a single-byte register-transfer instruction). Out of curiosity, I did a quick scan of the MOS 1.20 ROM of the BBC Micro:
Are you sure you're not an LLM? There is no way anybody writing 6502 would do anything else, because there's no other way to do it.
(You can squeeze in a cheeky Txx instruction afterwards to get a two-or-more-for-one, if that's what you need - but this only saves bytes. Every instruction on the 6502 takes 2+ cycles! You could have done repeated immediate loads; the cycle count would be the same and the code would be more general.)
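A quick tally of the two sequences illustrates the point that the transfer saves bytes but not cycles (byte and cycle counts from the standard 6502 timing tables):

```python
# Zeroing A, X and Y on the 6502: repeated immediate loads vs.
# load-then-transfer. Costs are (bytes, cycles) per instruction,
# from the standard 6502 instruction timing tables.
COST = {
    "LDA #0": (2, 2),
    "LDX #0": (2, 2),
    "LDY #0": (2, 2),
    "TAX":    (1, 2),
    "TAY":    (1, 2),
}

def total(seq):
    """Sum (bytes, cycles) over an instruction sequence."""
    return (sum(COST[i][0] for i in seq), sum(COST[i][1] for i in seq))

loads     = total(["LDA #0", "LDX #0", "LDY #0"])  # (6 bytes, 6 cycles)
transfers = total(["LDA #0", "TAX", "TAY"])        # (4 bytes, 6 cycles)
print(loads, transfers)
```

Same six cycles either way; the Txx version is just two bytes shorter.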
I suppose using Txx instructions rather than LDx is more of an idiom than a deliberate way to conserve space. Also, could an LDx #0 potentially take 3 cycles in the edge case where the PC crosses a page boundary? (I'm probably confused - red herring?)
I don't know how the 6502's PC increment actually worked, but it was an exception to the general rule that page crossings (or the possibility thereof) incur a penalty - or, as was also sometimes the case, get ignored entirely. (One big advantage of the latter approach: doing nothing takes 0 cycles.)
The full 16 bits were incremented after each instruction byte fetched, and it didn't cost anything extra if there was a carry out of the low byte.
And [as mentioned in the article] even modern x86 implementations have a zero register. So you have this weird special opcode that (when used with identical source and destination) is handled entirely in register renaming.
A move on SPARC is technically an OR of the source with the zero register: "mov %l0, %l1" is assembled as "or %g0, %l0, %l1". So if you want to zero a register, you OR %g0 with itself.
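That "mov is really or" relationship is just the SPARC assembler expanding synthetic instructions. A toy sketch of the rewrite (the mov/clr mappings are the standard SPARC synthetic-instruction forms; the code itself is purely illustrative):

```python
# Toy expansion of SPARC synthetic instructions into the real
# "or with %g0" forms described above. Illustrative only - a real
# assembler handles many more synthetics than these two.
def expand(insn: str) -> str:
    op, _, rest = insn.partition(" ")
    args = [a.strip() for a in rest.split(",")]
    if op == "mov" and len(args) == 2:   # mov rs, rd  ->  or %g0, rs, rd
        return f"or %g0, {args[0]}, {args[1]}"
    if op == "clr" and len(args) == 1:   # clr rd      ->  or %g0, %g0, rd
        return f"or %g0, %g0, {args[0]}"
    return insn  # pass real instructions through untouched

print(expand("mov %l0, %l1"))  # or %g0, %l0, %l1
print(expand("clr %o0"))       # or %g0, %g0, %o0
```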
Even tiny CPUs can do SUB in one cycle, so I doubt that. On superscalar CPUs, XOR and SUB are normally issued to the same execution units, so it wouldn't make a difference there either.
On superscalars, running the XOR trick as-is would be significantly slower, because it implies a data dependency where there isn't one. But all OOO x86s optimize it away internally.
It would probably run really fast, considering that Itanium's downfall was the difficulty of compiling for it (including translating x86 instructions into Itanium instructions).
Not really. Itanium was the result of some people at Intel being obsessed with LINPACK benchmarks and forgetting everything else. It sucked at random memory access, and hence at everything that isn't floating-point number-crunching. The compiler can't hide memory-access latency because it's fundamentally unpredictable. VLIW does magic for floating-point latency (which is predictable), but:
- As transistors got smaller, FP performance increased, memory latency stayed the same (or even increased).
- If you are doing a lot of floating point, you are probably doing array processing, so you might as well go for a GPU or at least SIMD.
- Low instruction density is bad for I-cache. Yes, RISC fans, density matters! And VLIW is an absolute disaster in that regard. Again, this is less visible in number-crunching loads where the processor executes relatively small loops many times over.
Naive question: shouldn't VLIW be beneficial for memory access, since each instruction does quite a lot of work, thus giving the memory time to fetch the next instruction?
- Even though each instruction does a lot of work, it is supposed to do it in parallel, so the time available to fetch the next instruction is (supposed to be) the same.
- Not everything is parallelisable, so most instruction words end up full of NOPs.
- The real problem is data reads. Instruction fetches are fairly predictable (and when they aren't, OOO sucks just as much); data reads aren't. An OOO core can do something else until the data comes in. A VLIW, or any in-order architecture, must stall as soon as an instruction depends on the result of the read.
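The stall argument can be made concrete with a toy timing model. All the latencies and the instruction stream below are made up; the only point is the dependence structure (an in-order core stalls at the first consumer of a load, an OOO core runs independent work under the load's latency):

```python
# Toy model: a load with a 10-cycle latency, one instruction that depends
# on it, and three independent instructions. Numbers are illustrative,
# not any real CPU.
LOAD_LATENCY = 10
# (name, depends_on_load)
stream = [("load", False), ("use", True), ("a", False), ("b", False), ("c", False)]

def in_order(stream):
    """In-order/VLIW-style: stall at the first consumer of the load."""
    cycle, load_done = 0, None
    for name, dep in stream:
        if name == "load":
            load_done = cycle + LOAD_LATENCY
        elif dep:
            cycle = max(cycle, load_done)  # stall until the load completes
        cycle += 1                          # one cycle to issue/execute
    return cycle

def out_of_order(stream):
    """OOO-style: independent work issues under the load's shadow."""
    cycle, load_done = 0, None
    for name, dep in stream:
        if name == "load":
            load_done = cycle + LOAD_LATENCY
            cycle += 1
        elif dep:
            continue                        # deferred until the load completes
        else:
            cycle += 1
    return max(cycle, load_done) + 1        # dependent instruction runs last

print(in_order(stream), out_of_order(stream))  # 14 11
```

Same instructions, same load latency; the in-order version pays for the latency in full because nothing can slide past the stalled consumer.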
IIRC alcohol is the drug with the highest amount of third-party harm, due to the high number of people beating their spouses, children, and sometimes random strangers under the influence (plus third-party property damage, car crashes, ...).
Keep in mind this was evaluated under current laws, which ban most kinds of indoor smoking.
Still a good idea to ban cigarettes and force people to consume their nicotine in healthier ways.
They're not banning smoking in general (which would be impossible anyway, what are they going to do, make it illegal to set something on fire and breathe it in?), they're banning nicotine products. I also really doubt that they will legalise weed and then say "but of course you're not allowed to smoke it, edibles only".
Any current articles on that topic?