Kaspersky discloses iPhone hardware feature vital in Operation Triangulation (kaspersky.com)
102 points by Corrado on Dec 29, 2023 | 52 comments



Hector Martin (of the Asahi Linux project) has some nice commentary on it here: https://social.treehouse.systems/@marcan/111655847458820583


> There is a vulnerability in the SoCs that I discovered and reported where cache snooping bypasses CTRR at the AMCC level. You can "write" to read only memory ranges and, as long as those writes remain in snoopable cache, they are effective even though AMCC will block them and panic when they are written back. I didn't get any money for that one because the way I exploited it didn't apply to normal macOS (I used it to patch DCP code from m1n1), but now a nation state figured out how to use it for a real exploit chain. "Whoops".

Oooff


Is this like "micro-code patchable" or hard no?


If it's the cache hardware, probably not. That's not programmable, you get whatever configuration the designers baked in, and that's it. Though as described you could likely work around this by changing the cache mode to write-through, albeit at a fairly severe performance penalty for the affected accesses.
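As a rough illustration of the snoop window marcan describes, here is a toy Python model (all names are illustrative stand-ins; real AMCC/CTRR behavior is far more involved): a write-back cache that doesn't check range protections leaves "writes" to read-only memory visible to snoopers until eviction, while write-through mode forces the blocked memory write immediately.

```python
# Toy model of a CTRR-style bypass via snoopable write-back cache.
# MemoryController stands in for the AMCC; Cache models the flaw.

class MemoryController:
    """Blocks (and would panic on) writes to a locked read-only
    range, such as protected kernel text."""
    def __init__(self):
        self.ram = {}
        self.readonly = set()

    def write(self, addr, val):
        if addr in self.readonly:
            raise RuntimeError("AMCC panic: write to locked range")
        self.ram[addr] = val

class Cache:
    """Write-back cache that does NOT check range protections on
    modify -- the modeled flaw. Snoopers see dirty lines."""
    def __init__(self, mem, write_through=False):
        self.mem = mem
        self.lines = {}           # addr -> (value, dirty)
        self.write_through = write_through

    def cpu_write(self, addr, val):
        if self.write_through:
            self.mem.write(addr, val)   # blocked immediately for RO range
        self.lines[addr] = (val, True)  # dirty line, snoopable

    def snoop(self, addr):
        # Another agent (e.g. a DMA device) snooping the cache sees
        # the dirty data even though RAM was never modified.
        if addr in self.lines:
            return self.lines[addr][0]
        return self.mem.ram.get(addr)

mem = MemoryController()
mem.ram[0x1000] = 0xAA
mem.readonly.add(0x1000)

wb = Cache(mem)
wb.cpu_write(0x1000, 0xBB)       # "write" to read-only memory
print(hex(wb.snoop(0x1000)))     # snooper sees 0xbb
print(hex(mem.ram[0x1000]))      # RAM still holds 0xaa

wt = Cache(mem, write_through=True)
try:
    wt.cpu_write(0x1000, 0xBB)   # write-through closes the window
except RuntimeError as e:
    print(e)
```

The write-through variant pays the memory-controller check on every store, which is the performance penalty mentioned above.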


Apple supports locking down MSRs for the CPUs.

I would not be surprised if they added the same functionality to the GPU registers.

There are a tremendous number of debug registers in these things.

Apple has >1000 Apple-specific MSRs for the CPUs, mostly used for debugging/testing.

However, unlike the GPU, they are protected by more than just PPL. But you can use them from m1n1/etc and see what they do.

It is not surprising to find they have the same for the GPU; the difference is that the GPU has no IOMMU, so forgetting to map a page that contains the register shadows is exploitable in a way that it isn't for the CPU.



So it probably also affects Android phones and SBCs like the Raspberry Pi?


No, this is all Apple hardware. Other SoCs often have equivalent features and might plausibly make the same mistake (which per the Hector Martin toot is effectively "the write-back cache doesn't check security access control, so you can stuff data into it to be snooped by other devices"). But this bug is Apple-only.


This only works at all because there is no IOMMU for the GPU. So they'd also have to decide not to do that as well.


Yeah, but lots and lots of hardware lives across a bus without an IOMMU. Until very recently, that was the natural state of things. To me what's notable here is that they have a snoopable cache across a security boundary, where you can get the IO device to read arbitrary data you stuffed in from the CPU, when the CPU can't actually touch the underlying storage being cached!


The bypass is Apple-specific… which it must be, because the security feature being bypassed, CTRR, is itself Apple-specific. I don’t think Android phones or Raspberry Pi even have any equivalent.


So the problem is unified memory? CPU and GPU use the same RAM, but the GPU doesn't have proper MMU protections in place, which lets an attacker write to RAM it shouldn't be allowed to by doing it through the GPU instead of the CPU?


GPUs doing DMA are a problem whether you have unified memory or not. This is why IOMMUs exist.


But in more traditional systems the GPU would use its own physical RAM, which should at least make it isolated?


It still uses DMA for communication with the host, it's just over PCIe instead.


That thread is a more interesting read than the blog post.


> $150k bounty

That’s one heck of a way to fund Asahi


Recent XNU kernels in the KDK also have code to inject AMCC errors, ECC errors, and DCS errors for testing purposes.

I bet someone figured out you could do the same for the GPU.


The authors' Chaos Communication Congress talk about these exploits is up now; it was a great watch:

- https://www.youtube.com/watch?v=7VWNUUldBEE

Also previous discussion:

- https://news.ycombinator.com/item?id=38783112


Thanks! Macroexpanded:

4-year campaign backdoored iPhones using advanced exploit - https://news.ycombinator.com/item?id=38784073 - Dec 2023 (7 comments)

Operation Triangulation: What you get when attack iPhones of researchers - https://news.ycombinator.com/item?id=38783112 - Dec 2023 (371 comments)

How to catch a wild triangle - https://news.ycombinator.com/item?id=38034269 - Oct 2023 (43 comments)

Scan iPhone backups for traces of compromise by “Operation Triangulation” - https://news.ycombinator.com/item?id=36164340 - June 2023 (153 comments)

Targeted attack on our management with the Triangulation Trojan - https://news.ycombinator.com/item?id=36161392 - June 2023 (126 comments)

“Clickless” iOS exploits infect Kaspersky iPhones with never-before-seen malware - https://news.ycombinator.com/item?id=36154455 - June 2023 (41 comments)

Operation Triangulation: iOS devices targeted with previously unknown malware - https://news.ycombinator.com/item?id=36151220 - June 2023 (31 comments)

Others?



Wow, thanks! I've added those to the list above.


> This is no ordinary vulnerability. Due to the closed nature of the iOS ecosystem, the discovery process was both challenging and time-consuming

Which, ironically, is evidence that "security through obscurity" does work, even though the authors implicitly criticize it in the same post.


Unfortunately, the only thing they obscured was the fix from the victim; it in no way obscured the attack from the attacker. So, yeah, it "worked."


It's literally broken! It didn't work. The complaint is that the path from discovery of the hole to working exploit is longer. And... that's probably true. But that's optimizing the wrong side of the equation. An open system would have surfaced this bug long ago, possibly years ago (it's not clear how old this mistake is, but the claim seems to cover multiple Apple SOCs), and the vulnerable systems would have been limited to one version of the hardware that could feasibly have been replaced via warranty coverage or whatever. Now we're all stuck trying to patch around a fundamental hole in tens (hundreds?) of millions of phones.


> An open system would have surfaced this bug long ago, possibly years ago (it's not clear how old this mistake is, but the claim seems to cover multiple Apple SOCs), and the vulnerable systems would have been limited to one version of the hardware that could feasibly have been replaced via warranty coverage or whatever.

How long did it take for Spectre and Meltdown to be discovered? Some vulnerabilities are easy to spot in an open design, but the more novel the exploit, the longer the vulnerability can hide in plain sight.


This isn't that novel though. They have the GPU set up to do cache snooping from the CPU without a protection mechanism[1] to prevent the CPU from stuffing dirty writeback entries on top of it for the GPU to see. That's not subtle or weird, and it doesn't represent a new category of attack. It's just a routine security whopper.

[1] On Intel, the IOMMU provides this. But just a cache that was aware of range-level protections (i.e. only allowed you to cache CPU-visible RAM from the CPU, etc...) would have worked.


Once it's sufficiently obscure, we call it a "private key." They're like hipster prime numbers.


Obscurity can add some cost but has no real properties of strength.


I see this very often where encryption experts and FOSS advocates pooh-pooh obfuscation and opaqueness. "It's not REAL security", they say. And they are right.

This is like those Chinese outfits that laser off the part markings on ICs so others can't copy the design as easily. Eventually someone will find out what part number it is and the mitigation will be nullified. But it IS effective for a period.

In the same way obscurity provides a time bonus that must be worked through before exploitation can begin.

Hackers tend to complain about this obscurity stuff: "all it does is waste people's time, and it's not secure," which is exactly the point. Surely by now researchers must have realized that a 100 percent bulletproof security implementation is very unlikely, compared to how impenetrable modern encryption itself is, because of gotchas and flaws in the implementation details. And this is why obscurity slows down discovery.

Denuvo DRM for new games is the same way. It is much hated, but it does work very well for the publisher at mitigating piracy for the initial 2-3 weeks after a game launches, when the bulk of sales happen.


DRM isn't a relevant comparison because there the explicit intention is to prevent users from accessing data they've already been given. Everyone knows this is not possible in a strong sense, time-wasting through obfuscation is the best DRM can hope for.

The other reason obfuscation makes sense for DRM is the threat actors are individual consumers, who have limited time and resources to devote to untangling the obfuscation, and limited reward for doing so.

Normally we might expect to see many such users combining their resources to fund a solution to their problem at scale (by founding a public business or foundation for de-obfuscating DRM), but the state has made doing this punishable by prison. So DRM is rendered effective not so much through good engineering but through the legal system. This is an adequate solution for companies when the threat actors are simple civilians who want to play video games.


Except in this case, the attackers figured it out ages ago and everyone else was left in the dark.

The issue with obscurity is that attackers inherently have a far higher incentive to try to understand obfuscated, undocumented messes than security researchers do.


They're talking about the analysis and understanding of the malware.


ELI5 please, is this a class of exploits that is a function of Apple's architecture, kind of like how Spectre was for the whole branch prediction thing IIRC? Or is this something that can be fixed with a software update and no performance impact?


From what I understand, there was a hardware register unused by the OS that was the entryway for this attack. It's been patched with no performance impact, and there was an earlier HN submission with more details. It took advantage of 4 or 5 bugs that have been patched. Warning: this has been SCREAMING state actor vs. state actor.


No, it's at best a strange debugging feature left accessible by mistake.

Fixed by blocking access to its registers, if I'm not mistaken, without performance impact.


This is no normal vulnerability. Nation state insider?

Nation state backdoor?


From the accompanying article:

> Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or that it was included by mistake.

This screams backdoor by some powerful actors.


This appears to be used to write directly to the cache for testing purposes. It's not an intentional backdoor.

It's more likely an insider leak of some private headers or something that gave someone the info necessary. Or Apple left too much stuff in the debug info.

It would not be the first time either has happened.


No info one way or another, and I am not a nation-state actor, but if I were, and was intending to introduce a backdoor into some platform, I would ideally want there to be some plausible explanation of it as an innocent mistake, so that if/when it eventually got discovered, everyone would think "oh boy, someone accidentally left the debug build in for this one" or "oh, someone needed a god mode for testing and accidentally enabled it in production" or whatever, rather than it being right there, obvious in the code, with no ambiguity. If you're operating against active adversaries, it makes sense to work on the basis that your activities will one day be uncovered, and therefore to prepare the cover story ahead of time.

Additionally, if you think about it: if you're an insider trying to subvert some system, all your changes would still need to go through PR review etc., so it's going to be pretty difficult to get an egregious backdoor through review, versus figuring out a way to "accidentally" link a test/debug version of some lib into production, or something similar.


This would make sense if it was a one-off.

But Apple has hundreds of debug registers. They have >1000 Apple-specific MSRs in the M3, which can be used to bypass/test lots of things, not just this, if you have enough permissions.

They were historically not locked down for the most part, just undocumented.

Apple has lockdown registers that let them lock down access to MSRs (which only makes it harder: you can unlock access again later, but it means you have to be able to write the lock MSR to unlock them), and the latest kernels now lock down most MSRs.
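That lockdown pattern (writes blocked once locked, but the lock itself can be cleared by anyone who can reach the lock register, so it raises the bar rather than closing the door) can be sketched as a toy model; the register names here are illustrative, not Apple's actual interface:

```python
# Toy model of MSR lockdown: locked MSRs reject writes, but the
# lock register itself remains writable to a sufficiently
# privileged agent, so lockdown is a hurdle, not a wall.

class MSRFile:
    def __init__(self):
        self.msrs = {}
        self.locked = False

    def write(self, msr, val):
        if msr == "LOCK":
            self.locked = bool(val)   # the lock register itself
            return
        if self.locked:
            raise PermissionError(f"write to {msr} blocked by lockdown")
        self.msrs[msr] = val

    def read(self, msr):
        return self.msrs.get(msr)

cpu = MSRFile()
cpu.write("DEBUG_REG", 0x1)
cpu.write("LOCK", 1)               # kernel locks MSRs down at boot
try:
    cpu.write("DEBUG_REG", 0x2)    # blocked while locked
except PermissionError as e:
    print(e)
cpu.write("LOCK", 0)               # ...but the lock can be cleared again
cpu.write("DEBUG_REG", 0x2)
print(hex(cpu.read("DEBUG_REG")))
```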

XNU also even has code to inject AMCC/DCS/ECC errors for testing.

Almost all of these can be seen from the KDK, and it's been that way for years.

This specific thing being a backdoor is totally implausible as a result. It's almost certainly not the only test mechanism that could be exploited due to some bug.

The CPU registers are MMIO accessible, like the GPU ones, but protected through various mechanisms that the GPU ones are not, by design (the GPU has no IOMMU).

It is much more likely they paid an insider for access to the GPU register names/info, or found it through a leak, and then started trying to see what they could do with the info, than it is that the entire thing, top to bottom, was an intentional backdoor.


That makes sense. Thanks for taking the time to explain.


I'll go a little further on why I think it's not a backdoor.

If you look at the debug info in the kernel dev kits, you can see the internal SDK (which has existed forever - I had access to it two decades ago when I was working on compilers at IBM, for Apple) has chip/register info in it:

    DW_AT_decl_file             0x00000014 /AppleInternal/Library/BuildRoots/8a51e4ad-7e8b-11ee-8cd8-2a65a1af8551/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.Internal.sdk/usr/local/include/EmbeddedHeaders/soc/module/dart_v14.h

    DW_AT_decl_file             0x00000017 /AppleInternal/Library/BuildRoots/8a51e4ad-7e8b-11ee-8cd8-2a65a1af8551/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.Internal.sdk/usr/local/include/EmbeddedHeaders/soc/module/p_acc0_v5.h

Just some examples. The first one is a header file containing the register names/structures/flags etc for the DART. The second is for recent p-cores. etc

Depending on what the kernel uses, sometimes they accidentally leak most of the data (this has happened more than once).

So for example, the current XNU debug info leaks the register bits/structure of the hid18 register (a p-core register):

https://gist.github.com/dberlin/ec277fc61c33419e658a17f743e1...

(I put this in a gist because I am too lazy to try to get the formatting right in-comment.)

So you can see the bits of HID18 and what they mean, just from the debug info.

Now, current XNU dumps contain info on 2-3 registers like this out of the thousand+.

But, the header files contain all the data, presumably, on CPU, GPU, DART, etc.

They have also leaked much more before.

Regardless, it is much more likely to me that someone got a recent internal SDK (which again, was at least shared with partners at various points in time), went looking through the header files, and then started testing things out, than it is to me that they engineered a backdoor from scratch into the GPU.


If you wanted to backdoor Apple's chipset, you wouldn't build one that could be closed through software without any impact whatsoever.


Remember when someone discovering a powerful vulnerability meant that the community could find a solution and maybe even create a patch for everyone else?

Feels like that reality was far more secure than the current closed-source siloed one.


ELI5/OOL: impact?


Zero-click RCE


I absolutely love how this response is "explain this like I am five years old" and "I am out of the loop" -- the people who grew up with BAUD as a nomenclature are going to die.

Here is a poem by AI:

HOLY FUCK:

https://i.imgur.com/V8HtW03.png

---

In the days of BAUD, we danced with time, Bits twirled and pirouetted, a rhythmic mime. Modems hummed, a song of distant lands, A symphony of data in unseen hands.

But now, oh now, we've left those days behind, Bandwidth's embrace, a treasure we find. No more the worries of a BAUDish plight, In the realm of speed, we soar with delight.

Gone are the struggles, the slow and the strained, Streaming through fibers, our data unchained. A digital ballet, swift and so grand, In the era of bandwidth, we boldly stand.

No longer confined to a BAUDish dream, The internet whispers, a seamless stream. Pixels paint pictures, words swiftly fly, In the limitless expanse of the bandwidth sky.

So here's to the days of BAUD, now a tale, A vintage echo in a high-speed gale. We've moved beyond, to a faster shore, In the bandwidth symphony, we dance evermore.

--

Just wow


iMessage. It's always iMessage.


True type. It's always true type.


Imagemagick. It's always Imagemagick.



