By Evan Schuman, Contributing Columnist, Computerworld
When is a cybersecurity hole not a hole? Never
In cybersecurity, one of the more challenging issues is figuring out whether a security hole is a big deal or trivial. Apple now has a hole that pushes that definition.
In cybersecurity, one of the more challenging issues is deciding when a security hole is a big deal, requiring an immediate fix or workaround, and when it's trivial enough to ignore or at least deprioritize. The tricky part is that much of this involves the dreaded security by obscurity, where a vulnerability is left in place and those in the know hope no one finds it. (Classic example: leaving a sensitive web page unprotected, but hoping that its very long and non-intuitive URL isn't accidentally found.)
And then there's the real problem: in the hands of a creative and well-resourced bad guy, almost any hole can be leveraged in non-traditional ways. But — there is always a but in cybersecurity — IT and security pros can't pragmatically fix every single hole anywhere in the environment.
As I said, it's tricky.
What brings this to mind is an intriguing M1 CPU hole found by developer Hector Martin, who dubbed it M1racles and published a detailed write-up on it.
Martin describes it as "a flaw in the design of the Apple Silicon M1 chip [that] allows any two applications running under an OS to covertly exchange data between them, without using memory, sockets, files, or any other normal operating system features. This works between processes running as different users and under different privilege levels, creating a covert channel for surreptitious data exchange. The vulnerability is baked into Apple Silicon chips and cannot be fixed without a new silicon revision."
Martin added: "The only mitigation available to users is to run your entire OS as a VM. Yes, running your entire OS as a VM has a performance impact," and then advised users against doing so for exactly that reason.
Here's where things get interesting. Martin argues that, as a practical matter, this is not a problem.
"Really, nobody's going to actually find a nefarious use for this flaw in practical circumstances. Besides, there are already a million side channels you can use for cooperative cross-process communication—e.g. cache stuff—on every system. Covert channels can't leak data from uncooperative apps or systems. Actually, that one's worth repeating: Covert channels are completely useless unless your system is already compromised."
Martin had initially said that this flaw could be easily mitigated, but he's changed his tune. "Originally I thought the register was per-core. If it were, then you could just wipe it on context switches. But since it's per-cluster, sadly, we're kind of screwed, since you can do cross-core communication without going into the kernel. Other than running in EL1/0 with TGE=0 — i.e. inside a VM guest — there's no known way to block it."
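To make the mechanics concrete, here is a minimal sketch of what a cooperative covert channel of this kind looks like. It leans on details from Martin's public M1racles write-up rather than anything in this column: that the flaw is an EL0-accessible, per-cluster system register (encoded as s3_5_c15_c10_1) with two usable bits. The register name and bit layout are assumptions taken from that write-up, and the code is an illustration, not a working exploit.

```c
/*
 * Illustrative sketch only (Apple Silicon, arm64). Assumes, per Martin's
 * M1racles write-up, that the system register encoded as s3_5_c15_c10_1
 * is readable and writable from user space and that its low two bits are
 * shared by all cores in a cluster. A real covert channel would need two
 * cooperating processes plus a clocking/handshake protocol on top of this.
 */
#include <stdint.h>
#include <stdio.h>

/* Writer side: park two bits of data in the shared cluster register. */
static inline void channel_write(uint64_t two_bits) {
    __asm__ volatile("msr s3_5_c15_c10_1, %0" :: "r"(two_bits & 0x3ULL));
}

/* Reader side: a second process on the same cluster polls the register. */
static inline uint64_t channel_read(void) {
    uint64_t v;
    __asm__ volatile("mrs %0, s3_5_c15_c10_1" : "=r"(v));
    return v & 0x3ULL;
}

int main(void) {
    /* Single-process demo: write a value, then read it back. */
    channel_write(0x2);
    printf("cluster register now holds: %llu\n",
           (unsigned long long)channel_read());
    return 0;
}
```

Note that neither side needs anything beyond user-level access and nothing crosses into the kernel, which is exactly why, per Martin, only a VM guest (EL1/0 with TGE=0) can block it.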
Before anyone relaxes, consider Martin's thoughts about iOS: "iOS is affected, like all other OSes. There are unique privacy implications to this vulnerability on iOS, as it could be used to bypass some of its stricter privacy protections. For example, keyboard apps are not allowed to access the internet, for privacy reasons. A malicious keyboard app could use this vulnerability to send text that the user types to another malicious app, which could then send it to the internet. However, since iOS apps distributed through the App Store are not allowed to build code at runtime (JIT), Apple can automatically scan them at submission time and reliably detect any attempts to exploit this vulnerability using static analysis, which they already use. We do not have further information on whether Apple is planning to deploy these checks or whether they have already done so, but they are aware of the potential issue and it would be reasonable to expect they will. It is even possible that the existing automated analysis already rejects any attempts to use system registers directly."
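For a sense of what that kind of automated check could look like, here is a rough sketch. It assumes the register encoding from Martin's write-up (s3_5_c15_c10_1) and derives the corresponding A64 MSR/MRS opcode patterns from the standard instruction encoding; Apple's actual App Store tooling is not public, so treat this purely as an illustration of why direct system-register access is straightforward to flag in a binary that cannot generate code at runtime.

```c
/*
 * Toy static-analysis pass: scan a buffer of arm64 instructions for MSR/MRS
 * accesses to the register named in Martin's write-up (s3_5_c15_c10_1).
 * The opcode masks follow the standard A64 system-register encoding; Apple's
 * real submission-time checks are not public, so this is only a sketch.
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* MSR s3_5_c15_c10_1, Xt and MRS Xt, s3_5_c15_c10_1, with Rt masked out. */
#define MSR_M1RACLES 0xD51DFA20u
#define MRS_M1RACLES 0xD53DFA20u
#define RT_MASK      0xFFFFFFE0u

static size_t flag_suspect_insns(const uint32_t *code, size_t n_insns) {
    size_t hits = 0;
    for (size_t i = 0; i < n_insns; i++) {
        uint32_t masked = code[i] & RT_MASK;
        if (masked == MSR_M1RACLES || masked == MRS_M1RACLES) {
            printf("suspect system-register access at instruction %zu\n", i);
            hits++;
        }
    }
    return hits;
}

int main(void) {
    /* Tiny test: an "mrs x0, s3_5_c15_c10_1" followed by a nop. */
    const uint32_t sample[] = { 0xD53DFA20u, 0xD503201Fu };
    size_t hits = flag_suspect_insns(sample, sizeof sample / sizeof sample[0]);
    printf("%zu suspect instruction(s) found\n", hits);
    return 0;
}
```

Because App Store apps cannot build code at runtime, even a crude pattern match like this, layered on a real disassembler, would catch the obvious attempts.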
This is where I get worried. The safety mechanism here is relying on Apple's App Store reviewers to catch an app trying to exploit it. Really? Neither Apple nor Google's Android team, for that matter, has the resources to properly vet every submitted app. If an app looks good at a glance, an area where professional bad guys excel, both mobile giants are likely to approve it.
In an otherwise excellent piece, Ars Technica said: "The covert channel could circumvent this protection by passing the key presses to another malicious app, which in turn would send it over the Internet. Even then, the chances that two apps would pass Apple's review process and then get installed on a target's device are farfetched."
Farfetched? Really? IT is supposed to trust that this hole won't do any damage because the odds are against an attacker successfully leveraging it, which in turn is based on Apple's review team catching any problematic app? That is fairly scary logic.
This gets us back to my original point. What is the best way to deal with holes that require a lot of work and luck to be a problem? Given that no enterprise has the resources to properly address every single system hole, what is an overworked, understaffed CISO team to do?
Still, it's refreshing to have a developer find a hole and then play it down as not a big deal. But now that the hole has been made public in impressive detail, my money is on some cyberthief or ransomware extortionist figuring out how to use it. I'd give them less than a month to leverage it.
Apple needs to be pressured to fix this ASAP.
Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he's been a columnist for CBSNews.com, RetailWeek, Computerworld and eWeek. Evan can be reached at eschuman@thecontentfirm.com and he can be followed at twitter.com/eschuman.
Copyright © 2021 IDG Communications, Inc.
Source: https://www.computerworld.com/article/3620889/when-is-a-cybersecurity-hole-not-a-hole-never.html