
How to steal photos off someone’s iPhone from across the street

The bug at the heart of this is already patched – but there’s a lot to learn from this story anyway.

Well-known Google Project Zero researcher Ian Beer has just published a blog post that is attracting a lot of media attention.
The article itself has a perfectly accurate and interesting title, namely: An iOS zero-click radio proximity exploit odyssey.
But it’s headlines like the one we’ve used above that capture the practical essence of Beer’s attack.
The exploit sequence he figured out really does allow an attacker to break into a nearby iPhone and steal personal data – using wireless connections only, and with no clicks needed by, or warnings shown to, the innocently occupied user of the device.
Indeed, Beer’s article concludes with a short video showing him automatically stealing a photo from his own phone using hacking kit set up in the next room:

  • He takes a photo of a “secret document” using the iPhone in one room.
  • He leaves the “user” of the phone (a giant pink teddy bear, as it happens) sitting happily watching a YouTube video.
  • He goes next door and kicks off an automated over-the-air attack that exploits a kernel bug on the phone.
  • The exploit sneakily uploads malware code onto the phone, grants itself access to the Photos app’s data directory, reads the “secret” photo file and invisibly uploads it to his laptop next door.
  • The phone continues working normally throughout, with no warnings, pop-ups or anything that might alert the user to the hack.

That’s the bad news.


The good news is that the core vulnerability that Beer relied upon is one that he himself found many months ago, reported to Apple, and that has already been patched.
(According to Beer’s report: “This specific issue was fixed before the launch of Privacy-Preserving Contact Tracing in iOS 13.5 in May 2020.”)
So if you have updated your iPhone in the past few months, you should be safe from this particular attack.
The other sort-of-good news is that it took Beer, by his own admission, six months of detailed and dedicated work to figure out how to exploit his own bug.

To give you an idea of just how much effort went into the 5-minute “teddy bear’s data theft picnic” video above, and as a fair warning if you are thinking of studying Beer’s excellent article in detail, bear in mind that his blog post runs to more than 30,000 words – longer than the novel Animal Farm by George Orwell, or A Christmas Carol by Charles Dickens.
You may, of course, be wondering why Beer took a bug he’d already found and reported, and then went to so much effort to weaponise it, to use the paramilitary jargon common in cybersecurity.
Well, Beer gives the answer himself, right at the start of his article:

The takeaway from this project should not be: no one will spend six months of their life just to hack my phone, I’m fine.
Instead, it should be: one person, working alone in their bedroom, was able to build a capability which would allow them to seriously compromise iPhone users they’d come into close contact with.

To be clear: Beer, via Google, did report the original bug promptly, and as far as we know no one else had figured it out before he did, so there is no suggestion that this bug was exploited by anyone in real life.
But the point is that it is reasonable to assume that once a kernel-level buffer overflow has been discovered, even in the face of the latest and greatest exploit mitigations, a determined attacker could produce a dangerous exploit from it.
Even though security controls such as address space layout randomisation and pointer authentication codes increase our cybersecurity enormously, they’re not silver bullets on their own.
As Mozilla rather drily puts it when fixing any memory mismanagement flaws in Firefox, even apparently mild or arcane errors that the team couldn’t or didn’t figure out how to exploit themselves: “Some of these bugs showed evidence of memory corruption and we presume that with enough effort some of these could have been exploited to run arbitrary code.”
In short, finding bugs is vital; patching them is critical; learning from our mistakes is important; but we must nevertheless continue to evolve our cybersecurity defences at all times.

The road to Beer’s working attack

It’s hard to do justice to Beer’s magnum opus in a brief summary like this, but here is a (perhaps recklessly oversimplified) description of just some of the hacking skills he used:

  • Spotting a kernel variable name that sounded risky. The funky name that started it all was IO80211AWDLPeer::parseAwdlSyncTreeTLV, where TLV refers to type-length-value, a way of packaging complex data at one end for deconstructing (parsing) at the other, and AWDL is short for Apple Wireless Direct Link, the proprietary wireless mesh networking used for Apple features such as AirDrop. This function name implies the presence of complex kernel-level code that is directly exposed to untrusted data sent from other devices. This sort of code is often a source of dangerous programming blunders.
  • Finding a bug in the TLV data handling code. Beer noticed a point at which a TLV data object that was limited to a memory buffer of just 60 bytes (10 MAC addresses at most) was incorrectly “length-checked” against a generic safety limit of 1024 bytes, instead of against the actual size of the buffer available. (There’s a simplified code sketch of this sort of mistake just after this list.)
  • Building an AWDL network driver stack to create dodgy packets. Ironically, Beer started with an existing open source project intended to be compatible with Apple’s proprietary code, but couldn’t get it to work as he needed. So he ended up knitting his own.
  • Finding a way to get buffer-busting packets past safety checks that existed elsewhere. Although the core kernel code was defective, and didn’t do its final error checking correctly, there were several partial precursor checks that made the attack much harder. By the way, as Beer points out, it’s tempting, in low-level code – especially if it is performance critical – to assume that untrusted data will have been sanitised already, and therefore to skimp on error checking code at the very point it matters most. Don’t do it, especially if that critical code is in the kernel!
  • Learning how to turn the buffer overflow into a controllable heap corruption. This provided a predictable and exploitable method for using AWDL packets to force unauthorised reads from and writes into kernel memory.
  • Trying out a total of 13 different Wi-Fi adapters to find a way to mount the attack. Beer wanted to be able to send poisoned AWDL packets on the 5GHz Wi-Fi channels widely used today, so he had to find a network adapter he could reconfigure to meet his needs.
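
By way of illustration only – this is not Apple’s actual code, and the names and numbers below are invented to match the description above – the class of mistake in the second item looks roughly like this in C:

    #include <stdint.h>
    #include <string.h>

    #define MAX_TLV_LEN   1024   /* generic sanity limit for any TLV payload */
    #define MAX_PEERS       10   /* the sync tree holds at most 10 peers     */
    #define MAC_LEN          6   /* bytes per MAC address                    */

    struct sync_tree {
        uint8_t macs[MAX_PEERS * MAC_LEN];   /* 60-byte destination buffer */
    };

    /* BUGGY: the attacker-controlled 'len' is checked against the generic
     * 1024-byte TLV limit, not against the 60 bytes actually available in
     * 'tree->macs', so an over-long TLV overflows the buffer. */
    int parse_sync_tree_tlv(struct sync_tree *tree,
                            const uint8_t *value, size_t len)
    {
        if (len > MAX_TLV_LEN) {          /* wrong limit! */
            return -1;
        }
        memcpy(tree->macs, value, len);   /* heap overflow if len > 60 */
        return 0;
    }

The fix, conceptually, is a one-liner: compare len against sizeof(tree->macs) rather than (or as well as) the generic TLV limit.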

At this point, Beer had already reached a proof-of-concept result where most of us would have stopped in triumph.
With kernel read-write powers he could remotely force the Calc app to pop up on your phone, as long as you had AWDL networking enabled, for example while you were using the “Share” icon in the Photos app to send your own files via AirDrop.
Nevertheless, he was determined to convert this into a so-called zero-click attack, where the victim doesn’t have to be doing anything more specific than simply “using their phone” at the time.
As you can imagine, a zero-click attack is much more dangerous, because even a well-informed user wouldn’t see any tell-tale signs in advance that warned of impending trouble.
So Beer also figured out techniques for:

  • Pretending to be a nearby device offering files to share via AirDrop. If your phone thinks that a nearby device might be one of your contacts, based on Bluetooth data it is transmitting, it will temporarily fire up AWDL to see who it is. If it isn’t one of your contacts, you won’t see any popup or other warning, but the exploitable AWDL bug will be exposed briefly via the automatically activated AWDL subsystem.
  • Extending the attack to do more than just popping up an existing app such as Calc. Beer figured out how to use his initial exploit in a detailed attack chain that could access arbitrary files on the device and steal them.

In the video above, the attack took over an app that was already running (the teddy bear was watching YouTube, if you recall); “unsandboxed” the app from inside the kernel so it was no longer limited to viewing its own data; used the app to access the DCIM (camera) directory belonging to the Photos app; stole the latest image file; and then exfiltrated it using an innocent-looking TCP connection.
Wow.

What to do?

Tip 1. Make sure you are up to date with security fixes, because the bug at the heart of Beer’s attack chain was found and disclosed by him in the first place, so it’s already been patched. Go to Settings > General > Software Update.
Tip 2. Turn off Bluetooth when you don’t need it. Beer’s attack is a good reminder that “less is more”, because he needed Bluetooth in order to turn this into a true zero-click attack.
Tip 3. Never assume that because a bug sounds “hard”, it will never be exploited. Beer admits that this one was hard – very hard – to exploit, but ultimately not impossible.
Tip 4. If you are a programmer, be strict with data. It’s never a bad idea to do good error checking.
For all the coders out there: expect the best, i.e. hope that everyone who calls your code has checked for errors at least once already; but prepare for the worst, i.e. assume that they haven’t.
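
For the C programmers among you, here’s a minimal sketch of what “being strict with data” can look like for a TLV-style parser. The names are made up for illustration; the point is simply that every length is checked before it is trusted:

    #include <stddef.h>
    #include <stdint.h>

    /* Walk a buffer of TLV records (1-byte type, 1-byte length, then value),
     * refusing to read past the end of the input. A sketch, not production
     * code. */
    int parse_tlvs(const uint8_t *buf, size_t buflen,
                   void (*handle)(uint8_t type, const uint8_t *val, size_t len))
    {
        size_t off = 0;

        while (off < buflen) {
            if (buflen - off < 2) {           /* not even room for a header */
                return -1;
            }
            uint8_t type = buf[off];
            uint8_t len  = buf[off + 1];
            if (len > buflen - off - 2) {     /* value would run off the end */
                return -1;
            }
            handle(type, &buf[off + 2], len); /* the handler must still check
                                                 'len' against its own buffer
                                                 before copying anything */
            off += 2 + (size_t)len;
        }
        return 0;
    }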


37 Comments

Paul, another interesting article.
I do not know how many times Apple users have told me that they do not need to install updates or run AV on their iPhones or Macs because they are so secure.


I believe they are “more secure” than Windows, but that doesn’t make them secure


Mac OS in theory contains more potentially exploitable remote execution elements, and is less secure than Windows in a technical sense. It’s more secure in practice due to obscurity. Security via obscurity is something Apple computers can depend on. iPhones are a different story.


Seriously? Who says that?


It used to be almost a watch-cry from Apple fans. Apple pretty much cultivated the concept that Macs were as good as immune to malware. (Remember those disingenuous “I’m a Mac/I’m a PC” ads that were all over TV back in the day?) Linux users were the same, or perhaps even more outspoken about how “immune to hacking” their chosen operating system was.
You hear it less and less often from Apple fans these days, but I would suggest that a significant minority of Mac users still think of malware and cybercriminality as “something that happens to Windows users” and therefore that doesn’t really need much or any of their attention. After all, you are less likely to get a malware infection on a Mac…
…but the probability is not zero, and in any case you are still at risk from spammers, scammers and phishers even if you never have a direct encounter with malware.


Great story, would make a fantastic DefCon presentation.
Thanks for sharing your hard work Mr. Beer (great name) as well as reporting it.


Well written ‘Reader’s Digest’ of a ‘War and Peace’ length article. Hats off to Ian for his work and Paul for writing it so eloquently. Well done to both!


I didn’t read the article but if it shows you how to actually do this, why would you want to print it?


It’s not unusual to reveal the details of an exploit after it’s been fixed (the theory being we can all learn from it)…
…in this case, however, the complexity is stupendous enough that if you can replicate the attack exactly from Ian Beer’s article alone, then you are probably Ian Beer. Take a look – it’s not for the faint-hearted :-)


It may take a hacker six months to develop an exploit. It only takes a script kiddy a few minutes to download and use that exploit.


Such a lengthy article in which the iOS security update that fixes the issue is never mentioned? Is it 14.1? 14.2? Something older?


I think the article makes it pretty clear that iOS 14 has the issue fixed, because the fix came out “several months ago”, i.e. before iOS 14. (Thus the recommendation to use Settings > General > Software update.)
Having said that, I left the explicit notification of “when it was fixed” to Ian Beer’s article because [a] he did the work and deserves the clicks [b] checking up via his article means you will see the most recent version of his report at the time you click. However, I will repeat his note here: “This specific issue was fixed before the launch of Privacy-Preserving Contact Tracing in iOS 13.5 in May 2020.”
So there you have it. I guess that if people aren’t checking through with Ian Beer’s article I’d better add that note in here for clarity… [Note: done 2020-12-03T01:50Z]


“Turn off Bluetooth when you don’t need it” is a bit difficult when the current Covid reporting apps require it to be on at all times.


Why do they need BT to be on? That doesn’t make much sense to me.


Bluetooth is how coronavirus tracking apps judge whether you’ve been within X metres of person Y for more than Z minutes, assuming that Y has Bluetooth on as well.
Whether you are willing to consent to be tracked by one of these apps seems to depend on whether you think the values of X that the app can predict are accurate enough to be useful. (Were you 1m away for Z minutes indoors? 2m apart but in different rooms? Or 3m apart outside?)
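
Very roughly – and the numbers and formula below are invented for illustration, not taken from any real tracing app – the sort of computation involved looks like this in C:

    #include <math.h>
    #include <stddef.h>

    /* Hypothetical sketch: decide whether a series of Bluetooth sightings of
     * one device adds up to "within X metres for more than Z minutes". */
    struct sighting {
        double minutes_since_start;   /* when the beacon was heard */
        double rssi_dbm;              /* received signal strength  */
    };

    static double rough_distance_m(double rssi_dbm)
    {
        /* Log-distance path loss, assuming -60 dBm at 1 metre; real systems
         * typically use calibrated attenuation thresholds instead. */
        return pow(10.0, (-60.0 - rssi_dbm) / 20.0);
    }

    int looks_like_close_contact(const struct sighting *s, size_t n,
                                 double x_metres, double z_minutes)
    {
        double close_minutes = 0.0;
        for (size_t i = 1; i < n; i++) {
            if (rough_distance_m(s[i].rssi_dbm) <= x_metres) {
                close_minutes += s[i].minutes_since_start
                               - s[i - 1].minutes_since_start;
            }
        }
        return close_minutes > z_minutes;
    }

As you can see, the answer is only ever an estimate – signal strength is a noisy proxy for distance, which is exactly why opinions differ on how useful the results are.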


If you are using a Covid reporting app you are part of the problem. My privacy is more important than compliance with draconian monitoring laws employed by 3rd world dictatorships, or “suggestions” by 1st world leaders.


I think you are painting with a broader brush than the truth demands – as usual, the answer to “How intrusive is a coronavirus tracing app?” is, “It depends.” To be fair to Apple and Google, who seem to have formed a united front on this issue, the contact tracing APIs for both iOS and Android are predicated upon an anonymised tracing system where matching up your own locations and durations is done *on your device* and therefore only you can see whether you have bumped into people who may have been actively infectious at the time, and only you can see whether the app is advising you to isolate. AFAIK, some countries tried to knit their own centralised tracing service and asked for their apps to be given special dispensation to use Bluetooth all the time, but Apple and Google said, “No.”
After all, if being tracked in general is an issue for you, then you had better not have a mobile phone at all, even one without Bluetooth, because your cellular provider already knows where you are whenever your phone is turned on. Just not precisely enough to make track-and-trace viable.
So I wouldn’t advise anyone to make a decision about Bluetooth and privacy based specifically on track-and-trace apps. I would advise you to decide whether you are happy with leaving Bluetooth in an “always on” state *anyway*. If you are not, then there is no point in installing a track-and-trace app at all because it won’t work. If you are neutral about having Bluetooth in an “always on” state, and have typically had it set like that anyway, then I suggest you decide whether you want to use a track-and-trace app on its own epidemiological merit.
If you think you will serve yourself and society better with a track-and-trace app, and that it is accurate enough to produce useful advice, and you are used to living a digital life with a mobile phone privacy configuration in which the app will “just work”, then install and use it. If not, then don’t. Privacy will be a part of your decision but merely “not installing the app because of privacy” while making no other adjustments to your digital life on privacy grounds suggests that you have not considered the overall issues broadly enough.


The same warning for “Turn off BlueTooth” applies to Android users as well. I cannot count the times that mine “suddenly” turns on when I am here at home, almost .3 miles from the nearest neighbor…Hackers. As for any “UPDATE” that requires Bluetooth…you have been schnooked…Those updates should be SMS or they are USING your paranoia to invade your phone!


Apologies to the authors…all of you guys… that I forgot to mention in my previous comment — GREAT ARTICLE!! I had an iPhone for about 3 months. The Bluetooth hackers made it impossible to use the phone. That was iOS 9 and they were definitely stealing photos and videos that I made (of my pets). Almost 3 years later, I keep up to date on all iOS vulnerabilities so that I can warn my Apple-toting relatives and acquaintances about the latest bugs. I also consider another iPhone every few months. The high price tags and uncertain security status (of Apple) make the problems with Android more bearable…


This goes to show you that nothing is safe from hackers, period. People are working hard to hack as well as people are working hard to find bugs and vulnerabilities to patch/repair these as they are identified.


To me, the most alarming thing about this story is the fact that Google-paid employees create and publicize exploits of competitors’ products in order to give Google’s products a competitive advantage. It’s exactly the same “fear, uncertainty, doubt” strategy that everybody used to hate IBM for using, and all the Apple-bashing comments in this thread prove that it works. Remember when Google’s corporate motto was “Don’t be evil”? That was a looooooong time ago…


I am not sure that the “Apple-bashing comments” in this thread prove anything about Project Zero at all. In fact, I can’t really see anyone who is *bashing* Apple.
Whether Project Zero is a thinly-disguised Google marketing exercise or not, and whether Project Zero researchers are as objective as they ought to be in the way they pitch their writeups of Google bugs versus non-Google bugs… well, those are not new topics of discussion in the cybersecurity scene.
However, in this article I have taken a neutral standpoint because I feel that Ian Beer’s write-up is worth looking at for anyone who is interested in learning more about bug hunting themselves, and what it involves. Beer’s research – which is a good example of the mix of skills and persistence that serious bug-hunters need – is a useful reminder of that epigram (was it Einstein? [*]) about innovation being 99% perspiration and 1% inspiration. (Simply put: if you aim to be a top-notch bug hunter, intrinsic brilliance is simply not enough – you need to be a craftsperson, not merely a conjurer. If I may reverse that famous open source programming metaphor from the late 1990s: you are building a cathedral, not managing a bazaar.)
Anyway, I hear you, but I am not convinced that anyone commenting here on Apple and cybersecurity has formed an “Apple bashing” opinion because they have been unduly influenced by Google propaganda. So while you are welcome both to your opinions about Project Zero and to your concerns that some people have been manipulated by it, I think it’s unfair to imply that the Naked Security readers who have commented here are not smart enough to form their own opinions, and are therefore examples of people who have been “brainwashed” :-)
[*] FWIW, it seems that phrase was a Thomas Edisonism, or at least that it is popularly attributed to him.


Seriously? As if Apple didn’t do this first. Watch the “I’m a Mac, and I’m a PC” commercials.


Ironically, Google itself went through a phase of “doing an Apple” about malware a few years ago…
Google’s Open Source head honcho, Chris DiBona, infamously called the makers of Android anti-virus software “charlatans and scammers” back in 2011. Google nevertheless went on to release its own Google Play Protect malware-blocking service (just as Apple followed up its early “malware denial” by suddenly introducing a rudimentary anti-virus filter called XProtect into macOS):
https://nakedsecurity.sophos.com/2011/11/23/googles-open-source-geezer-gets-shirty-about-security/
And in 2014, Google’s chief security engineer for Android, Adrian Ludwig, snorted at companies that produced Android anti-virus software, saying, “Do I think the average user on Android needs to install [anti-virus]? Absolutely not”:
https://nakedsecurity.sophos.com/2014/07/09/googles-android-security-chief-dont-bother-with-anti-virus-is-he-serious/
Still plenty of malware in Google’s own walled garden over the years, though, e.g.:
https://nakedsecurity.sophos.com/2016/01/30/the-secrets-of-malware-success-in-the-google-play-store/
https://nakedsecurity.sophos.com/2020/02/06/android-pulls-24-dangerous-malware-filled-apps-from-play-store/
https://nakedsecurity.sophos.com/2019/12/02/fake-android-apps-uploaded-to-play-store-by-notorious-sandworm-hackers/
https://nakedsecurity.sophos.com/2020/01/14/fleeceware-is-back-in-google-play-massive-fees-for-not-much-at-all/


Hacking people’s phones, stealing their pics? Then making a report online telling how. LOLOL. This stuff isn’t new. Why they bother to mention old news again is up for question. But you would need to be a serious loser with no life to even want to do something this ethically stupid.


One important new aspect of this news (indeed, the “new” in “news” quite literally means “new”) is that the attack was carried out despite the use of one of the very latest security mechanisms available on ARM processors, as used in iPhones, namely PAC (pointer authentication code) protection.
PAC means that if you want to divert the program flow inside the kernel, it’s no longer enough to know where to divert it to (which is hard enough to figure out thanks to a protection system called ASLR, short for address space layout randomisation, that moves critical code around randomly inside the kernel so an attacker is much more likely to crash a hacked device in error than to take it over reliably).
PAC provides what is essentially a miniature “digital signature” for commands that select which code in the kernel to run next, so that even if crooks know *where* to go next, they don’t have the secret code, if you will, that authorises them to tell the kernel to go there at all. (It’s like trying to take over someone’s bank account by phoning up the bank and demanding that they change the account holder’s address and then send out a new ATM card to that address. That works fine until the bank says, “What’s the secret code we agreed on that has to be provided as well as the new address to complete the process?” If you don’t know the one-off code, your attack won’t work.)
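If it helps to see the idea in code, here is a deliberately over-simplified toy model in C – not ARM’s or Apple’s real implementation, and the “MAC” below is just a stand-in hash rather than the keyed hardware cipher the real thing uses – of what signing and then authenticating a pointer means:

    #include <stdint.h>

    #define TAG_SHIFT 48
    #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

    /* Toy keyed mix function standing in for the real hardware MAC. */
    static uint64_t toy_mac(uint64_t ptr, uint64_t context, uint64_t key)
    {
        uint64_t x = ptr ^ context ^ key;
        x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
        return x >> TAG_SHIFT;            /* 16-bit tag for the unused bits */
    }

    /* Tuck the tag into the top bits that a 48-bit address never uses. */
    uint64_t sign_ptr(uint64_t ptr, uint64_t context, uint64_t key)
    {
        uint64_t addr = ptr & ADDR_MASK;
        return addr | (toy_mac(addr, context, key) << TAG_SHIFT);
    }

    /* Hand back the usable address only if the tag checks out; a real CPU
     * would instead poison the pointer so that using it causes a fault. */
    int auth_ptr(uint64_t signed_ptr, uint64_t context, uint64_t key,
                 uint64_t *out)
    {
        uint64_t addr = signed_ptr & ADDR_MASK;
        if ((signed_ptr >> TAG_SHIFT) != toy_mac(addr, context, key)) {
            return -1;                    /* forged or corrupted pointer */
        }
        *out = addr;
        return 0;
    }

An attacker who can overwrite a function pointer but doesn’t know the key can’t compute a valid tag, so the corrupted pointer fails the check before it is ever followed.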
As for “being a serious loser” for wanting to find and fix bugs of this sort, I just don’t follow your logic. Thanks to this research (whether you like Project Zero’s marketing stance or not), a security hole in iOS was found and fixed – with Apple given time to fix it before anyone else was told about it – and numerous ideas have emerged of how to defend even better against this sort of hole in the future. That sounds like a net win for the Good Guys to me.


Oh, in case it isn’t clear enough in the article (though I know it is), the hacking you see in Ian Beer’s video *was conducted against his own phone*, which means that he had implicit permission to access data on it anyway. Just in case anyone thinks he actively worked on developing this exploit by deliberately attacking other people’s phones without permission.


Nvm i see now that there are comments from 2011 as well. So this is definitely ancient history


No, that’s not correct. The earliest comment here was posted at 2020-12-02T18:40Z (each comment has a timestamp on it next to the name used by the commenter).
You will see a reference in a comment of mine (a comment written and published on 2020-12-03) *to an article that I wrote back in 2011*. But that comment is part of a short “side discussion” about corporate attitudes to cybersecurity on mobile devices, and how those attitudes have changed over the past decade or more. Article comments don’t have to stick to issues that are directly connected to the topic of the main article. That’s the beauty of comments and discussions on a website of this sort – you’re allowed to veer off onto related topics, or even onto vaguely related topics, because Naked Security is meant to be a community where readers can chime in with their own related news, opinions, questions and advice.
Beer’s research was done this year, against a recent version of iOS on a recent model of iPhone. (He used an iPhone 11 Pro, as far as I recall. It would be a *truly* amazing trick if he had done the research back in 2011 using a phone model that only came out in 2019.)


so to piggy back on the original story about this already patched vulnerability, you title the article in such a way as to bait hackers or to scare everyone else. congratulations.


To be fair, the story is not really about “an already patched vulnerability” – as Ian Beer pointed out himself, it’s about how security holes that sound far too arcane to be truly dangerous when you first hear about them could nevertheless end up causing serious trouble if not responsibly disclosed and fixed. (Very greatly simplified, if you will pardon my cliche, this is a tale of why cybersecurity is a journey and not a destination.)
Anyway, I’m not sure how the article I wrote can be said to “bait hackers”, and of the many people who have read this article and commented so far, you seem to be the first person who has actually been scared by it. Also, I think the term “piggy back”, which rather implies that I have tried to take improper advantage of someone else’s work, is a bit harsh. Sure, I didn’t do this research myself, but I have had more than one message saying words to the effect of “Thanks for this write-up, it helped me to tackle the full paper by Ian Beer.”
So, if you don’t mind, I am simply going to ignore the sarcasm in your comment and take the one-word sentence “congratulations” literally. Thanks!


“To be clear: Beer, via Google, did report the original bug promptly, and as far as we know no one else had figured it out before he did, so there is no suggestion that this bug was exploited by anyone in real life.”
That’s a lie. There is evidence Azimuth has been using this since at least 2018.


I said “as far as we know”, which is perfectly true, so your statement “that’s a lie” is as insulting as it is inaccurate. It is *possible* that Azimuth [an iPhone forensics and data extraction company], or any of its numerous competitors or fellow-travellers, or indeed any other iPhone researcher with the requisite decompilation skills, might have found this attack before Ian Beer did.
Beer himself dates the “leak” of the interesting kernel function name to 2018, and quotes a tweet from the co-founder of Azimuth Security, Mark Dowd, that shows that Dowd apparently noticed one of the bugs used in this attack when analysing the patches put out by Apple in the iOS 13.5 release in May 2020. (Beer mentions this as a reminder that reverse engineering patches after they are published is not only a good way of retroactively zooming in on an existing bug, because the changes draw attention to exactly what needed fixing, but also a popular pastime for threat researchers because bugs of a feather often flock together. That’s because programmers generally work on code in chunks, or modules, and therefore any bad coding habits that have affected any individual coder are more likely to be repeated in the chunks that person worked on than scattered randomly through the entire codebase.)
All of that, however, is a far cry from proving, or even giving any cause to suggest, that Azimuth knew about this entire attack “since at least 2018”.
Beer says “I have no evidence that these issues were exploited in the wild”, and I haven’t seen any evidence, either. If you want to claim that there is evidence – and I mean evidence, not just unsubstantiated claims or suggestions – then you need to declare it. Unless and until you do, then “as far as we know”, Beer was the first person to do this.

