Mactans - Injecting Malware into iOS Devices via Malicious Chargers

At Black Hat 2013, researchers from the Georgia Institute of Technology dissect the iOS security model and present Mactans, a proof-of-concept attack that injects malware into iOS devices via malicious chargers.

Billy Lau: Good evening, ladies and gentlemen! It’s wonderful to be here today. My name is Billy Lau. I am a research scientist at the Georgia Institute of Technology. Today my colleagues Yeongjin Jang and Chengyu Song and I are going to present our work entitled “Mactans: Injecting Malware into iOS Devices via Malicious Chargers”. Mactans encapsulates a small portion of our findings from a larger body of mobile security research done at the Georgia Tech Information Security Center, better known as GTISC.

So, where do we get started? Here is the agenda for today. The talk is partitioned into three sections, beginning with iOS security; in that section I will give an overview of iOS security and a deeper analysis of what it entails. Then we will introduce Mactans, a proof-of-concept attack we developed that allows us to install malicious applications. We will end with a discussion, starting from the observations we made during the research and covering the countermeasures we have proposed, as well as the ones we have seen implemented in the newest release of the iOS 7 beta.

Now let’s start with the overview of iOS security. Before I go further, let me define a few terms that I’ll be using throughout. I’ll use the term “apps” to refer to applications, the programs that users install on their iPhones or other “iDevices”. I’ll say “iDevices” rather than specifically iPhones or iPads because it is the more general term. Research into iOS security involves many assumptions, and we challenge these assumptions as we find them. The question today is: how secure are iOS devices? How do these assumptions relate to the average user’s activities, from answering the phone, making calls, or sending SMS, down to something as mundane as charging the phone, which we presume every user must do? And are there ways to challenge the security assumptions other than jailbreaking the device? I want you to keep these questions in mind, and hopefully you’ll find the answers by the end of our talk.

We begin our analysis with the Apple App Store. In a nutshell, the App Store is a critical piece of the whole iOS security ecosystem, because it helps Apple enforce what is called the “walled garden” model. The model means that no arbitrary person should be able to install an arbitrary app on any arbitrary iDevice. In this context, the App Store is very important. From a developer’s point of view, it is the only platform for publishing apps. From a user’s point of view, it is the only place to purchase or download apps. Needless to say, the App Store is completely owned and controlled by Apple. All submitted apps must be reviewed by Apple prior to release, but the caveat is that even an app that passed the initial approval can still be removed from the Store retroactively, should Apple later find that it violates policy.

With this, we raise another question: how does Apple enforce the policy that no arbitrary person can download an arbitrary app and run it? What if I, as a user, want to download an app and run it? The short answer is mandatory code signing. The code signing process enforces integrity all the way from the hardware, the device itself, up through the software stack: the bootloader, the operating system, and eventually the apps. This means that only apps with correct digital signatures can be installed and executed on iDevices. The question is: who can sign apps? Apple can, of course. But during our research we discovered a potential channel that can be exploited, because there is another entity who can sign apps: the iOS developers themselves.
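As a toy illustration of the rule the talk describes, "only binaries bearing a valid signature from a trusted signer may load", here is a minimal sketch. This is not Apple's implementation: real iOS uses asymmetric certificate chains rooted in hardware, which this model stands in for with an HMAC secret; all names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical stand-in for the trusted signing key. Real iOS code signing
# uses asymmetric signatures chained from the hardware root of trust.
TRUSTED_SIGNING_KEY = b"platform-root-key"

def sign_app(app_binary: bytes, key: bytes) -> bytes:
    """Produce a signature over the app binary with the given key."""
    return hmac.new(key, app_binary, hashlib.sha256).digest()

def can_execute(app_binary: bytes, signature: bytes) -> bool:
    """Model of the loader: refuse any binary whose signature fails to verify."""
    expected = hmac.new(TRUSTED_SIGNING_KEY, app_binary, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

app = b"\xfe\xed\xfa\xce legit app bytes"
good_sig = sign_app(app, TRUSTED_SIGNING_KEY)
bad_sig = sign_app(app, b"attacker-key")

print(can_execute(app, good_sig))  # True: signed by the trusted key
print(can_execute(app, bad_sig))   # False: wrong signer, so it never loads
```

The point of the sketch is the policy shape, not the cryptography: execution is gated on a signature check, so the security of the whole walled garden rests on who holds a signing key, which is exactly the channel the speakers go on to exploit.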

Now, who can become an iOS developer? During our research, we went to Apple’s developer portal and applied to become developers. We submitted our credentials: names, addresses, and credit card numbers. We were billed, I think, $99; a few hours later we were approved, and voila, I’m an iOS developer! As a consequence, I am now able to sign an arbitrary app and run it on an iDevice. Keep this point in mind as we continue to review Apple’s walled garden model.

So, conventionally, when a developer finishes an app, he submits it to the App Store and it goes through a review process. During this review, Apple determines whether the app passes or fails based on many criteria, rules, and regulations. Naturally, the question is: what are the rules? Apple publishes an official list of guidelines about what constitutes an approved app, but it is open to interpretation at Apple’s discretion. The best we can do is examine, based on our experience, the apps that have previously been banned or rejected from the App Store.

What we found is that apps making use of private APIs are rejected or banned, and these rules probably change quite regularly. In more technical terms, we believe that during app review Apple deploys static analysis to check whether private APIs are called in your app, along with some manual testing, where a real person installs the app and actually clicks through it; through this very empirical process the app is then approved or rejected.
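A crude sketch of the kind of static check the speakers believe review performs: scan an app binary for strings matching known private-API symbols. The blocklist below is illustrative and hypothetical, not Apple's actual list, and real analysis works on Mach-O structure rather than raw byte search.

```python
# Illustrative blocklist of private-API symbol names (hypothetical).
PRIVATE_API_SYMBOLS = {
    b"lockScreen",
    b"SBSSpringBoardServerPort",
    b"_setBadgeString:",
}

def find_private_api_uses(binary: bytes) -> set:
    """Return which blocklisted symbols appear verbatim in the binary."""
    return {sym for sym in PRIVATE_API_SYMBOLS if sym in binary}

app_ok = b"__TEXT applicationDidFinishLaunching: viewDidLoad"
app_bad = b"__TEXT lockScreen viewDidLoad"

print(find_private_api_uses(app_ok))   # empty set: passes this check
print(find_private_api_uses(app_bad))  # {b'lockScreen'}: grounds for rejection
</```

Note how easy this kind of check is to evade by constructing selector names at runtime, which is one reason manual testing supplements the static pass.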

If your app manages to pass the review process, it can be published through the App Store. Any user can then launch the App Store app on his or her iDevice and search for the app they want. However, even after installation, the app remains confined during execution. This confinement is the iOS Sandbox. Basically, the iOS Sandbox provides two types of isolation: #1 – process isolation; #2 – filesystem isolation.

What is process isolation? It means, for example, that app A is not allowed to read any other app’s memory region, nor can it talk to any other process using traditional IPC-like APIs. As you can see, an app’s channels for inter-process communication are very limited. As for filesystem isolation, the protection it brings is that if app A saves a file to disk, no other app installed on the iDevice can read that file, or even learn of its existence. There is one caveat: a certain region of the iOS filesystem is publicly readable, but it is strictly read-only, so no modifications are possible and it cannot serve as a communication channel. So the iOS Sandbox provides protection such that even an installed app cannot easily attack another app on the system. This is in contrast to traditional platforms like PCs, where such attacks happen more easily.
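The filesystem-isolation idea can be modeled in a few lines: each app gets a private container directory, and any path that resolves outside it is refused. This is a toy model of the policy, not iOS's kernel-level enforcement; the class and directory names are made up for illustration.

```python
import os
import tempfile

class SandboxedApp:
    """Toy model: an app may only touch files inside its own container."""

    def __init__(self, root: str, app_id: str):
        self.container = os.path.realpath(os.path.join(root, app_id))
        os.makedirs(self.container, exist_ok=True)

    def _resolve(self, relpath: str) -> str:
        # Canonicalize the path and refuse anything escaping the container.
        path = os.path.realpath(os.path.join(self.container, relpath))
        if not path.startswith(self.container + os.sep):
            raise PermissionError(f"sandbox violation: {relpath}")
        return path

    def write(self, relpath: str, data: str) -> None:
        with open(self._resolve(relpath), "w") as f:
            f.write(data)

    def read(self, relpath: str) -> str:
        with open(self._resolve(relpath)) as f:
            return f.read()

root = tempfile.mkdtemp()
app_a = SandboxedApp(root, "app-a")
app_b = SandboxedApp(root, "app-b")

app_a.write("secret.txt", "token-123")
print(app_a.read("secret.txt"))        # token-123: own container is fine
try:
    app_b.read("../app-a/secret.txt")  # traversal into another container
except PermissionError as e:
    print(e)                           # sandbox violation
```

In the toy model app B cannot even tell whether app A's file exists, since the path check fails before any filesystem access, mirroring the "doesn't even know about the existence of this file" property described above.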

Another interesting aspect of the iOS runtime is that, within the Sandbox, it performs entitlement checks. What are entitlements? You can view entitlements as special privileges or permissions to access certain sensitive resources. Examples include access to iCloud, push notifications, or changing the passcode. iOS strictly enforces app entitlements at runtime: if an entitlement is not declared and approved, your app simply does not have it.
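The runtime gate just described can be sketched as follows. The entitlement identifiers here are shortened, hypothetical stand-ins (real iOS uses reverse-DNS keys such as `com.apple.developer.icloud-services`, declared in the app's signed entitlements plist).

```python
class App:
    """Toy model: an app carries the set of entitlements it declared."""

    def __init__(self, name: str, entitlements: set):
        self.name = name
        self.entitlements = frozenset(entitlements)

def request(app: App, entitlement: str) -> str:
    """Model of the runtime check: undeclared entitlements are denied outright."""
    if entitlement not in app.entitlements:
        raise PermissionError(f"{app.name} lacks entitlement {entitlement!r}")
    return f"{entitlement} granted to {app.name}"

mail = App("Mail", {"icloud", "push-notifications"})

print(request(mail, "push-notifications"))  # declared, so it is granted
try:
    request(mail, "change-passcode")        # never declared: always denied
except PermissionError as e:
    print(e)
```

The key property is that the set of entitlements is fixed at signing time, so an app cannot escalate by asking for more privileges at runtime.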

With this knowledge in mind, we are ready to return to the question we posed earlier about the effectiveness of Apple’s walled garden model. The model is assumed to be secure because every app is carefully vetted before it is released and downloaded, and is therefore safe, right? To its credit, compared to Android there are almost no in-the-wild malware instances for iOS that we know of. However, keep in mind the additional channel I mentioned when talking about iOS developers: it effectively lets an iOS developer sideload arbitrary apps onto an iDevice. And this is exactly what the idea behind Mactans is about.

Read next part: Injecting Malware into iOS Devices via Malicious Chargers 2 - Overview of the Mactans Attack
