Device Fingerprinting on iOS Apps

Device fingerprinting is a technique that became popular among websites in the late 1990s to identify and track users. In contrast to websites, apps often have access to a much wider range of properties usable for fingerprinting. Uniquely identifying a device via fingerprinting across the sandbox boundaries of multiple apps is a relevant privacy issue, as it increases the possibilities for tracking users, even without requiring permissions. Many smartphone users are unaware of such practices, because these activities happen unnoticed in the background.

Just like each person has an individual fingerprint, electronic devices are also uniquely identifiable. Manufacturing tolerances, different hardware configurations, software properties and individual software configuration make such devices distinguishable. Depending on the number and variance of the properties used, fingerprints of varying uniqueness can be generated. Typically, several hardware and software device properties are combined by a hashing algorithm into an ideally unique bit sequence.
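To illustrate the principle, the following minimal Swift sketch combines a few device properties into a single hash. The selection of properties and all names are purely illustrative and not taken from any specific SDK:

import UIKit
import CryptoKit

// Minimal sketch: combine a few device properties into one fingerprint hash.
// Real fingerprinters use many more properties; this selection is illustrative.
func deviceFingerprint() -> String {
    let device = UIDevice.current
    device.isBatteryMonitoringEnabled = true
    let properties = [
        device.model,                                  // e.g. "iPhone"
        device.name,                                   // device name (restricted since iOS 16)
        device.identifierForVendor?.uuidString ?? "",  // identifier for vendor
        Locale.current.identifier,                     // locale settings
        TimeZone.current.identifier,                   // timezone
        String(device.batteryLevel)                    // battery level
    ]
    let digest = SHA256.hash(data: Data(properties.joined(separator: "|").utf8))
    return digest.map { String(format: "%02x", $0) }.joined()
}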

Motivations for app developers to include tracking libraries are, among others: advertising, bot detection, prevention of account takeover and spam, fraud detection, and secure-environment detection for payments. Mobile operating systems such as Android and iOS are aware of how transparent fingerprinting makes their users and take steps to empower users to regain privacy while trying to preserve legitimate tracking objectives. iOS introduced the advertising ID specifically to channel device tracking through a resettable identifier: it allows advertisers to personalize ads while giving the user the option to reset it or hide it from individual apps. In addition, Privacy Labels were added to the App Store to better explain an app's data usage to the user. Permissions and Privacy Labels should shed light on the use of tracking and its purpose during app installation and usage. Apple's App Store license agreement also states clear rules on data and identifier usage. However, as shown by Deng et al., many apps in the App Store infringe these rules, working their way around permissions and being dishonest in their Privacy Labels. An increasingly common way to bypass such restrictions is to use fingerprinting techniques that do not require user consent.

Tracking Possibilities with Fingerprinting

The fingerprints generated by different apps containing the same fingerprinting SDK are ideally identical on the same device. By correlating the fingerprint with other data provided during app usage, the user becomes highly transparent. As an example, consider the scenario pictured in the following figure, where a shopping app, a search engine app and a navigation app all contain the same fingerprinting SDK. All user input of each app, such as recent purchases, search terms and the device location, could be accumulated in the cloud under the same fingerprint. The data collection of each app alone is already a privacy risk, but the combination of data from different apps makes users fully transparent. Such data is extremely valuable, for example, to advertising companies that tailor advertisements to individual users.

Example of tracking possibilities with fingerprinting SDKs

Fingerprinting SDKs and Commonly used Properties

To gain insight into the current state of iOS fingerprinting and the latest techniques in use, we identified real iOS fingerprinting SDKs through systematic internet research and conducted manual static and dynamic analysis. The respective SDK code was manually analysed with Ghidra and, where possible, the SDK was tested in a minimal app construct. During the manual analysis we judged each SDK as fingerprinter or non-fingerprinter based on the observed behaviour and the properties used. This left 13 SDKs that we classify as fingerprinters and analyse in more detail. The SDKs are listed in the following table. We found that all 13 SDKs use the native iOS API to collect device characteristics. Through dynamic analysis of custom-built test apps, we found that the collection of properties shows up as temporal spikes. Most SDKs send the collected device characteristics to a server for further processing, except for OpenIDFA, DFID and DFIDSwift, which are non-commercial products. Additional passive fingerprinting may be performed on the server side. The SDKs that do not include server communication generate a hash on the device and return it. Nevertheless, we argue that fingerprinting is generally only useful when the fingerprint leaves the device; it can therefore be assumed that in these cases the caller of the fingerprinting framework handles the network communication. The following table shows that fingerprinting SDKs share some collected properties, while other properties are queried exclusively by individual SDKs. Very commonly used properties are: device model, identifier for vendor (similar to the advertising ID), device name, storage space properties, battery level and different locale identifiers. Other properties, such as device fonts, which are commonly used by web-based fingerprinters, are rather seldom used; we suppose this is due to the low entropy of this value on iOS devices. In conclusion, fingerprinters usually use multiple properties and do not rely on the unique identifier provided through the advertising ID on iOS.

Used device properties by different fingerprinting SDKs

Fingerprinting Access Pattern

We instrumented access to the common iOS APIs identified in the previous step. Frida hooks were written to log access to the respective APIs during dynamic app analysis on a jailbroken iPhone X (iOS 14.4.2). In the next step, we analysed several top apps from the App Store to see whether fingerprinting can be recognized from API access patterns during execution.
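Frida hooks themselves are written in JavaScript; as a language-consistent illustration of the same interception idea, the following Swift sketch uses Objective-C method swizzling to log each access to a fingerprinting-relevant getter. This is only meant to convey the concept, not our actual Frida instrumentation:

import UIKit
import ObjectiveC

extension UIDevice {
    // Replacement getter: logs the access, then calls the original
    // implementation (the two implementations are exchanged below).
    @objc func loggedName() -> String {
        NSLog("UIDevice.name accessed")
        return self.loggedName()
    }
}

func installNameHook() {
    guard
        let original = class_getInstanceMethod(UIDevice.self, #selector(getter: UIDevice.name)),
        let hooked = class_getInstanceMethod(UIDevice.self, #selector(UIDevice.loggedName))
    else { return }
    method_exchangeImplementations(original, hooked)
}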

Non-fingerprinting App

Firstly, let’s have a look at an example of an app without fingerprinting in the following figure. One can see that different properties were accessed during app execution, some less, some more frequently. The accesses to the observed properties came from different caller modules (packages or libraries) inside the app, and the properties were accessed throughout the whole runtime. Access to properties such as timezone and locale is typically required to properly display and run the app and is completely legitimate.

Fingerprinting App

Now, let’s look at an app which we judge as fingerprinting in the following figure. One can see that from second 15 to 17, many properties are accessed in a very short timespan. Furthermore, the accesses come from the same caller module called ForterSDK, which is a known fingerprinting SDK. We observed such behaviour for all fingerprinting SDKs: Many properties are collected by the same caller module in a short time frame.

Fingerprinting Detection

As a proof of concept, we developed a fingerprinting detector that detects fingerprinting in the traces of a dynamic app analysis. The analyser is implemented as a sliding window: a window of four seconds width slides over the event timeline and counts the number of accesses to known fingerprinting APIs per caller module. Access counts above a certain threshold are then flagged as fingerprinting. In a first test, we were able to correctly detect fingerprinting in 85% of the cases just by analysing access patterns. As further refinements, machine learning could improve accuracy, as could including known fingerprinting SDK properties obtained through static analysis. Further insights into our detection mechanism can be found in our paper published at the EICC 2023 conference, available at the ACM Digital Library.
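The core of the detector can be sketched in a few lines of Swift. The trace format and the threshold value are illustrative; only the four-second window width follows the description above:

struct TraceEvent {
    let timestamp: Double   // seconds since app start
    let caller: String      // module that accessed the API
    let api: String         // fingerprinting-related API name
}

// Returns the caller modules whose access count inside any four-second
// window exceeds the threshold (the threshold value here is a placeholder).
func detectFingerprinting(events: [TraceEvent],
                          windowWidth: Double = 4.0,
                          threshold: Int = 10) -> Set<String> {
    var suspicious = Set<String>()
    let sorted = events.sorted { $0.timestamp < $1.timestamp }
    var start = 0
    for end in sorted.indices {
        // advance the window start so all events lie within windowWidth seconds
        while sorted[end].timestamp - sorted[start].timestamp > windowWidth {
            start += 1
        }
        // count accesses per caller module inside the current window
        var counts = [String: Int]()
        for event in sorted[start...end] {
            counts[event.caller, default: 0] += 1
        }
        for (caller, count) in counts where count >= threshold {
            suspicious.insert(caller)
        }
    }
    return suspicious
}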

Conclusion

Fingerprinting is used in many apps today. Smartphone users are mostly unaware of its existence and of the privacy issues related to it. This research lays the foundation for identifying device fingerprinting in iOS apps. Future work will be to develop remediation ideas that balance the privacy interests of smartphone users against the objectives of fingerprinters.

Acknowledgements

This research work was supported by the National Research Center for Applied Cybersecurity ATHENE. ATHENE is funded jointly by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research and the Arts.

Note: Fingerprinting detection is currently work in progress and not (yet) part of the Appicaptor service.

Insecure Cryptography Usage: Tracing Cryptographic Agility in Android and iOS Apps

How has the quality of cryptography in the top 2000 Android and iOS applications evolved over the past three years? We give an overview of the hashing functions and symmetric encryption algorithms used then and now. The results indicate that the majority of apps still use insecure cryptography.

Cryptographic algorithms are used in many scenarios, for example encryption, hashing and signing. Cryptography has been in use for centuries, and some algorithms have been proven to be easily breakable (under certain configurations or conditions) and should thus be avoided. It is not easy for a developer with little cryptographic background to choose secure algorithms and configurations from the plethora of options. Cryptographic agility is the ability to exchange (insecure) cryptographic algorithms for secure ones in computer programs.
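As a simple illustration of what cryptographic agility can look like in code, the hash algorithm can be hidden behind an abstraction so that it is exchangeable in a single place. A Swift sketch using CryptoKit; the protocol and type names are ours:

import CryptoKit
import Foundation

// Call sites depend only on this abstraction, not on a concrete algorithm.
protocol AppHasher {
    func digest(_ data: Data) -> Data
}

struct SHA256Hasher: AppHasher {
    func digest(_ data: Data) -> Data { Data(SHA256.hash(data: data)) }
}

// Migrating to SHA-512 later requires no changes at call sites:
struct SHA512Hasher: AppHasher {
    func digest(_ data: Data) -> Data { Data(SHA512.hash(data: data)) }
}

let hasher: AppHasher = SHA256Hasher()
let tag = hasher.digest(Data("example".utf8))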

Analysis Environment & Apps

The results are based on Appicaptor analyses of the top 2000 Android and iOS apps. Appicaptor analyzed the current versions of the top 2000 apps along with the three-year-old counterparts of the respective apps. The apps are grouped into top apps from the top 2000 list and business apps uploaded or requested by Appicaptor customers.

Used Hashing Functions

Hashing functions such as MD5 (and its predecessors) as well as SHA1 have long been known to be insecure and prone to collision attacks. NIST advises moving to more secure alternatives such as SHA224 up to SHA512.
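On iOS, Apple's CryptoKit makes this advice explicit by relegating MD5 and SHA1 to an Insecure namespace. A minimal sketch of the migration:

import CryptoKit
import Foundation

let data = Data("example".utf8)

// Deprecated by design: CryptoKit exposes MD5 and SHA1 only under "Insecure"
let legacyDigest = Insecure.MD5.hash(data: data)

// Recommended: a SHA-2 family function such as SHA-256
let digest = SHA256.hash(data: data)
print(digest.map { String(format: "%02x", $0) }.joined())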

The hashing functions used in business apps and top apps for iOS and Android were analyzed to capture the current situation. Afterwards, the 2019 versions of the apps are compared to the 2022 versions to show the trend in cryptographic agility.

Percentage of apps using the specified hashing functions in top and business apps (2019 vs. 2022)

Surprisingly, the outdated SHA1 and MD5 hashing functions are still found in 70% to 80% of the analyzed apps and are thus the most used hashing algorithms on iOS and Android, in both the top and business app groups. Even the long-outdated MD2 algorithm is still used in 5% to 10% of all apps. This is alarming news for security. SHA256 is the only secure algorithm in use that is as widespread as MD5 and SHA1.

Comparing hashing in top and business apps, one can see that business apps use less hashing functionality in general.

Looking at the evolution of the top apps on Android and iOS from 2019 to 2022, one can see that the usage of MD5 and SHA1 remains mostly constant, with only slight variations. On Android, usage of the SHA2 family, and especially SHA512, increased; in the case of SHA512, usage in apps doubled, which at first sight is a positive trend. However, since the usage of outdated algorithms remains constant, one must conclude that additional hashing algorithms are merely being added and secure algorithms are not replacing the outdated ones. On iOS, the situation is reversed: usage of the SHA2 family even declines, which suggests that less hashing is used on iOS.

In conclusion, even though Android developers are embracing the SHA2 family, outdated hashing functions remain heavily and persistently present in Android and iOS apps.

Used Symmetric Encryption Algorithms

As one would expect, AES is the most widely used symmetric encryption algorithm. DES and 3DES are used in around 3% of the analyzed apps. One exception is DES on Android, which still seems very popular, with usage in around 12% of the tested apps. Over the years, DES and 3DES usage remains mostly constant. However, looking at AES usage over time, one can see that usage on Android increases in the latest app versions, while AES encryption on iOS decreases.
Especially on Android, a discrepancy between business and top apps becomes obvious: business apps seem to use less AES and slightly more DES encryption.

Percentage of apps using the specified encryption functions in top and business apps (2019 vs. 2022)

Usage of AES in Insecure ECB Mode

Using ECB mode is a very common weakness when applying cryptography. ECB mode outputs the same ciphertext for the same plaintext (when the same key is used). This means that patterns are not hidden very well and one can draw conclusions about the plaintext. With other modes such as CBC or CTR, each block's encryption additionally depends on chained ciphertext or a counter, which introduces variation and hides patterns.
We are aware that under certain conditions the usage of ECB mode is fine, but we advise against using it, since secure conditions might easily become insecure through app upgrades, code restructuring or new requirements.
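On iOS, for example, ECB is selected through a single CommonCrypto option flag, which shows how easily the insecure mode is chosen. A sketch (the helper function is ours):

import CommonCrypto
import Foundation

// Sketch: AES encryption via CommonCrypto. Passing kCCOptionECBMode selects
// the insecure ECB mode; omitting it (and supplying a random IV) gives CBC.
func aesEncryptECB(_ plaintext: Data, key: Data) -> Data? {
    var output = Data(count: plaintext.count + kCCBlockSizeAES128)
    var outputLength = 0
    let status = output.withUnsafeMutableBytes { outBuf in
        plaintext.withUnsafeBytes { inBuf in
            key.withUnsafeBytes { keyBuf in
                CCCrypt(CCOperation(kCCEncrypt),
                        CCAlgorithm(kCCAlgorithmAES),
                        CCOptions(kCCOptionECBMode | kCCOptionPKCS7Padding), // insecure mode
                        keyBuf.baseAddress, key.count,
                        nil,                                // ECB uses no IV
                        inBuf.baseAddress, plaintext.count,
                        outBuf.baseAddress, output.count,
                        &outputLength)
            }
        }
    }
    return status == CCCryptorStatus(kCCSuccess) ? output.prefix(outputLength) : nil
}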

We visualize the usage of insecure ECB mode versus other modes, focusing on the explicit transitions between secure and insecure modes from 2019 to 2022.

ECB mode usage transitions from 2019 to 2022

Looking at the transitions for Android top apps, we see that ECB mode usage shrinks from 48.9% in 2019 to 32.3% in 2022, which is very positive. One can also see that many apps shifted from ECB mode to secure modes. However, a small percentage of apps that used secure cryptography in 2019 are now using ECB in 2022.

The situation on iOS looks quite different. The transition diagram shows that among the top apps on iOS, only 12% used ECB mode in 2019 and the majority used secure alternatives. However, after three years, things did not turn out well for iOS apps: with 16% of top apps and 12% of business apps using insecure ECB mode in 2022, more apps use it than in 2019. Even though the numbers increased on iOS, ECB usage on Android is still far more widespread, but decreasing.

Causes of ECB mode usage

Of all observations, we find the transitions from secure (non-ECB) to insecure (ECB) cryptography and vice versa the most interesting. Understanding the reasons for these transitions could give hints on how developers could be led towards better cryptography standards. The transitions from ECB to non-ECB and from non-ECB to ECB are particularly strong in Android top apps.

Triggers for a change from secure encryption (non-ECB) to insecure encryption (ECB) on Android and iOS

A more in-depth analysis of these apps reveals that in around 90% of the cases, the transition away from or towards ECB is triggered by an included third-party library. On iOS, 70% of the transitions are triggered by third-party libraries and 30% by code changes made by the app developer.

Libraries triggering transitions between secure (non-ECB) and insecure (ECB) cryptography on Android

We analyzed the different Android third-party libraries that cause the transition from an insecure to a secure cryptography mode and vice versa. The transition from ECB to non-ECB is pretty clear: 98% of the apps discontinued using ECB because they no longer include the Google GMS Advertisement library. In 2% of the apps, the respective library was not identifiable due to obfuscation. The pie charts also show which libraries triggered ECB usage in the 2022 apps. Leading with 29% is the Google GMS Advertisement library, followed by Icelink (12%), Microsoft Identity (10%) and Apache Commons (10%). The respective apps were analyzed in more depth to see whether ECB was introduced through a third-party library update or by newly adding a third-party library that uses ECB. In fact, in 92% of the cases a library with ECB usage was added, and only in 8% of the cases did a third-party library update introduce ECB.

Conclusion

This analysis has shown that the majority of apps still use insecure cryptography. Unfortunately, the trend over the past years shows no significant shift towards secure algorithms across the board. Some individual aspects, such as ECB usage on Android, point in the right direction. The detailed analysis of the causes of ECB usage on Android showed that this flaw is mostly introduced through third-party libraries during app development.

Sources

This blog post is a condensed version of the award-winning paper published at the international ICISSP 2023 conference. The full paper can be viewed at Scitepress.

Visit us at it-sa 2022

What data is transferred by business apps and how secure is its processing? Our research shows: if your employees use apps arbitrarily, you put your company's security at risk.

At it-sa 2022, we present our app analysis framework Appicaptor, which you can use to automatically check whether apps comply with your company's IT security demands. New methods developed within the BMBF-funded research project PANDERAM will complement Appicaptor. Among other things, the goal is to identify, evaluate and visualize complex data flows from automated dynamic analyses.

Excerpt of a dynamic data flow analysis created in the BMBF project PANDERAM: location and advertisementID information flows to third-party providers using granted permissions in a weather app

One focus of the ongoing PANDERAM project is to develop an analysis platform using a lightweight approach for automatic data collection from large app sets in order to provide users with information about the IT security quality and privacy of their apps. The technical basis is built upon dynamic analysis of app security properties by applying hooking techniques within a custom-built runtime environment.

Using this approach, the IT security quality of apps is automatically evaluated, including issues that go beyond what is observable at the communication level, such as a lack of encryption in local data storage or the usage of weak cryptography. By hooking system functions, the evaluation environment also detects access to system resources such as the memory card, calendar, contacts, etc. The dynamic approach allows TLS pinning to be switched off, so that transmitted data can be read and evaluated depending on its triggering factors. Furthermore, the analysis platform supports unsupervised operation and usage of apps, autonomously recognizing app-specific operation concepts in order to analyze a large functional scope of apps. Particularly important here are approaches for dealing with login fields, navigation elements and other interactive elements that must be correctly recognized and operated at app startup to enable the app's functionality.

The example of an Android weather app above shows a first visualization of the analyzed data flows. Highlighted are the location and advertisementID information that are transmitted to third parties when the app is granted the required permissions. For weather apps, the example demonstrates the problem: the user wants to share the location to retrieve the local weather, but all included third-party libraries also get the permission to access and transmit the location information for their own purposes, which might not be in the interest of the information owner. Fraunhofer SIT's Appicaptor-specific data flow analysis methods will evaluate the transfer of corporate or business data to external parties based on this concept.

You’ll find us in hall 6, booth number 6-210 for a demonstration.

Content Security Policy – Important Defense-in-Depth missing for many Apps

The Content Security Policy (CSP) defines restrictions for webviews to reduce the attack surface of applications for Cross Site Scripting (XSS) and other attacks. The stricter the policy is configured, the fewer possibilities remain for attackers to inject malicious functionality in case of input validation flaws. This is especially important for hybrid apps, such as those built with Cordova, that use JavaScript to access granted OS functionality via a JavaScript bridge. If attackers can inject their own code into a hybrid app's webview, they can alter how the app uses accessible data and sensors.

A strict CSP can prevent the execution of injected functionality by restricting the executable code fragments to resources that are less accessible to an attacker. However, strict restrictions also require alternative programming styles during app development, which are sometimes inconvenient or unknown to the developer. Additionally, some external libraries or other existing code may not work out of the box with restrictive CSP settings, increasing the pressure to use a less restrictive policy.

With Appicaptor we inspect apps for configured CSPs to evaluate the restrictions for the security of the app. A single missing sanitization of user input may render a hybrid app vulnerable to malicious hijacking of its functionality. As such vulnerabilities can be introduced by a missing 'S' in the HTTPS scheme when loading external JavaScript resources, or by flawed sanitization of user input, hybrid apps should reduce these risks by applying a CSP as a ready-to-use second line of defense. However, 60.5% of the analyzed hybrid apps for iOS and 72.3% of the hybrid apps for Android do not define a CSP. The analyzed app sets each consist of the 2,000 most used apps as ranked by the Android and iOS app stores.

One might consider a ratio of more than 25% of hybrid apps that do use a CSP as a second line of defense an already good starting point. However, analyzing the actual CSPs of these apps shows that many policies are weak. On iOS, 39.5% (Android: 27.6%) of the hybrid apps with a CSP deactivate important protective restrictions and could therefore be rated equal to apps with no CSP at all from an attacker's perspective. So let's see why.

This is a typical basic example of an observed CSP:

<meta http-equiv="Content-Security-Policy" content="default-src 'self' data: 'unsafe-inline'; img-src *;"> 

CSPs are constructed in a simple fashion: each section starts with the name of the directive it configures and is terminated with a semicolon. In the example, the CSP starts a directive section with the default-src keyword. Its declared restrictions are used as a fallback for other directives that are not defined. For example, the directives script-src and object-src are not declared, so the configuration 'unsafe-inline' is applied to both of them, but not to img-src, as that directive is declared in the example and therefore the browser does not apply the fallback to default-src.

The intention here is to allow loading resources only from 'self', i.e. the origin from which the HTML page with the CSP entry was loaded, to prevent the injection of external scripts for Cross Site Scripting. Only images are allowed to be loaded from any source, declared by the star character in the img-src directive.
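For comparison, a stricter variant would declare script-src and object-src explicitly, so that the default-src fallback no longer applies to them (an illustrative policy, not taken from an analyzed app):

<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'; object-src 'none'; img-src *;">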

However, by declaring the keyword 'unsafe-inline', the CSP from the original example allows executing JavaScript code injected at any place in an HTML page, such as

User entered <script>doSomethingBad();</script> text 

or

<div onclick="doSomethingBad();">Click Me</div>

Allowing 'unsafe-inline' together with the data: scheme for object-src, as in the example, allows an attacker to inject script code into <object>, <embed>, and <applet> elements:

<object data="data:text/html,<script>doSomethingBad();</script>"></object>

or by using JavaScript inside an SVG, embedded as a data: URL:

<embed src="data:image/svg+xml,<svg onload='doSomethingBad();' xmlns='http://www.w3.org/2000/svg'></svg>" type="image/svg+xml" width="1" height="1" /> 

Fortunately, in most common cases the code injected via the data: scheme is treated as a separate frame. It cannot interact with the JavaScript bridge of a hybrid app, which is located in the parent index.html page, because the Same Origin Policy prevents access to it. However, attackers can still use this for UI manipulation attacks, tricking users into disclosing credentials or other critical data in crafted dialogs.

Besides many other shortcomings the observed CSPs have regarding inline XSS protection, about 10% of the analyzed hybrid apps (iOS / Android) do not properly restrict the sources for loading scripts. This is caused by using the star character as a wildcard for the scheme, which allows an attacker to inject a script element that loads malicious code from any domain. The same applies if the scheme https: is used instead. This configuration option seems to be confusing for developers: to some it might look as if it would merely prevent HTTP access, but in fact it allows access to any domain via HTTPS. So, when developers use https: together with a list of domain names they want to allow, the domain list has no effect, as all domains are already allowed:

default-src 'self' data: https: ssl.gstatic.com maps.google.com;

Such ineffective domain restrictions were observed in about 7% (iOS / Android) of the analyzed hybrid apps, which might give a false sense of security. Instead, the developer would need to specify each allowed host together with its scheme (e.g. https://ssl.gstatic.com) to restrict access to the listed domains and allow this access only via HTTPS.
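For the example above, a version with effective domain restrictions could look like this (a sketch that addresses only the wildcard-scheme problem, not the other directives discussed):

default-src 'self' https://ssl.gstatic.com https://maps.google.com;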

Unfortunately, in about 5% (iOS / Android) of the analyzed hybrid apps, parsing errors were also detected that can prevent the intended protection. In general, such parsing errors could lead to stricter or laxer restrictions, depending on which part of the CSP is affected. However, as accidentally created stricter policies are more likely to cause functional issues than accidentally created laxer ones, the chance is much higher that such issues are caught in functional testing. For example, we observed parsing errors that prevent some or all directives from being applied. In all these cases, the errors lead to a less strict CSP, e.g. by omitting a good, strict default-src directive that would otherwise serve as a secure fallback for other, non-specified directives.

CSPs can have a strong impact on app security, and they are especially important for hybrid apps. The analysis results show that it is important to check whether apps use a CSP at all and that any CSP needs to be evaluated carefully. In case of doubt, developers should check their CSP with a free tool such as the Google CSP Evaluator to better understand the impact of the directives and to prevent parsing flaws.

Concerns about Apache Log4j in Android Apps

The recently published CVEs for the Apache Log4j Java logging library raise the question of whether Android apps suffer from the same fatal exploitability as the huge number of affected server and desktop applications.

In a first response, we checked for the presence of Apache Log4j in the 2,000 most popular Android apps monitored by Appicaptor and in the set of apps scanned by our Appicaptor customers, to detect which Log4j versions are contained. Our analysis shows that currently less than 1% of the apps contain Log4j 1.x and none of them contain Log4j 2.x.

Regarding the current JNDI-related CVE-2021-4104, CVE-2021-44228, CVE-2021-45046 and CVE-2021-44832, there is further positive news: our manual tests proved that the classes required for a JNDI lookup are not available on Android. So, even if an app contains a Log4j version that is vulnerable to JNDI lookups and an attacker manages to trigger a malicious JNDI lookup, the app would not be able to perform the lookup and, at most, could only crash with a ClassNotFoundException. Therefore, no remote code execution is possible in this case.

An older vulnerability, tracked as CVE-2019-17571, is related to a server socket for receiving log messages. The SocketServer class in Log4j 1.x is vulnerable to deserialization of untrusted data, which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget while listening to untrusted network traffic for log data. The additionally tracked CVE-2020-9488 for Log4j 1.x describes an improper validation of certificates with host mismatch in the Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack, which could leak any log messages sent through that appender. However, usage of both aforementioned functionalities is uncommon in released mobile apps, so the overall risk of exploitation is considered low.

Nevertheless, as Log4j 1.x is not maintained anymore, it should be replaced with an alternative and actively maintained logging library that does not contain known vulnerabilities.

iOS App Tracking Transparency – Adoption Rate of App Implementations

With Apple’s release of iOS 14.5 at the end of April, iOS app developers are required to request permission in order to track their users beyond the app’s border. While the app analytics company Flurry already publishes weekly updated figures showing low opt-in rates, we were curious what the overall adoption rate of app implementations would look like.

Developers need to provide a tracking description, called NSUserTrackingUsageDescription, in the app's Info.plist file (together with localized versions in the InfoPlist.strings file of each language's project directory) that informs the user why the app is requesting permission to use data for tracking the user or the device. However, in 57.3% of the current German top 2000 iOS apps, no tracking description was provided by the developers although the app contains tracking code. On devices with iOS 14.5 or later, this causes iOS to deny the request for access to the identifier for advertisers (IDFA). The overall opt-in rate could therefore be higher if those 57.3% of developers had provided a description, so that the user is at least presented with a decision dialog.

No tracking description included but tracking code detected (57.3 %)
Tracking description included and tracking code detected (33.5 %)
No tracking description included and no tracking code detected (9.0 %)
Tracking description included but no tracking code detected (0.2 %)
Evaluation of Tracking Descriptions in German Top 2000 iOS Apps of all Categories except Games and Stickers (Appicaptor, July 2021)
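For context, the decision dialog only appears when the app both declares the Info.plist entry and actively requests authorization at runtime. A minimal Swift sketch of that flow:

import AppTrackingTransparency
import AdSupport

// Minimal sketch: request ATT authorization on iOS 14.5+. The system dialog
// displays the NSUserTrackingUsageDescription text; without that Info.plist
// entry, iOS denies the request outright.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        if status == .authorized {
            // Only now does the IDFA return a real value
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("IDFA: \(idfa)")
        } else {
            // .denied, .restricted or .notDetermined: the IDFA reads as all zeros
            print("Tracking not authorized")
        }
    }
}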

Another effect we observed is a lack of individualization and usefulness in the descriptions. The table below lists the top 10 descriptions used in the top 2000 apps, with 77 apps just repeating the example text provided by Apple. Others merely include placeholder text such as “YOUR TEXT”, “NSUserTrackingUsageDescription”, “none” or even “-“.

This identifier will be used to deliver personalized ads to you. (77)
Your data will be used to deliver personalized ads to you. (9)
Dadurch können wir Ihnen relevantere Werbung anzeigen, ohne deren Anzahl zu erhöhen. [“This allows us to show you more relevant advertising without increasing its amount.”] (7)
Dies wird verwendet, um den Dienst zu identifizieren, der dich weitergeleitet hat, um die ein individuelles Erlebnis zu bieten. [“This is used to identify the service that referred you, in order to offer you an individual experience.”] (6)
Diese Kennung wird verwendet, um Ihnen personalisierte Anzeigen zu liefern. [“This identifier will be used to deliver personalized ads to you.”] (6)
Ihre Daten werden verwendet, um Ihnen personalisierte Werbung zu zeigen. [“Your data will be used to show you personalized advertising.”] (6)
Your data will be used to deliver personalized ads. (5)
Datenerhebung zur Verbesserung der App und für Werbezwecke zulassen [“Allow data collection to improve the app and for advertising purposes”] (4)
Deine Aktivitäten werden verwendet, um Dein Nutzererlebnis und Werbung zu personalisieren. [“Your activities are used to personalize your user experience and advertising.”] (4)
Mithilfe dieser ID können wir Dir für Dich ausgewählte Werbung anzeigen. [“Using this ID, we can show you advertising selected for you.”] (4)
Top 10 Tracking Descriptions by Occurrence in German Top 2000 iOS Apps of all Categories except Games and Stickers (Appicaptor, July 2021); occurrence counts in parentheses, translations in brackets

This leaves us with the impression that creating a fitting description is currently deemed important by only about a third of the developers. Obviously, the motivation depends on the benefits developers gain from providing a tracking description.

There are many business cases which rely on or benefit from cross-app user tracking. The players in these business cases (e.g., ad providers and app developers who generate ad revenue) have an interest in achieving high opt-in rates. Fitting, or at least reasonable, descriptions for the permission dialog will be key to broad acceptance rates. The current evaluation shows (1) that only a minority of apps include a description at all and (2) that the descriptions are very unspecific.

But there are also use cases which currently integrate cross-app user tracking even though it has no beneficial effect for the integrating party. For example, this is the case when an app developer integrates a runtime diagnostics library: as the developer is only interested in the telemetry data of their own app, cross-app tracking is not in their interest, and for that reason they may not include the description for the permission dialog. In such cases, Apple's initiative would help reduce user tracking by companies that provide runtime diagnostic services with a business model of selling the retrieved analytics data sets to third parties, or similar use cases.