What data is transferred by business apps, and how secure is their processing? Our research shows: If your employees use apps arbitrarily, you put your company’s security at risk.
At it-sa 2024, we present our app analysis framework Appicaptor. You can use it to automatically check whether apps comply with your company’s IT security demands. New results from the ATHENE-funded research project FiDeReMA will improve Appicaptor’s analysis techniques. Among other things, the goal is to identify and evaluate privacy implications of using arbitrary apps.
Our current efforts to improve Appicaptor revolve around different privacy aspects:
Device fingerprinting
In-App DSGVO Consent Banners
Permission piggybacking in Android third-party app libraries
Device Fingerprinting
Device fingerprinting is a technique used to uniquely identify devices and therewith typically also their users. Mobile apps read various device properties, such as the device name, software version and others, and usually combine these values with a hashing function into a unique identifier. Use cases for device fingerprinting are app usage statistics, fraud detection and, most prominently, targeted advertisement.
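The basic scheme of combining device properties with a hash function can be sketched in a few lines. The property names and values below are purely illustrative:

```python
import hashlib

def device_fingerprint(properties: dict) -> str:
    """Combine device properties into one identifier (illustrative sketch)."""
    # Sort the keys so the result does not depend on dictionary ordering.
    canonical = "|".join(f"{key}={properties[key]}" for key in sorted(properties))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical property values a fingerprinting library might read:
fp = device_fingerprint({
    "device_name": "Pixel 7",
    "os_version": "14",
    "screen": "1080x2400",
    "timezone": "Europe/Berlin",
})
print(fp)
```

Because the same properties yield the same hash in every app, such an identifier survives app boundaries, which is exactly what makes it attractive for cross-app tracking.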
When employees install apps with device fingerprinting on their mobile corporate devices, it can lead to the loss of sensitive business data for companies. Attackers can acquire the collected data and potentially identify the devices of company management, spy on trade secrets, and ascertain customer contacts. In practice, the Cambridge Analytica case shows how sufficient data from various sources can be used to analyze users and manipulate them through targeted advertising, thereby influencing even election results.
Our analysis results show that more than 60% of the top 1,000 apps on Google Play employ device fingerprinting techniques. This allows unique device identification even across app boundaries.
We extracted 30,000 domains from the most popular 2,000 iOS and Android apps. Afterwards, we filtered them for the domain names of the most prevalent device fingerprinters, which cover 90% market share. Using publicly available domain blocklists, we were able to prevent device fingerprinting in 40% of the cases. These lists specialize in tracking and advertisement domains and can easily be applied, for example, to a firewall. Another 40% could easily be blocked by updating the blocklists: some fingerprinters communicate via random subdomains in the form of https://8726481.iFingerprinter.org, which offers many possibilities for blocking.
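Catching such random subdomains only requires matching every domain suffix against the blocklist instead of the full hostname. A minimal sketch (the blocklist entries here are hypothetical):

```python
# Hypothetical blocklist entries; real lists contain thousands of domains.
BLOCKLIST = {"ifingerprinter.org", "tracker.example"}

def is_blocked(hostname: str) -> bool:
    """Match a hostname against the blocklist, including all its subdomains."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check every suffix, so 8726481.ifingerprinter.org matches ifingerprinter.org.
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

print(is_blocked("8726481.iFingerprinter.org"))  # True: random subdomain caught
print(is_blocked("example.com"))                 # False
```

A firewall or DNS filter applying suffix matching of this kind would block the random-subdomain fingerprinters without any blocklist update.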
We also proposed an approach to randomize the return values of popular properties used for device fingerprinting. Our results show that this technique dramatically reduces the uniqueness of the device fingerprint, while the respective APIs remain accurate enough for their intended use cases. We hope that our proposed techniques will be included in future Android and iOS releases.
In-App DSGVO Consent Banners
Many of the previously mentioned device fingerprinting properties would typically require user consent under the GDPR (German: DSGVO). According to recent court decisions and EU regulations, rejecting a consent banner must be just as easy as accepting it. However, many companies do not comply with these regulations, and those who do mostly apply dark patterns to push the user into accepting all tracking.
We are currently working on detecting whether apps provide a reject option at the first level, without the user having to click through several submenus. Our approach employs artificial intelligence to analyse consent banners for first-level reject options and achieves a success rate of 82%. Some apps try hard to remain within legal boundaries but make it hard for users to reject data usage. Can you find the first-level reject option in the following consent banner? – Yes, there is one.
We have collected the most interesting consent banners and bundled them into a game. With the app “Reject all cookies” you need to overcome several consent banners and keep your data private. Having rejected all “cookies” in the app, you can win real cookies at our it-sa 2024 booth (6-314).
Permission piggybacking in Android Apps
Traditionally, Android apps comprise a main application and several supporting libraries. These libraries often inherit permissions from the main app, granting them unnecessary access beyond their core functions. This leads to permission piggybacking, where libraries exploit inherited permissions to gather data without requesting them directly. Advertisement and tracking libraries particularly leverage this tactic to collect extensive user data.
We have developed an analysis tool to detect third-party app libraries that only probe for already granted permissions, without ever requesting permissions themselves.
Analysing the top 1,000 apps on Google Play reveals that 50% of the libraries exhibit this permission-probing behaviour. Presumably, these libraries adapt their behaviour to the permissions granted to the main app and collect more or less data, depending on what is available. Most of the identified libraries are well-known advertising and tracking libraries. This underlines the need for a finer-grained permission system that separates the main app from third-party libraries.
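The core heuristic of such a detection can be sketched as a scan over decompiled library code: a library that calls Android’s `checkSelfPermission` but never triggers `requestPermissions` is only probing for permissions granted to the host app. This simplified sketch works on plain source strings; the package names and code snippets are hypothetical:

```python
import re

# A library that checks permissions without ever requesting them is probing.
PROBE = re.compile(r"\bcheckSelfPermission\b")
REQUEST = re.compile(r"\brequestPermissions\b")

def probes_only(decompiled_sources: dict) -> list:
    """Return library packages that probe for permissions without requesting any."""
    result = []
    for package, code in decompiled_sources.items():
        if PROBE.search(code) and not REQUEST.search(code):
            result.append(package)
    return result

# Hypothetical decompiled snippets keyed by library package:
sample = {
    "com.ads.tracker": "if (ctx.checkSelfPermission(FINE_LOCATION) == GRANTED) collect();",
    "com.app.main": "requestPermissions(perms, 1); checkSelfPermission(CAMERA);",
}
print(probes_only(sample))  # ['com.ads.tracker']
```

A production analysis would of course work on DEX bytecode and resolve the call graph per library, but the probe-without-request pattern is the same.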
Visit us at it-sa 2024 and have a cookie with us!
You’ll find us in hall 6, booth number 6-314 for a demonstration and discussions.
Since May 2024, the inclusion of an iOS Privacy Manifest has been a requirement for app submissions with newly added third-party SDKs. We analyzed first results about data collection practices, compliance issues with Apple’s guidelines, and privacy risks posed by SDK providers.
Apple mandates that all app submissions with specific newly added third-party SDKs have to include a comprehensive iOS Privacy Manifest. This manifest outlines the data collection and usage practices of apps, providing users with transparency and empowering them to make informed decisions about their digital privacy. As outlined in our previous blog post, Appicaptor correlates the privacy manifest contents with additional privacy findings through static analysis.
In our evaluation, we analyzed the iOS Privacy Manifest definitions of our app sample set, which consists of the most popular 2,000 free iOS apps available on the German Apple App Store in June 2024.
Apple mandates that apps with newly added third-party software development kits (SDKs) from a list of 86 third-party SDKs have to include an iOS Privacy Manifest (see Apple Developer Support). Our analysis revealed that each of the 86 SDKs is present in at least one app of the sample set. We evaluated whether apps integrating these SDKs adhered to the privacy manifest requirement: only 47 of the 86 SDKs were associated with at least one app that defined a privacy manifest. Conversely, the remaining 39 SDKs were used in at least one app in the sample set, but no app employing them defined a privacy manifest. This discrepancy may stem from several factors: the SDK developers may not have provided a privacy manifest template for their SDKs, the app developers may not have incorporated SDK versions that include a privacy manifest template or may have explicitly deleted the manifest entry, or the app update may not have included a newly added third-party SDK from the list. Our analysis can therefore only investigate the Privacy Manifest declarations of about half of the libraries that Apple has selected. Nevertheless, these first results already provide interesting insights into the data analytics industry.
What data is requested in real apps and why?
As a first evaluation of the manifest content, we examined the data types declared within the iOS Privacy Manifests of our app sample set. Our analysis revealed that 50% of the apps declare in the manifest that they collect data types such as OtherDiagnosticData, CrashData, and DeviceID. The data type DeviceID is especially privacy-sensitive, as it makes it possible to correlate user actions across the sandbox boundaries of individual apps.
The full list of collected data types and the number of apps in which such data is collected according to the privacy manifest is given in the following graph. A wide range of data types is defined in the app set, including very sensitive ones such as Health (health and medical data), PhysicalAddress, PreciseLocation and SensitiveInfo (racial or ethnic data, sexual orientation, pregnancy or childbirth information, disability, religious or philosophical beliefs, trade union membership, political opinion, genetic information, or biometric data).
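Since a Privacy Manifest is an ordinary property list, tallying the declared data types across an app set is straightforward. A minimal sketch using Python’s standard `plistlib` (the manifest content below is an illustrative example, not taken from a real app):

```python
import plistlib
from collections import Counter

def collected_data_types(manifest_bytes: bytes) -> list:
    """Extract the declared data types from a PrivacyInfo.xcprivacy manifest."""
    manifest = plistlib.loads(manifest_bytes)
    return [
        entry.get("NSPrivacyCollectedDataType", "")
        for entry in manifest.get("NSPrivacyCollectedDataTypes", [])
    ]

# Illustrative manifest content using Apple's key names:
example = plistlib.dumps({
    "NSPrivacyCollectedDataTypes": [
        {"NSPrivacyCollectedDataType": "NSPrivacyCollectedDataTypeDeviceID",
         "NSPrivacyCollectedDataTypeTracking": True},
        {"NSPrivacyCollectedDataType": "NSPrivacyCollectedDataTypeCrashData",
         "NSPrivacyCollectedDataTypeTracking": False},
    ]
})
print(Counter(collected_data_types(example)))
```

Running such an extraction over all 2,000 manifests and counting the results yields the distribution shown in the graph.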
Although Apple clearly specifies how data type classes should be defined in the iOS Privacy Manifest, we found 205 apps within our app sample set that included malformed or self-defined data types in iOS Privacy Manifests. Consequently, these do not align with Apple’s specified requirements and the discrepancy highlights issues in the implementation and adherence to Apple’s guidelines.
Following the evaluation of data types, we proceeded to assess the given purposes. The next graph presents the distribution of purposes defined in the iOS Privacy Manifests per app. Our analysis indicates that collecting data for the purposes of AppFunctionality or Analytics is declared in over 60% of the apps of the sample set. Consistent with our findings in the data type evaluation, we observed that a substantial number of apps (235) included purpose definitions that deviate from Apple’s expected values. These are summarized in the category Malformed / Self-defined Purpose in the chart below.
Do the declared data type and their reasoning align?
Further analysis was conducted to determine which data types are collected for which specific purpose and in how many apps of the app sample set this occurs. For this evaluation, we analyzed the iOS Privacy Manifest definitions of the 2,000 apps in our sample set, focusing on tuples of data types and associated purposes. The following graphic illustrates which data types are accessed for which purposes. The size of the circles corresponds to the number of apps in which that specific data type-purpose tuple was defined in the iOS Privacy Manifest. It is interesting to see that certain sensitive data types such as SensitiveInfo, PhysicalAddress, PreciseLocation and Health are declared for purposes besides AppFunctionality, and that almost all data types are also used for the purpose of Analytics, which could affect users’ privacy.
Are there component providers that stand out in terms of data type and usage?
Data types and purposes are defined for specific components in the iOS Privacy Manifest. Our next focus was therefore to investigate whether the purposes given for collected data types are specific to certain component providers. The following analysis examines the purposes for collected data types, grouped by component provider, and the graph highlights the relationship between purposes and SDK providers. To do so, we associated the providers with their SDKs, extracted the 20 SDK providers most frequently contained in the Privacy Manifests of the evaluated app set, and analyzed in how many apps each purpose is defined for these top 20 providers. The diagram shows that certain providers, such as Firebase, concentrate on a few specific and targeted purposes, while others, like Adjust, request data for a wide array of purposes. As the purposes relate to the activities of the SDK providers, this can be seen as a summary of the functionality provided by the SDK components and their providers.
Similar to the evaluation of the specific purposes related to component providers, we expanded the view to cluster the information according to the data types accessed. We again took the 20 SDK component providers most frequently mentioned in the Privacy Manifests of the evaluated app set and counted how many apps declare a data type and purpose tuple for each of them. In the following graph, the size of the circles corresponds to the number of apps in which each specific data type-purpose tuple was declared in the iOS Privacy Manifest. Like the former evaluation, this graph shows that certain component providers (like Firebase) focus on a few data type and purpose tuples, whereas others define various data type and purpose tuples (e.g., Google and Facebook).
The data type definition may additionally contain boolean processing flags that specify the external usage of the data. The processing flag linked specifies that the data type is linked to the user’s identity, whereas tracked specifies that the data type is used to track users. Our final evaluation of the Privacy Manifest data of the app sample set focuses on whether certain data providers specify data types with these processing flags or not.
Data types flagged with tracked can be shared with a data broker. If the processing flag tracked or linked/tracked is set, this may therefore threaten the user’s privacy significantly. We examined which processing flags are set by different component providers for requested data types: we took the ten SDK component providers most frequently mentioned in the Privacy Manifests of the evaluated app set and analyzed how many apps declare a data type, processing flag and purpose tuple for these top ten providers. The size of the circles in the following graph corresponds to the number of apps in which each specific data type-processing flag-purpose tuple was defined in the iOS Privacy Manifest. The graph groups the data types in relation to the requested purpose in circle groups. The circle group’s label states which processing flags are set to true: if the circle group is labeled with linked, then the data type is only linked to the user’s identity. If the circle group is labeled with linked/tracked, then the data type is linked to the user’s identity and used to track users. Elements of a circle group that has no label are neither linked nor tracked.
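Extracting these tuples from a manifest is again a matter of reading Apple’s plist keys. A sketch, complementing the data type extraction above (the manifest content is an illustrative example):

```python
import plistlib

def flag_tuples(manifest_bytes: bytes):
    """Yield (data type, linked, tracked, purpose) tuples from a manifest."""
    manifest = plistlib.loads(manifest_bytes)
    for entry in manifest.get("NSPrivacyCollectedDataTypes", []):
        dtype = entry.get("NSPrivacyCollectedDataType")
        linked = entry.get("NSPrivacyCollectedDataTypeLinked", False)
        tracked = entry.get("NSPrivacyCollectedDataTypeTracking", False)
        for purpose in entry.get("NSPrivacyCollectedDataTypePurposes", []):
            yield (dtype, linked, tracked, purpose)

# Illustrative manifest content using Apple's key names:
example = plistlib.dumps({"NSPrivacyCollectedDataTypes": [{
    "NSPrivacyCollectedDataType": "NSPrivacyCollectedDataTypeDeviceID",
    "NSPrivacyCollectedDataTypeLinked": True,
    "NSPrivacyCollectedDataTypeTracking": True,
    "NSPrivacyCollectedDataTypePurposes": ["NSPrivacyCollectedDataTypePurposeAnalytics"],
}]})
for t in flag_tuples(example):
    print(t)
```

Grouping these tuples by the SDK provider that declared them produces the circle groups shown in the graph.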
A significant difference in the processing flag usage can be seen when comparing the results for Google and Facebook. Google defines the processing flag linked or no processing flag for most data types and purposes. In contrast, Facebook sets the processing flag with the most possible extent linked/tracked for most data types and purposes.
Conclusion
The analysis of iOS Privacy Manifests for the 2,000 most popular free iOS apps on the German Apple App Store in June 2024 reveals several insights about data collection practices and compliance with Apple’s guidelines.
About half of the apps’ Privacy Manifests declare the collection of the data types OtherDiagnosticData, CrashData or DeviceID, with various sensitive data types also being collected. However, the presence of apps with malformed or self-defined data types indicates inconsistencies in adhering to Apple’s guidelines. Additionally, some of the observed entries for data collection purposes violate Apple’s specification. This undermines the effectiveness of the privacy manifest and will hopefully be addressed with checks by Apple during the app review process.
However, even in this preliminary analysis, with a lot of data missing for SDKs, the benefit of the Privacy Manifests is visible: it is possible to inspect the relations between collected data types, purposes and components, showing that some apps collect sensitive data types for purposes not related to the app’s functionality.
The examination of specific data type-purpose tuples and their association with component providers revealed that certain SDK providers focus on targeted purposes, while others request data for a broader range of purposes. Notably, the processing flags for data types, particularly those flagged as tracked, pose significant privacy risks. The contrast between providers that primarily use the linked flag and those that extensively use the linked/tracked flag underscores the varying levels of privacy impact across SDK providers.
Staying informed about how apps collect and use your data is crucial in digital privacy. Addressing this, Appicaptor finds privacy-related app issues and extracts contents of the iOS Privacy Manifest. All information is correlated with results from static and dynamic analysis to provide enhanced privacy insights.
In today’s interconnected digital landscape, concerns related to data privacy are in focus. App developers, platform providers and smartphone OS manufacturers face pressure to prioritize user privacy and transparency. In response to these concerns, Apple has introduced an initiative: the iOS Privacy Manifest. This feature serves as documentation of how apps access, utilize and transmit privacy-related data. Appicaptor uses the information from the iOS Privacy Manifest, enabling users and companies to make well-informed decisions.
Understanding the iOS Privacy Manifest
iOS Privacy Manifests are attached to the app’s binary as a property list (plist) and hold information regarding the collection and usage of privacy-related data by the app and included third-party SDKs. Not providing valid information within the iOS Privacy Manifest may lead to notifications or rejection during the app submission process. Strict enforcement in the App Store is scheduled to start in May 2024.
Privacy Manifest Components
When it comes to populating an iOS Privacy Manifest, there are several key data structures that developers must adhere to:
Required Reason APIs: Apple introduced a class of APIs referred to as Required Reason APIs to address concerns regarding fingerprinting. Any app or SDK utilizing an API from this list must explicitly state a valid purpose for their usage.
Data Usage Categories: Developers must provide a breakdown of data usage categories in their privacy manifests. This includes specifying the data collected, whether it is linked to end-users’ identities, its usage for tracking purposes, and a list of reasons justifying the collection. Apple provides a predefined set of purposes that developers have to reference.
External Domain Usage: All external tracking domains used in the app or third-party SDKs must be listed in the iOS Privacy Manifest. This provides users with clear visibility into the presence of all domains utilized for tracking purposes. When tracking permission (via the App Tracking Transparency framework) is not granted by the user, network requests to these domains will be blocked by iOS.
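These components come together in a single PrivacyInfo.xcprivacy property list. The following is a minimal, illustrative manifest: the key names and the reason code are Apple’s, while the concrete values (domain, data type, purpose) are example content, not taken from a real app:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSPrivacyTracking</key>
    <true/>
    <key>NSPrivacyTrackingDomains</key>
    <array>
        <string>tracking.example.com</string>
    </array>
    <key>NSPrivacyCollectedDataTypes</key>
    <array>
        <dict>
            <key>NSPrivacyCollectedDataType</key>
            <string>NSPrivacyCollectedDataTypeDeviceID</string>
            <key>NSPrivacyCollectedDataTypeLinked</key>
            <true/>
            <key>NSPrivacyCollectedDataTypeTracking</key>
            <true/>
            <key>NSPrivacyCollectedDataTypePurposes</key>
            <array>
                <string>NSPrivacyCollectedDataTypePurposeAnalytics</string>
            </array>
        </dict>
    </array>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```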
Appicaptor: iOS Privacy Manifest as additional App Evaluation Source
Appicaptor extracts the contents of the iOS Privacy Manifest and presents them in a human-readable format. Beyond that, Appicaptor correlates the manifest’s content with privacy-related findings discovered in the app binary through static analysis. In this way, Appicaptor offers users a comprehensive understanding of how their privacy-related data is handled by apps:
Privacy Transparency: By presenting the iOS Privacy Manifest contents in a human-readable format, Appicaptor users gain transparency into the app’s data collection practices.
Informed Decision-Making: Armed with an in-depth understanding of an app’s privacy practices, Appicaptor users can make informed decisions about whether to engage with the app. Correlating the manifest contents with static analysis findings, combined with Appicaptor’s ability to define custom rulesets, empowers companies to assess the privacy risks associated with using an app and make choices aligned with their privacy preferences and concerns.
Enhanced Privacy Insights: Appicaptor’s static analysis enables an examination of the app’s codebase, revealing privacy insights in addition to what is disclosed in the iOS Privacy Manifest. By correlating these findings, users gain a more granular understanding of how their data is collected, used, and shared by the app.
Accountability and Trust Building: Appicaptor’s evaluation reports promote accountability and trust by ensuring alignment between stated privacy practices and actual implementation. By correlating the results, Appicaptor users can identify and address discrepancies between the documentation and implementation of privacy-related data usage.
Conclusion
In an era where trust between users and tech companies is important, transparency regarding data collection practices is non-negotiable. Using the iOS Privacy Manifest as an additional source of evaluation, Appicaptor enhances its app evaluation capabilities and gives users a more thorough understanding of how their data is handled by various apps. This integration reinforces Appicaptor’s commitment to empowering users to make transparent and informed decisions regarding app security.
Appicaptor’s solution contributes to the overall integrity of the digital ecosystem by promoting transparency, accountability, and responsible data usage. By empowering users with comprehensive privacy insights and encouraging developers to uphold best practices and keep their implementation in line with the documentation, Appicaptor’s approach supports a culture of trust and integrity that benefits everyone involved.
The ability to switch Apple IDs on iOS devices poses a security risk for Corporate-Owned, Personally Enabled (COPE) mobile environments. When users opt to retain Keychain entries while transitioning from a corporate Apple ID to a private Apple ID, the stored passwords are merged into the private Apple ID. This merging poses a significant challenge for organizations that need to maintain strict control over corporate credentials, as enterprise passwords can then be synced to all private devices using the private Apple ID.
Switching Apple IDs on an iOS device involves opening the “Settings” app, navigating to the Apple ID section and opting to sign out of the current Apple ID. This prompts a confirmation step, asking users whether they wish to keep a copy of their data on the device or delete it. This decision is crucial: if the user opts to keep the Keychain data stored on the device, this has implications for the passwords associated with the Apple ID that is signing out. After signing out, users can sign in with the new Apple ID. Once the necessary information is entered and the setup is complete, the iOS device becomes associated with the new Apple ID. The device syncs with the new account, applying the apps, data, and settings associated with it. However, apps installed with the previous Apple ID remain usable.
Keychain Merging and Its Effects
When logging into a website using Safari, a pop-up typically appears, asking whether the user would like to store the password for AutoFill. This simplifies adding accounts to the password store. Alternatively, when creating a new account, users receive an automatically generated strong password. In either case, accounts and passwords are added to the password store, and the passwords are stored in the Keychain. If the user chooses to keep the Keychain data on the device while switching Apple IDs, the stored passwords of the former and the newly configured Apple ID are merged on the device.
COPE environments prioritize the separation of work and personal data, but the Keychain merging when switching Apple IDs on iOS devices undermines this separation. This has to be considered in situations where employees use their personal Apple IDs alongside corporate profiles on the same device. If one of the Apple IDs involved in the switching process is the corporate ID and the other the private ID, corporate and personal credentials are merged, compromising the security posture of COPE mobile environments and/or the privacy of the user’s private credentials.
Lack of Mitigation Through MDM Restrictions
Currently, there are no Mobile Device Management (MDM) restrictions to mitigate the unintended consequences of password merging during Apple ID switching. The inability to prevent Keychain merging in COPE environments can have a significant impact on organizations. The only available measure is employee education and awareness regarding the risks associated with Apple ID switching and Keychain merging. Training programs should emphasize the importance of choosing secure options during account transitions and the potential impact on corporate and private data security.
Conclusion
The merging of Keychain entries during Apple ID switching on iOS devices poses a significant security challenge, particularly in COPE mobile environments. Organizations currently need to be proactive in addressing this issue through employee education and awareness programs. Collaboration between organizations, MDM providers and Apple, is needed to develop comprehensive solutions that safeguard corporate credentials and mitigate the risks associated with Apple ID switching.
The number of ways to bypass iOS data flow restrictions has meanwhile increased further, but Apple still does not bother to fix them. So the question is: how trustworthy are iOS MDM restrictions if Apple does not even close simple tricks to bypass them?
In many enterprises, there is the need to separate data of different origins or trust levels. In COPE (company owned, personally enabled) deployments this typically is the separation between private and business apps. Whereas in COBO (company owned, business only) deployments often data of supporting tools (such as travel management) should be separated from confidential enterprise data or other data with higher protection requirements.
Apple acknowledges this demand for controlling the data flow, “to keep it protected from both attacks and user missteps”, by providing the Mobile Device Management (MDM) settings “Managed Open In” and “Managed pasteboard”. However, we have already demonstrated that it is possible to circumvent these MDM restrictions with the pre-installed iOS Shortcuts App. Others already reported issues in 2020 with the possibility to use the “Quick Look” functionality of the native iOS email client for a bypass, also demonstrated in a video.
New Findings
The two newly discovered and reported issues relate to the iOS Files app. The first concerns the “Recents” tab: when a user has previously opened a document in a managed app and then opens an unmanaged document in the “Recents” tab of the Files app, the share action incorrectly presents a list of managed apps instead of unmanaged apps. The second newly reported issue enables the opposite bypass direction (managed to unmanaged app): deleting a managed document with the Files app and copying it into an unmanaged app folder (instead of restoring it) removes the MDM restriction and permits the user to open the document in the unmanaged app. This is also shown in our extended demonstration video, which now demonstrates five unfixed issues with the iOS data flow restrictions.
Reactions to Reported Issues
We again reported the two new issues to Apple, but also this time, Apple argued that “MDM profiles provide configuration management but do not establish additional security boundaries beyond what iOS and iPadOS have to offer.” A similar argument was also given for the reported MDM restriction bypass of SySS GmbH.
For all six bypass issues, there are currently no mitigations available. Considering our conversations with Apple, and the fact that issues reported in 2020 are still not fixed, there is no reason to believe that our findings will be fixed anytime soon.
Consequences for Enterprises
MDM restrictions are a vital instrument for securing the usage of iOS devices in enterprise environments. Only thanks to these restrictions can the requirements of enterprises be fulfilled for a device that is primarily developed for the consumer market. Even with effective MDM restrictions, it is still challenging for enterprises to keep up with the constantly changing and growing feature set of mobile operating systems. And often enough, MDM restrictions that would make new features acceptable for enterprises are released only major versions later, if at all (e.g., there is still no way to prevent a mix-up of private and enterprise passwords when switching between private and enterprise Apple IDs in COPE deployments, discussed in an upcoming article).
Experiencing the lack of support for fixing multiple reported issues in advertised security features raises the question: which other MDM restrictions are insecure? Following Apple’s argument, a patch for any other bypass of such MDM restrictions will be rejected, and probably has been rejected before, since MDM restrictions are by definition just configuration options. This is then not only a technical issue; it becomes an issue of trust in supporting enterprises with reliable solutions.
In consequence, the missing patches reduce the possibilities for a secure iOS enterprise deployment in COPE and COBO scenarios.
Apple promotes MDM restrictions that should prevent data flows between managed and unmanaged apps. These restrictions can be bypassed easily and effortlessly using an app that is pre-installed on every new iOS device: the Shortcuts App. This is a finding of our practical tests on iOS 17.1 that should concern all enterprises relying on these restrictions to use iOS devices in a way that complies with their enterprise policy for personally enabled usage.
Apple promotes the data flow control between managed and unmanaged apps as a solution for data separation between personal and corporate data, “to keep it protected from both attacks and user missteps“. To achieve this protection, many enterprises configure the following Mobile Device Management (MDM) restrictions for the managed iOS devices:
Managed pasteboard. In iOS 15 and iPadOS 15 or later, this restriction helps control the pasting of content between managed and unmanaged destinations. When the restrictions above are enforced, pasting of content is designed to respect the Managed Open In boundary between third-party or Apple apps like Calendar, Files, Mail, and Notes. And with this restriction, apps can’t request items from the pasteboard when the content crosses the managed boundary.
Allow documents from managed sources in unmanaged destinations. Enforcing this restriction helps prevent an organization’s managed sources and accounts from opening documents in a user’s personal destinations. This restriction could prevent a confidential email attachment in your organization’s managed mail account from being opened in any of the user’s personal apps.
With the first restriction, a user should not be able to paste pasteboard content to an unmanaged app if the content was copied from a managed app, or vice versa. The second restriction applies the same principle to documents. And in our tests, iOS did prevent such actions once the MDM restrictions had been activated for the test devices.
However, the iOS Shortcuts App, an integral part of iOS since version 13 that can be used to automate almost any task, can be abused to bypass these restrictions. The first shortcut (below on the left side) simply reads the clipboard and copies the content back to the global clipboard, which is then also synced with all other devices on which the user is logged in with the same Apple account. When the user activates this shortcut, it strips off the information that the original content was copied from a managed app, so iOS no longer prevents the paste action into an unmanaged app. Each shortcut can also be combined with a trigger action that automates its execution, so the bypass of iOS MDM restrictions can also be automated.
The second example shortcut uses the input of the share sheet to save the content of a document from a managed app. When this shortcut is used in the share sheet, iOS prompts for the name and location of the file and stores it without the information that the content originated from a managed app. This way, the document can also be opened from unmanaged apps. We created a video for both bypass methods to show the environment and the results in action. But these are only simple examples of the problem; there are many more possibilities to use actions of the Shortcuts App to bypass the iOS MDM restrictions.
Apparently, the Shortcuts App has privileges to access any data from managed apps, although it is not configured as a managed app through the MDM profile. It can access the data, but when the data is saved by the Shortcuts App, it is saved as if by an unmanaged app. The fix should therefore be to preserve the origin information of the content the Shortcuts App processes. The app could then still access content from managed apps, but the processed content would be marked as originating from a managed app, keeping the data separation intact.
We reported this flaw via Apple’s responsible disclosure process. However, the issue was rejected. We provided a detailed description, screenshots and videos, but also on a second try, in which we explained again that the bypass circumvents an advertised security feature of iOS, it was not accepted. We also reached out to Apple via feedbackassistant.apple.com, but did not receive an answer within two months. As the current Shortcuts App version 6.1.2 is still vulnerable, we decided to publish the findings to warn enterprises that they cannot rely on iOS data flow restrictions.
A short-term solution would be to delete the iOS Shortcuts App from the devices and block its installation via the MDM blacklist using the bundle ID “com.apple.shortcuts”, or to exclude devices with the Shortcuts App installed from the domain as non-compliant, depending on the functionality of the MDM solution used. However, in the VMware Workspace ONE UEM (AirWatch) MDM we tested, blacklisting the Shortcuts App neither prevented the installation nor marked the device as non-compliant.
Without proper blocking of the Shortcuts App, company data can currently be sent unintentionally to unmanaged apps through manually used or automated shortcuts and thus leave the company in an uncontrolled manner.
You can visit us at it-sa Expo&Congress, 10–12 October, in Hall 6, Booth 210 for further details.