Content Security Policy – Important Defense-in-Depth Missing in Many Apps

The Content Security Policy (CSP) defines restrictions for webviews to reduce the attack surface of applications for Cross-Site Scripting (XSS) and other attacks. The stricter the policy is configured, the fewer possibilities remain for attackers to inject malicious functionality in case of input validation flaws. This is especially important for hybrid apps, such as those built with Cordova, which use JavaScript to access granted OS functionality via a JavaScript bridge. If attackers can inject their own code into a hybrid app’s webview, they can alter how the app uses accessible data and sensors.

A strict CSP can prevent the execution of injected functionality by restricting the executable code fragments to resources that are less accessible to an attacker. However, strict restrictions also require alternative programming styles during app development, which are sometimes inconvenient for or unknown to the developer. Additionally, some external libraries or other existing code may not work out of the box with restrictive CSP settings, increasing the pressure to use a less restrictive policy.

With Appicaptor we inspect apps for configured CSPs to evaluate the restrictions with respect to the security of the app. A single missing sanitization of user input may render a hybrid app vulnerable to malicious hijacking of the app’s functionality. As such vulnerabilities can be introduced by a missing ‘S’ in the HTTPS scheme when loading external JavaScript resources, or by flawed sanitization of user input, hybrid apps should reduce these risks by applying a CSP as a ready-to-use second line of defense. However, 60.5% of the analyzed hybrid apps for iOS and 72.3% of the hybrid apps for Android do not define a CSP. Each analyzed app set consists of the 2,000 most used apps as ranked by the Android and iOS app stores.

One might consider a ratio of more than 25% of hybrid apps that do use a CSP as a second line of defense an already good starting point. However, analyzing the actual CSPs of these apps shows that many policies are weak. On iOS, 39.5% (Android: 27.6%) of the hybrid apps with a CSP deactivate important protective restrictions and could therefore, from an attacker’s perspective, be rated equal to apps with no defined CSP. So let’s see why.

This is a typical basic example of an observed CSP:

<meta http-equiv="Content-Security-Policy" content="default-src 'self' data: 'unsafe-inline'; img-src *;"> 

CSPs are constructed in a simple fashion: each directive section starts with the name of the directive it applies to and is terminated with a semicolon. In the example, the CSP starts a directive section with the default-src keyword. Its declared restrictions are used as a fallback for other directives that are not defined. For example, the script-src and object-src directives are not declared, so the default-src configuration including 'unsafe-inline' is applied to both of them. It is not applied to img-src, as that directive is declared in the example, so the browser does not fall back to default-src for it.

The intention here is to allow loading resources only from 'self', which is the source the HTML page containing the CSP entry was loaded from, to prevent the injection of external scripts for Cross-Site Scripting. Only images are allowed to be loaded from any source, declared by the star character in the img-src directive.

However, by declaring the keyword 'unsafe-inline', this CSP allows executing JavaScript code injected at any place in an HTML page, such as

User entered <script>doSomethingBad();</script> text 

or

<div onclick="doSomethingBad();">Click Me</div>

Allowing 'unsafe-inline' together with the data: scheme, which the default-src fallback in the example applies to object-src, allows an attacker to inject script code into <object>, <embed>, and <applet> elements:

<object data="data:text/html,<script>doSomethingBad();</script>"></object>

or by using JavaScript inside an SVG, embedded as a data: URL:

<embed src="data:image/svg+xml,<svg onload='doSomethingBad();' xmlns='http://www.w3.org/2000/svg'></svg>" type="image/svg+xml" width="1" height="1" /> 

Fortunately, in most common cases the code injected via the data: scheme is treated as a separate frame. It cannot interact with the JavaScript bridge of a hybrid app, which is located in the parent index.html page, because the Same-Origin Policy prevents access to it. However, attackers can still use this for UI manipulation attacks, tricking users into disclosing credentials or other critical data in crafted dialogs.

Besides many other shortcomings that observed CSPs have regarding inline XSS protection, about 10% of the analyzed hybrid apps (iOS / Android) do not properly restrict the sources for loading scripts. This is caused by using the star character as a wildcard for the scheme. This way, an attacker would be able to inject a script element that loads malicious code from any domain. The same applies if the scheme https: is used instead. This configuration option seems to be confusing for developers, as it might look as if it would just prevent plain HTTP access. However, if used, it allows access to any domain via HTTPS. So when developers use https: together with a list of domain names they want to allow, this domain list does not have any effect, as all domains are already allowed:

default-src 'self' data: https: ssl.gstatic.com maps.google.com;

Such ineffective domain restrictions were observed in about 7% (iOS / Android) of the analyzed hybrid apps, which might give a false sense of security. Instead, the developer would need to specify the scheme together with the domain name (e.g. https://ssl.gstatic.com) to restrict access to the listed domains and allow this access only via HTTPS.
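
With that correction, and without the problematic data: scheme discussed above, the policy from the example could look like this:

default-src 'self' https://ssl.gstatic.com https://maps.google.com;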

Unfortunately, in about 5% (iOS / Android) of the analyzed hybrid apps, parsing errors were detected that can prevent the intended protection. In general, such parsing errors could lead to stricter or less strict restrictions, depending on which part of the CSP is affected. However, accidentally created stricter policies are more likely to cause functional issues than accidentally created laxer ones, so the chance is much higher that such issues are detected in functional testing. For example, we observed parsing errors that prevent some or all directives from being applied. In all these cases, the errors led to a less strict CSP, e.g. by omitting a good strict default-src directive that was then missing as a secure fallback for other non-specified directives.
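
To illustrate with a constructed example (not taken from a specific app), a single missing semicolon is enough to make a directive disappear:

img-src * default-src 'self';

Here, default-src and 'self' are parsed as additional source expressions of img-src instead of starting a new directive. The resulting policy contains no default-src directive at all, so every directive that would rely on its fallback remains unrestricted.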

CSPs can have a strong impact on app security, and they are especially important for hybrid apps. The analysis results show that it is important to check whether apps use a CSP and that the CSP itself needs to be evaluated carefully. In case of doubt, developers should check their CSP with a free tool such as the Google CSP Evaluator to better understand the impact of the directives and to prevent parsing flaws.
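
As an illustration (the API domain is a placeholder, not a recommendation for any specific app), a reasonably strict baseline policy for a hybrid app could look like this:

<meta http-equiv="Content-Security-Policy" content="default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'; connect-src https://api.example.com;">

Starting from default-src 'none' and explicitly allowing only what the app needs keeps the fallback strict for every directive that is not declared.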

Concerns about Apache Log4j in Android Apps

The recently published CVEs for the Apache Log4j Java logging library raise the question of whether Android apps suffer from the same fatal exploitability as the huge number of affected server and desktop applications.

In a first response, we checked for the presence of Apache Log4j in the 2,000 most popular Android apps monitored by Appicaptor and in the set of apps scanned by our Appicaptor customers, to determine which Log4j versions are contained. Our analysis shows that currently less than 1% of the apps contain Log4j 1.x and none of them contain Log4j 2.x.

Regarding the current JNDI CVEs CVE-2021-4104, CVE-2021-44228, CVE-2021-45046 and CVE-2021-44832, there is further positive news: our manual tests proved that the classes required for a JNDI lookup are not available in Android. So even if an app contains a Log4j version that is vulnerable to JNDI lookups and an attacker manages to trigger a malicious JNDI lookup, the app would not be able to perform the lookup and could, at most, crash with a ClassNotFoundException. Therefore, no remote code execution is possible in this case.

An older vulnerability, tracked as CVE-2019-17571, is related to a server socket for receiving log messages. The SocketServer class in Log4j 1.x is vulnerable to deserialization of untrusted data, which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget while listening to untrusted network traffic for log data. The additionally tracked CVE-2020-9488 for Log4j 1.x describes an improper validation of certificates with host mismatch in the Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack, which could leak any log messages sent through that appender. However, usage of both aforementioned functionalities is uncommon in released mobile apps, so the overall risk of exploitation is considered low.

Nevertheless, as Log4j 1.x is no longer maintained, it should be replaced with an actively maintained alternative logging library that does not contain known vulnerabilities.

Known TwitterKit Vulnerability – Still an Issue?

Nearly 18 months ago, we published in our Appicaptor blog a vulnerability showing that the Twitter Kit framework for iOS does not properly validate the TLS certificate. This vulnerability can be utilized for man-in-the-middle attacks.

Twitter stated that there will be no fixed version of the vulnerable library, because it is considered deprecated and no longer supported. Therefore, developers should have taken action and removed TwitterKit from their apps.

Every month, Appicaptor evaluates the IT security quality of thousands of Android and iOS apps. So one might ask: do the German Top 2,000 iOS apps still utilize the vulnerable Twitter Kit library version, and is there a visible trend? The following chart depicts, for each month, the number of apps among the 2,000 most popular apps in the iOS App Store that are prone to the vulnerability.

The chart shows that the number of apps with the vulnerable library decreases slowly but quite steadily throughout the year, reducing the risks for users. Although there is quite some fluctuation in the considered top list, a clear trend is visible, even if it took much longer than expected. Nevertheless, from our point of view, none of the most popular apps should contain such a massive vulnerability. We therefore advise again that app developers switch to alternative APIs.

iOS: 40% Critical Log Statement Usage in Apps

Developers commonly need log statements in their apps to track down problems by printing out information about the current program state. However, this can also lead to serious information disclosure to third parties, as developers still use the old NSLog statement in about 40% of the Top 2,000 German iOS apps. Many developer sites state that information logged with NSLog is not persisted on the device and that its usage is therefore not critical. However, that is not correct, as we demonstrate in the following for current iOS devices, with direct impact on users’ privacy.

Over the years, Apple has changed a lot under the hood of iOS. Likewise, the logging mechanism has changed in multiple aspects. One major change was the introduction of unified logging with iOS 10, which provides log levels, information hiding for sensitive entries and many other configuration capabilities. However, these new features are only usable if the new os_log macro is used.

When using NSLog, the log messages are persistently stored with the default log level for a certain time, which we verified with iOS 12.4, 13.3 and 13.4 on non-jailbroken devices. Depending on the usage intensity of the iOS device, the stored log messages can go back days or even months.
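
A minimal Swift sketch (message and payload are hypothetical) of the kind of statement that produces the persisted entries shown further below:

import Foundation

// Hypothetical example: logging a JSON payload with location data via NSLog.
// On the tested iOS versions this message ends up in the unified log with
// the default log level and is persisted to disk for a certain time.
let geoJSON = "{\"geoData\": {\"latitude\": 49.0, \"longitude\": 8.0}}"
NSLog("My NSLog example GPS: %@", geoJSON)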

Log entries on these iOS devices are stored system-wide in the directory /var/db/diagnostics/Persist in files of the tracev3 binary format, which can be made readable again, e.g. with the OSX log tool or platform-independently with UnifiedLogReader. The stored database files are protected by the iOS DataProtection class NSFileProtectionCompleteUntilFirstUserAuthentication with the device passcode until the user’s first logon, and can only be read by the administrative user root.

This means that in a lost-device scenario for an iOS device without a passcode, these logging outputs can be read directly via USB using the iOS sysdiagnose function. If a passcode is set, the passcode is required to read the logging outputs.

However, since users are often asked to send sysdiagnose data to Apple or app developers (see instructions e.g.: https://faq.pdfviewer.io/en/articles/1458505-ios-how-to-send-a-sysdiagnose), in this case third parties will get access to the log messages of all utilized apps (within the persisted time frame).

Among many other debug information, the transmitted file sysdiagnose_[date]_iPhone_OS_[device].tar.gz contains the file system_logs.logarchive. It is compressed and needs to be converted first to make use of it. This can be done quite easily on OSX.

Figure: content of an iOS sysdiagnose file, containing information stored by NSLog statements

The file system_logs.logarchive can be viewed on OSX with the log command:

log show system_logs.logarchive --info --debug > logs.txt

To use the UnifiedLogReader instead, one first has to extract the files from system_logs.logarchive to a folder and start the Python script inside this folder, e.g. on Windows systems, like this:

python [path_to]\UnifiedLogReader.py .\ .\timesync\ .\Persist [output_folder]

In the output file logs.txt, one can search for the app binary name to find the related log messages. In the following example the binary is called TestApp:

2020-03-25 15:18:19.707346+0100 0x517db  Default  0x0  737 0 TestApp: My NSLog example GPS: {"geoData": {"latitude": 49.xxx, "longitude": 8.xxx, "radius": 2.818104}, "filters": {}, "exclude": []}
2020-03-25 15:18:25.635420+0100 0x517db  Default  0x0  737 0 TestApp: My NSLog example GPS: {"geoData": {"latitude": 49.xxx, "longitude": 8.xxx, "radius": 5.515101}, "filters": {}, "exclude": []}
2020-03-25 15:18:28.657474+0100 0x517db  Default  0x0  737 0 TestApp: My NSLog example GPS: {"geoData": {"latitude": 49.xxx, "longitude": 8.xxx, "radius": 10.625031}, "filters": {}, "exclude": []} 

In these log messages, we often find GPS positions along with email addresses, generated encryption keys, full credit card information and much more user-entered content. Even for apps that primarily do not store sensitive data, the log can reveal sensitive information such as sensitive app names, their installation dates, and how and what was used inside the apps.

From a user’s perspective, it should now be clear:

  • Do not send sysdiagnose data to anyone!
  • Deny access for Apple (see https://support.apple.com/en-us/HT202100); however, this does not disable the logging, nor does it disable the possibility to create sysdiagnose files.
  • Use a good passcode to make unauthorized access to these files harder.

But the best protection is to use apps that don’t store sensitive data in log files!

So, run a test for yourself and inspect your sysdiagnose file to learn more about your apps and the things they store.

Developers should take a look at iOS Unified Logging with the os_log macro. It can be used to programmatically enable persistent storage only for cases when it is needed for remote debugging (if that is necessary at all). For all other cases it can be configured to use only console output, preventing data leakage via persistent log files.
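
A minimal Swift sketch (subsystem and category are hypothetical) of unified logging with os_log; debug-level messages are not persisted by default, and dynamic values are redacted unless explicitly marked public:

import os.log

// Hypothetical subsystem/category for illustration.
let log = OSLog(subsystem: "com.example.testapp", category: "network")

// %{public}@ is stored as-is; %{private}@ (the default for dynamic values)
// is redacted as <private> in the persisted unified log. Messages with
// type .debug are kept in memory only, unless persistence is enabled.
os_log("Request to %{public}@ for user %{private}@", log: log, type: .debug, "api.example.com", "alice@example.com")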

Vulnerable Library Warning: TwitterKit for iOS

The Twitter Kit framework through 3.4.2 for iOS does not properly validate the TLS certificate for api.twitter.com. This is a finding discovered with our static binary analysis and reported to Twitter.

There will be no fixed version, as this library is no longer supported by Twitter, but the vulnerable library was still found in many apps. It is urgently advised that the affected app developers switch to alternative APIs.

Such issues are quite common, which illustrates the need to check for vulnerable or outdated third-party code.

What makes this issue a bit special is the way the developers broke the validation of TLS certificates. Apparently, they wanted to increase security by implementing public key pinning of trusted root certificate authorities (CAs), such as VeriSign, DigiCert and GeoTrust. So they created the following array with 21 entries of public key hashes for the CAs:

"1a21b4952b6293ce18b365ec9c0e934cb381e6d4",  // "VERISIGN_CLASS3_G2"
"2343d148a255899b947d461a797ec04cfed170b7",  // "VERISIGN_CLASS1"
"5519b278acb281d7eda7abc18399c3bb690424b5",  // "VERISIGN_CLASS1_G3"
"1237ba4517eead2926fdc1cdfebeedf2ded9145c",  // "VERISIGN_CLASS2_G2"   
"5abec575dcaef3b08e271943fc7f250c3df661e3",  // "VERISIGN_CLASS2_G3"    
"22f19e2ec6eaccfc5d2346f4c2e8f6c554dd5e07",  // "VERISIGN_CLASS3_G3"
"ed663135d31bd4eca614c429e319069f94c12650",  // "VERISIGN_CLASS3_G4"
"b181081a19a4c0941ffae89528c124c99b34acc7",  // "VERISIGN_CLASS3_G5"
"3c03436868951cf3692ab8b426daba8fe922e5bd",  // "VERISIGN_CLASS4_G3"
"bbc23e290bb328771dad3ea24dbdf423bd06b03d",  // "VERISIGN_UNIVERSAL"
"c07a98688d89fbab05640c117daa7d65b8cacc4e",  // "GEOTRUST_GLOBAL"
"713836f2023153472b6eba6546a9101558200509",  // "GEOTRUST_GLOBAL2"
"b01989e7effb4aafcb148f58463976224150e1ba",  // "GEOTRUST_PRIMARY"
"bdbea71bab7157f9e475d954d2b727801a822682",  // "GEOTRUST_PRIMARY_G2"
"9ca98d00af740ddd8180d21345a58b8f2e9438d6",  // "GEOTRUST_PRIMARY_G3"
"87e85b6353c623a3128cb0ffbbf551fe59800e22",  // "GEOTRUST_UNIVERAL"
"5e4f538685dd4f9eca5fdc0d456f7d51b1dc9b7b",  // "GEOTRUST_UNIVERSAL2"
"d52e13c1abe349dae8b49594ef7c3843606466bd",  // "DIGICERT_GLOBAL_ROOT"
"83317e62854253d6d7783190ec919056e991b9e3",  // "DIGICERT_EV_ROOT"
"68330e61358521592983a3c8d2d2e1406e7ab3c1",  // "DIGICERT_ASSUREDID_ROOT"
"56fef3c2147d4ed38837fdbd3052387201e5778d",  // "TWITTER1"

On every new connection, TwitterKit for iOS checks in the method evaluateServerTrust whether the received certificate chain contains a certificate with a fitting public key from the list above. This way, certificates for api.twitter.com issued by possibly untrusted CAs should be blocked. However, this approach lacks a very important verification: the domain name of the leaf certificate is not verified by iOS, as TwitterKit for iOS implements its own delegate method for its public key pinning functionality. In this case, iOS only verifies that the certificate chain is valid regarding signatures. All other checks have to be performed by the delegate method, which provides the flexibility for alternative verification methods. A simple fix would have been to additionally call the iOS function SecTrustEvaluate and utilize its result value to reject certificates issued for other domains.
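
A minimal Swift sketch of such a fix (simplified; the additional pinning check against the CA key hashes is omitted): attach an SSL policy bound to the expected hostname and let the system evaluate the chain:

import Security

// Sketch: evaluate server trust including hostname verification, roughly
// what the system does when no custom delegate bypasses it. The public key
// pinning check would be performed in addition to this evaluation.
func isServerTrustValid(_ serverTrust: SecTrust, host: String) -> Bool {
    // SSL server policy bound to the expected hostname (e.g. api.twitter.com).
    let policy = SecPolicyCreateSSL(true, host as CFString)
    guard SecTrustSetPolicies(serverTrust, policy) == errSecSuccess else {
        return false
    }

    var result = SecTrustResultType.invalid
    guard SecTrustEvaluate(serverTrust, &result) == errSecSuccess else {
        return false
    }
    // .unspecified: chain is valid without an explicit user trust decision.
    return result == .unspecified || result == .proceed
}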

Because of the missing domain name verification, any valid certificate chain containing a certificate with a public key hash from that list is accepted by the app. An attacker with a valid certificate for a domain under their control, issued by one of these CAs, can use this certificate for man-in-the-middle attacks against apps communicating with api.twitter.com via the Twitter Kit for iOS. As the implementation does not check the position inside the chain, the matching public key could also be in the middle of the chain, such as in the case of an intermediate certificate.

We used a matching legitimate certificate, issued by DigiCert for a domain under our control, to verify the impact of the vulnerability. We redirected the traffic for api.twitter.com to our server, which answered the requests with our own certificate. The received content was logged and transmitted to the ‘real’ Twitter servers; the servers’ responses were also logged and transmitted back to the app. However, the login process for Twitter involves a WebView, which does not use the vulnerable pinning functionality and therefore would not accept our certificate. As the WebView loads its content via the same domain name, we had to distinguish TLS connections based on differences in the ALPN extension of the TLS Client Hello and route WebView connections without interception, to create a fully working proof-of-concept attack.

During the Login with Twitter process, our man-in-the-middle proxy recorded for a vulnerable news app the OAuth 1.0 oauth_token_secret together with the authorized oauth_token. This enabled us to fully use the provided Twitter API with these long-term secrets. Attackers could perform actions like changing the content of the profile, creating fake tweets and direct messages, or abusing the account to push tweets via fake likes. It would also be possible to read private direct messages sent or received within the last 30 days. We could neither retrieve the password nor set it to a known value, so an attacker could not use the vulnerability to lock a user out of their account. However, by changing the Twitter password, a victim would also not be able to invalidate the sniffed OAuth tokens. For this, it is required to revoke the app’s authorization within the Twitter settings.

Further, on every app start the vulnerable app checks the validity of the Twitter account by invoking the Twitter API account/verify_credentials.json. If the credentials are valid, the response contains detailed information about the victim’s Twitter profile, such as ID, name, location and last activities. As the response can be read in our attack scenario, this information can be used to track the victim or to dynamically create targeted phishing attacks.

We will demonstrate the vulnerability and its detection at the it-sa 2019 fair in Nuremberg, Germany, at Hall 9, Booth 234.