Last December, Log4Shell shortened the nights of many people in the JVM world. Worse, to use an earthquake analogy, the initial quake was followed by many aftershocks. I immediately made the connection between Log4Shell and the Security Manager. At first, I didn’t want to write about it, but I’ve received requests to do so, and I couldn’t walk away.
<div class='jekyll-twitter-plugin'><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Hey <a href="https://twitter.com/nicolas_frankel?ref_src=twsrc%5Etfw">@nicolas_frankel</a>, isn't the <a href="https://twitter.com/hashtag/Log4j?src=hash&ref_src=twsrc%5Etfw">#Log4j</a>-Exploit the perfect argument against deprecation of the Java SecurityManager?!</p>— Johannes Rabauer (@JohannesRabauer) <a href="https://twitter.com/JohannesRabauer/status/1471012592495865860?ref_src=twsrc%5Etfw">December 15, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
As a reminder, the Oracle team deprecated the Security Manager in Java 17. One of the arguments it based its decision on is that it was initially designed to protect against applets. Applets were downloaded from the Internet, so they had to be considered untrusted code. Hence, we had to run them in a sandbox.
Though they never said so, there’s an implicit consequence of this statement: because applets are now deprecated, we run only trusted code. Ergo, we can let go of the Security Manager. This reasoning is plain wrong, and I’ll explain why in this post.
The premise that the code running inside your infrastructure can be trusted is dangerous, whether on-premises or in the cloud. Let me enumerate some arguments that support this claim.
Libraries can’t be trusted
Wise developers don’t reinvent the wheel: they use existing libraries and/or frameworks.
Obviously, from a security point of view, users of such third-party code should audit it carefully, looking for flaws: both bugs and vulnerabilities.
In two decades in the industry, I’ve never seen such an audit happen.
One could argue in favor of custom code. Unfortunately, it doesn’t solve anything: custom code suffers from the same bugs and vulnerabilities. Worse, it doesn’t receive the same scrutiny as widespread libraries, so it’s unlikely that security researchers will spend their free time hunting for its issues.
Builds can’t be trusted
Imagine that you have all resources necessary to audit the code - time, money, and skills. Imagine further that the audit reveals nothing fishy. Finally, imagine that the audit’s conclusion is 100% reliable.
The issue is that nothing guarantees that the published JAR is the result of building that source code, even if the build is public. A malicious provider could replace the genuine JAR with another one.
Identities can’t be trusted
A provider can sign a JAR to guarantee it’s genuine. The signature is based on asymmetric cryptography:
- The provider signs the JAR with its private key
- It derives the public key from the private key and publishes it
- Anyone can verify the signature with the public key and thus check that the provider signed the JAR
Hence, anybody can verify that a JAR comes from a specific provider.
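The scheme above can be sketched with the JDK's standard cryptography API; this is a minimal illustration of sign-with-private, verify-with-public, not the actual JAR-signing machinery:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sketch of the asymmetric scheme behind JAR signing:
// sign with the private key, verify with the public one.
public class SignDemo {

    public static void main(String[] args) throws Exception {
        // The provider's key pair: the private half stays secret,
        // the public half is distributed to users
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] content = "the JAR's content".getBytes();

        // The provider signs the content with its private key
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(content);
        byte[] signature = signer.sign();

        // Anyone can verify the signature with the public key
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(content);
        System.out.println("verified: " + verifier.verify(signature));
    }
}
```

If the content is tampered with after signing, `verify` returns `false`: that's the guarantee a signed JAR offers.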
The JDK provides the jarsigner tool to sign JARs. Unfortunately, most libraries don’t use it.
As an example, I’ve verified the signatures of twelve common dependencies. Among those JARs, only a single one is signed with jarsigner; if you’re interested, it’s Eclipse Collections.
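You can run the same check yourself: a JAR signed with jarsigner carries a signature file (`*.SF`) under `META-INF`. Here's a small sketch that looks for one; the demo builds a throwaway unsigned JAR so it's self-contained:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class SignatureCheck {

    // A JAR signed with jarsigner contains a META-INF/*.SF signature file
    static boolean isSigned(Path jar) throws IOException {
        try (JarFile jarFile = new JarFile(jar.toFile())) {
            return jarFile.stream()
                    .map(JarEntry::getName)
                    .anyMatch(name -> name.startsWith("META-INF/") && name.endsWith(".SF"));
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a minimal unsigned JAR for demonstration purposes
        Path jar = Files.createTempFile("demo", ".jar");
        try (OutputStream out = Files.newOutputStream(jar);
             JarOutputStream jos = new JarOutputStream(out)) {
            jos.putNextEntry(new JarEntry("hello.txt"));
            jos.write("hello".getBytes());
            jos.closeEntry();
        }
        System.out.println(jar + " signed: " + isSigned(jar));
    }
}
```

Alternatively, `jarsigner -verify some.jar` performs the full check, including the validity of the certificate chain.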
However, to counter supply-chain attacks, artifact repositories have started to require signed artifacts. For example, Sonatype requires a PGP signature for each uploaded file: the POM, the JAR, the sources JAR, the Javadoc JAR, etc.
One can verify the signature with Maven:
mvn org.simplify4u.plugins:pgpverify-maven-plugin:show -Dartifact=com.zaxxer:HikariCP:5.0.0
It outputs the following:
Artifact:
    groupId:     com.zaxxer
    artifactId:  HikariCP
    type:        jar
    version:     5.0.0

PGP signature:
    version:     4
    algorithm:   SHA256withRSA
    keyId:       0x4CC08E7F47C3EC76
    create date: Wed Jul 14 04:49:52 CEST 2021
    status:      valid

PGP key:
    version:     4
    algorithm:   RSA (Encrypt or Sign)
    bits:        2048
    fingerprint: 0xF3A90E6B10E809F851AB4FC54CC08E7F47C3EC76
    create date: Wed Sep 18 02:51:23 CEST 2013
    uids:        [Brett Wooldridge (Sonatype) <[email protected]>]
However, none of this amounts to much. Signing doesn’t assert the identity of the provider: it only tells you that the artifact was signed by a private key associated with the referenced email. Nothing prevents a malicious actor from creating another key pair with the same email, or a similar-looking one.
Features can’t be trusted
At this point, I think the picture looks pretty gloomy. But it’s even worse than that. None of the above explains the Log4J vulnerability. The core reason is that Log4J provides features that most developers neither need nor use.
I don’t want to delve into too much detail, as it has already been explained in many places. Suffice it to say that Log4J provides lookups: a lookup is an integration with another system that allows enriching the log beyond the mere message. For example, the Spring Boot lookup allows reading Spring Boot properties, such as the active profiles, to enrich the log.
Among all the available lookups, some seem a bit fishy: environment variables, system properties, or even JNDI. The latter is the root cause of the Log4J vulnerability.
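Lookups use the `${prefix:name}` syntax. An illustrative Log4J 2 configuration fragment (the appender name and pattern are my own example) shows a legitimate use, the environment-variable lookup:

```xml
<!-- Illustrative log4j2.xml fragment: the env lookup injects the -->
<!-- HOSTNAME environment variable into every log line. -->
<Appenders>
  <Console name="Console">
    <PatternLayout pattern="%d [${env:HOSTNAME}] %p %c - %m%n"/>
  </Console>
</Appenders>
```

In vulnerable versions, lookups were also expanded inside logged messages themselves, so an attacker-controlled string of the form `${jndi:...}` was enough to trigger a remote JNDI lookup - hence Log4Shell.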
This kind of hidden feature is not specific to Log4J. For example, I happen to know there’s a Swing-based GUI administration application inside the H2 database driver, and I learned about it just by chance.
The problem is that developers use a library for its core capability, e.g., logging. If one stops at that, one will never know all the library’s capabilities. Hence, one will be surprised when the library does something it was not supposed to do, e.g., read from a remote JNDI resource tree.
The JVM can’t be trusted
I admit the section’s title is misleading, but I couldn’t find a better one to fit the series. This section is a follow-up to the previous one, this time applied to the JVM itself.
The JVM provides tons of features, of which you probably use only a handful. The most blatant problem is the Attach API. This API, available since Java 1.6, allows a JVM to update bytecode already loaded into another JVM. Yes, you read that correctly: you can change the bytecode of a running application. Worse, if you restart the patched JVM, the original bytecode is loaded again, leaving no trace of the change.
It’s a cool feature if you want to quickly monkey-patch a fix in production. However:
- Most people don’t use it
- Most people don’t know about it
- The feature needs to be explicitly disabled. It’s on by default.
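To see the attack surface for yourself, the first step of any attach-based tooling is enumerating attachable JVMs; a minimal sketch using the JDK's `com.sun.tools.attach` API (part of the `jdk.attach` module, available on a full JDK):

```java
import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;

import java.util.List;

// Lists every JVM on this machine that accepts attach requests.
// Any process able to run this code can go further and load an
// agent into those JVMs, rewriting their loaded bytecode.
public class AttachDemo {

    public static void main(String[] args) {
        List<VirtualMachineDescriptor> vms = VirtualMachine.list();
        System.out.println("Attachable JVMs: " + vms.size());
        for (VirtualMachineDescriptor vmd : vms) {
            System.out.println("  " + vmd.id() + " " + vmd.displayName());
        }
    }
}
```

The mechanism can be turned off per JVM with the `-XX:+DisableAttachMechanism` startup flag.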
May I suggest that the first thing you do tomorrow is to check your infrastructure and disable it?
The Security Manager could be trusted
I hope that at this point, you understand the problem. A lot of code that you’re running can’t be trusted. Worse, I’m only considering regular applications: software built on a plugin architecture runs untrusted code by definition.
The Security Manager was a JVM component that allowed you to define a whitelist of what an application could do, regardless of its code. It solved all the above issues: you could run any code while only allowing it to do a limited number of things.
The Security Manager came with several drawbacks, chief amongst them that configuring permissions was a chore. However, tools exist to generate the policy file. Since they are automated, you still need to review the discovered permissions carefully; even so, it’s easier to read through ~500 lines of configuration than 10k or 100k lines of code.
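For illustration, a policy file granted explicit permissions per code base; everything not granted was denied. The paths, hosts, and property names below are hypothetical:

```text
// Hypothetical app.policy: a whitelist of what this code base may do.
grant codeBase "file:/opt/app/lib/acme-app.jar" {
    // allow outbound connections to the database host only
    permission java.net.SocketPermission "db.internal:5432", "connect";
    // allow reading (not writing) application properties
    permission java.util.PropertyPermission "acme.*", "read";
};
```

The JVM was then started with `-Djava.security.manager -Djava.security.policy==app.policy`; the double equals sign replaces the default policy instead of extending it.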
Since many didn’t know about these tools, few used the Security Manager. But when it was used, it was very beneficial. To prove my claim, you can read this post or jump to its conclusion: though Elasticsearch embeds a vulnerable Log4J version, it’s not susceptible to Log4Shell!
Security is a Non-Functional Requirement. NFRs don’t bring any competitive advantage and cost money; in short, they divert budget from business requirements. That’s at least how most business departments see it.
I think we should handle security through the lens of risk assessment, which first requires listing all possible risks. I’m afraid the deprecation of the Security Manager just added several lines to that list, all linked to running untrusted code.
The debate regarding the deprecation of the Security Manager has not been a civil one. Since I took a stand against the deprecation, I’ve been publicly attacked, even to the point of plain bullying. Other voices that backed me up received similar treatment.
I don’t expect reactions to this post to be any different. However, I feel I have to tell the community what happened and what we lost.
Thanks to Peter Firmstone and Geertjan Wielenga for their help in reviewing this post.