David Hopwood <[email protected]>
David Hopwood Network Security
ActiveX and Java have both been the subject of press reports describing security bugs in their implementations, but there has been less consideration of the security impact of their different designs. This paper asks the questions: "Would ActiveX or Java be secure if all implementation bugs were fixed?", and if not, "How difficult are the remaining problems to overcome?".
The latest copy of this paper is available at
http://www.users.zetnet.co.uk/hopwood/papers/compsec97.html
It will be updated to include changes in the Java and ActiveX security models since early October 1997.
Java and ActiveX both involve downloading and running code from a world-wide-web site, and therefore both introduce the possibility that this code will perform a security attack on the user's machine.
Downloading and running an executable file can, of course, also be done manually. The difference is that reading web pages happens much more frequently, and there is a perception on the part of users, rightly so, that it is a low-risk activity. Users expect to be able to safely read the pages of complete strangers or of business competitors, for example. Also, some combined browser and e-mail clients treat HTML e-mail in the same way as a web page, including running any code that it references.
In this paper we will use the term "control" for any downloadable piece of code that is run automatically from an HTML page, but is not a script included in the text of the page itself. This includes ActiveX controls and Java applets. To determine who can carry out an attack, we need to consider who is able to choose which control is downloaded (treating any modification of the code as the choice of a new control).
Note that neither Java nor ActiveX prevents an HTML page and the control it refers to from being on different sites.
There are two basic mechanisms that can be used to limit the risk to the user:
- authentication: establishing who provided a control, and using that identity to decide whether it should be run, and with what privileges;
- verification: restricting what downloaded code is able to do, regardless of who provided it.
ActiveX uses only the first approach, to determine whether or not each control is to be run at all. Java (as currently implemented in Netscape Communicator and HotJava) always uses verification, and optionally also uses authentication to allow the user to decide whether to grant additional privileges.
The consequences of a successful attack generally fall into the categories discussed below.
Many companies rely exclusively on a firewall to prevent attacks from the Internet: in a large proportion of business network configurations, the firewall is the only line of defence against intruders, and security on the internal network is relatively lax. Any means of bypassing the firewall (that is, any way for a control to make direct socket or URL connections to internal machines) is therefore a serious problem.
Note that if a company has a policy of disallowing controls and scripting completely, that policy is extremely difficult, and perhaps impossible, to enforce using the firewall itself.
Firewalls that claim to be able to filter controls attempt to do so by stripping the HTML tags associated with Java, ActiveX, and scripting (APPLET, OBJECT, and SCRIPT). However, this will only work reliably if the firewall's HTML parser behaves in exactly the same way as the browser's parser. Any means of encoding the HTML in a way that is not recognised by the firewall, constructing it on the fly, or copying it to a local file, can be used to bypass this filtering. Also, all protocols need to be considered (HTTP, HTTPS, FTP, NNTP, gopher, e-mail including attachments, etc.).
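To make the parser-mismatch problem concrete, the following is a minimal sketch (not taken from any real product; the class and pattern are illustrative only) of the kind of tag-stripping such a firewall might attempt.

    import java.util.regex.Pattern;

    // A deliberately naive HTML filter of the kind described above: it removes
    // literal APPLET, OBJECT and SCRIPT tags from pages passing through the
    // firewall. It is defeated by any encoding it does not anticipate, by
    // script code that assembles the tag at load time in the browser (e.g.
    // document.write('<APP' + 'LET ...>')), and by protocols such as HTTPS
    // whose content the firewall cannot inspect at all.
    public class NaiveTagFilter {
        private static final Pattern BLOCKED_TAGS = Pattern.compile(
                "</?(applet|object|script)[^>]*>", Pattern.CASE_INSENSITIVE);

        public static String strip(String html) {
            return BLOCKED_TAGS.matcher(html).replaceAll("");
        }

        public static void main(String[] args) {
            // Caught: the literal tag is removed.
            System.out.println(strip("<APPLET code=\"Evil.class\"></APPLET>"));
        }
    }
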
Therefore, if there is a policy to ensure that controls are disabled, this should always be set in each browser's security options, on each machine.
There are obvious privacy and confidentiality problems with being able to read any file on the user's machine. In addition, some operating systems have configuration files that contain information critical to security (for example, /etc/passwd on a Unix system without shadow password support). In these cases the ability to read arbitrary files can lead fairly directly to a more serious attack on the system or internal network.
If it is possible to write files in arbitrary directories on a user's system, then it is easy to use this to run arbitrary code (for example, the code can be added to a "trusted" directory, such as one specified in Java's CLASSPATH environment variable). The types of attack that are possible are limited only by what the user's computer can do. For instance, the Chaos Computer Club demonstrated an ActiveX control that checks whether the "Quicken" financial application is installed, and if so, adds an entry to the outgoing payments queue.
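As a small illustration of why a writable "trusted" directory is dangerous, the following sketch (the class name TrustedHelper is hypothetical) shows that the class loader simply takes the first definition it finds on the CLASSPATH and reports where that definition came from; a class written by an attacker into an earlier, writable directory therefore silently replaces the intended one.

    // Prints the location from which a class was actually loaded. If an
    // attacker can write a class named "TrustedHelper" (hypothetical name)
    // into a directory that appears earlier on the CLASSPATH, that copy is
    // the one that will be found and run.
    public class WhichClass {
        public static void main(String[] args) throws Exception {
            Class<?> c = Class.forName("TrustedHelper");
            System.out.println("Loaded " + c.getName() + " from "
                    + c.getProtectionDomain().getCodeSource().getLocation());
        }
    }
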
The approach currently taken by both Java and ActiveX to authenticating code is to sign it using a digital signature scheme. Digital signatures use public-key cryptography; each signer has a private key, and there is a corresponding public key that can be used to verify signatures made by that signer.
Assuming that the digital signature algorithm is secure and is used correctly, it prevents anyone but the owner of a private key from signing a piece of data or code. There is a convention that signing code implies taking responsibility for its actions.
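As a concrete illustration of the mechanism only (the algorithms and certificate handling used by Authenticode and JAR signing differ in detail), the following sketch signs and verifies a block of code bytes using the JDK's java.security API.

    import java.security.*;

    public class SignDemo {
        public static void main(String[] args) throws Exception {
            byte[] code = "bytes of the control".getBytes("UTF-8");

            // The signer holds the private key; the public key is published
            // (in practice, inside an X.509 certificate).
            KeyPair pair = KeyPairGenerator.getInstance("DSA").generateKeyPair();

            Signature signer = Signature.getInstance("SHA1withDSA");
            signer.initSign(pair.getPrivate());
            signer.update(code);
            byte[] sig = signer.sign();

            // Anyone with the public key can check that the code is unchanged
            // and was signed by the holder of the private key...
            Signature verifier = Signature.getInstance("SHA1withDSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(code);
            System.out.println("verified: " + verifier.verify(sig));

            // ...but the signature says nothing about the page the control is
            // embedded in, or the parameters it will be given.
        }
    }
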
However, signing is not sufficient on its own to guarantee that the user will not be misled. In most normal uses of signed controls, there are only two mutually untrusting parties involved: the end-user, and the signer of the control. Attacks on the user's system performed by a third party, i.e. not the signer, will be called "third party attacks". Both ActiveX and signed Java are vulnerable to third party attacks to some extent.
For example, neither Java nor ActiveX currently authenticates the web page containing the control. This means that if the connection to the web site is insecure, a signed control can be replaced with:
- a control by a different signer;
- a different control by the same signer;
- an older, signed version of the same control.
In the first two cases, the user may associate the control with its surroundings, rather than with its signer, and may trust it with information that would not otherwise have been given. The third case means that an attacker can choose an earlier version of the code that has known exploitable bugs, even when those bugs have been fixed in the current version.
Signing also does not prevent a signed control from appearing in an unexpected context where it was not intended to be used. A case study of this is given later, where an ActiveX control written for use only in intranets could be used on the Internet, as part of a security attack.
The name "ActiveX" is sometimes used as a synonym for COM (Component Object Model), and sometimes as a general term for Microsoft's component strategy. In the context of this paper, however, "ActiveX" specifically means the technology that downloads and runs controls in one of the formats supported by the "Authenticode" code signing system. This corresponds to controls that can be declared from a web page using an OBJECT tag, and currently includes:
These controls are all treated in a very similar way by web-enabled ActiveX container applications, including use of the same caching and versioning mechanism.
Java signed using Authenticode has the same security model as ActiveX (that is, applets are given full privileges on the client machine). The security risks are therefore similar to ActiveX. This paper does not consider the integration between Java and COM in Microsoft's virtual machine, and whether this integration has its own design flaws.
ActiveX defines a way to mark controls that take data from their environment, in an attempt to prevent trusted controls from being exploited by untrusted code. Each control can optionally be marked as "safe for scripting", which means that it is intended to be safe to make arbitrary calls to the control from a scripting language. It can also optionally be marked as "safe for initialisation", which means that it is intended to be safe to specify arbitrary parameters when the control is initialised. These markings reflect the opinion of the control's author, which may be incorrect.
IntraApp is an ActiveX control written by a small independent software company, and signed by its author using a Verisign Individual Software Publisher's certificate. This control had a fully functional demonstration version available on Microsoft's "ActiveX gallery" for several months. As its name suggests, it is intended to be used on intranets, rather than the Internet.
The purpose of this control is to allow the user to run arbitrary programs on the client machine, by selecting an icon on a web page and clicking a "Run" button. The list of programs that can be run is stored in a configuration file, which is specified as a URL in a parameter to the control, i.e. in the HTML tag that references it. In fact, the whole control is highly configurable; the icons, the caption for each program, and the caption on the "Run" button are all set from the same configuration file.
As mentioned earlier, ActiveX does not attempt to authenticate the web page on which a control is placed. It is very easy to implement a third party attack using IntraApp, by writing a configuration file which displays a harmless-looking icon and captions, and runs a batch file or other program supplied by the attacker when the "Run" button is clicked.
The IntraApp control is tagged as "safe for initialisation". That is, it is possible to specify its parameters on the web page that calls it, without the user being warned. At least one version was also marked as safe for scripting, although this is not needed to use the control maliciously.
I contacted IntraApp's author in private e-mail, and established that:
The IntraApp control is insecure despite working exactly as designed. Controls may also be insecure because they have bugs that can be exploited by an attacker. For example, the languages most often used to write controls are C and C++. A common type of programming error in these languages is to copy a variable-length string into a fixed-length array that is too short (a "buffer overflow" bug). Many security attacks against network servers and privileged Unix programs have exploited this type of error in the past (the most famous example being the Internet Worm of 1988).
Several of the controls displayed in the ActiveX gallery (signed by well-respected companies, including Microsoft) had overflow bugs that caused them to crash when passed long parameter strings. This does not in itself mean that the controls are exploitable, but it indicates that they were programmed without particular attention to avoiding overflow. It probably also means that more complicated security issues have not been addressed, since overflow bugs are among the simplest security bugs to correct. At the time of writing, a more extensive search for exploitable controls has not been done.
How significant this type of attack is to the security of ActiveX depends on other factors. For example:
Unfortunately, in the case of ActiveX the answers to these questions are about as bad as they could be:
The combined effect of these answers is to magnify the seriousness of simple mistakes by control writers. Unlike a browser implementation bug, where there is always an opportunity to fix the browser in its next version, there is very little that anyone (the browser vendor, the writer or signer of the control, the certification authority, or the end-user) can do about a control that is being exploited.
Internet Explorer 4.0 includes a change from version 3.0 that attempts to allow different security options to be set for each of four "zones": Intranet, Trusted Sites, Internet, and Restricted Sites.
The implementation of this feature in the release version is insecure; see
http://www.users.zetnet.co.uk/hopwood/activex/ie4/
A more significant design problem is that the options controlling which URLs are assigned to each zone are based on flawed criteria.
For example, the default security settings include UNC pathnames in the Intranet zone. UNC pathnames are paths beginning with the string "\\" that specify a computer name using the Windows networking protocols, e.g. Server Message Block (SMB). For an intranet that uses Windows networking, the set of all UNC paths is quite likely to include directories in which files can be placed by an attacker (cache and temporary directories, for instance). Because of this, the Intranet and Internet zones may effectively be equivalent.
Note that for an intranet that does not use Windows networking, the option to include UNC pathnames is not useful in any case.
The Intranet and Internet zones both have the "Medium" security setting by default. If the user sets security for the Intranet zone to be more lax than for the Internet zone, without disabling the option to include UNC pathnames, this is likely to give only a false sense of security.
"Java" is the name of a programming language, a virtual machine designed to run that language (also called the "JVM"), and a set of APIs and libraries. The libraries are written in a combination of Java and other languages, for example C and C++.
The language is object-oriented, with all code defined as part of a class. When the language is implemented using a JVM, classes are loaded dynamically, as modules of code that can be compiled separately. Classes are stored and represented as a sequence of bytes in a standard format, called the classfile format. (They need not be stored in files as such - it is possible to create and load classfiles on the fly, for example by downloading them from a network.)
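For example, a classfile that exists only as an array of bytes (perhaps just downloaded) can be turned into a usable class by a custom class loader; this minimal sketch shows the standard idiom.

    // A class loader that defines a class directly from a byte array, rather
    // than from a file on disk. defineClass() checks the classfile format
    // before the class can be linked and used.
    public class ByteArrayClassLoader extends ClassLoader {
        public Class<?> defineFromBytes(String name, byte[] classfile) {
            return defineClass(name, classfile, 0, classfile.length);
        }
    }
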
Java's security model is based on several layers of verification:
- the classfile verifier checks that each loaded classfile is well-formed, and that its bytecode is type-safe and cannot forge references to memory;
- class loaders keep code from different sources in separate namespaces, and prevent downloaded classes from replacing or impersonating the standard API classes;
- run-time checks, made through the SecurityManager, are applied before any sensitive operation (such as file access or opening a network connection), and refuse the operation unless the calling code is sufficiently trusted.
The security of this scheme does not depend on the trustworthiness of the compiler that produced the classfiles (or on whether the code was compiled from source in the Java language, or from another language). The compiler for the standard API libraries must be trustworthy, but this can be ensured because the standard libraries are provided by the JVM implementor.
The above scheme is complicated, however, and quite difficult to implement correctly. The presence of several layers increases the potential for error; a flaw in any layer may cause the whole system to collapse. This complexity is offset by increased efficiency compared with a fully interpreted language implementation in which all checking is done at run-time (such as the current implementations of JavaScript and VBScript, or of Safe-Tcl and Safe-Perl).
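As an illustration of the outermost, run-time layer, here is a minimal sketch of a SecurityManager that simply refuses all file reads; browser virtual machines of the period installed far more elaborate managers, but the principle is the same: every sensitive operation asks the manager first.

    // A SecurityManager that vetoes every attempt to read a file. Library
    // code such as FileInputStream calls checkRead() before opening a file,
    // so once this manager is installed with System.setSecurityManager(...),
    // file reading by any code fails with a SecurityException.
    public class NoFileReadManager extends SecurityManager {
        public void checkRead(String file) {
            throw new SecurityException("file read not permitted: " + file);
        }
    }
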
The JAR file format is a convention for using PKWARE's ZIP format to store Java classes and resources that may be signed. Every JAR file is a ZIP file containing a standard directory called "META-INF/". The META-INF directory includes a "manifest file", named "MANIFEST.MF", that stores additional property information about each file (this avoids having to change the format of the files themselves). It also contains "signature files", with filetype ".SF", each of which specifies a subset of files to be signed by a given principal, together with detached signatures for the .SF files.
JAR is a highly general format that allows different subsets of the contained files to be signed by different principals. These sets may overlap; for example, class A may be signed by Alice, class B by Bob, and class C by both Alice and Bob. The author of this paper was partly responsible for defining the JAR signing format, and in retrospect, generality was perhaps too high on the list of design priorities. In practice, the current tools for signing JARs only permit all files to be signed by a single principal, since that is the most useful case. On the other hand, the extra generality is available for use by an attacker. For example, it is possible to add unsigned classes to a JAR, and to attempt to use them to exploit the signed classes in order to break security.
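The consequence of this generality can be seen programmatically. The following sketch uses the standard java.util.jar API (which post-dates the original signing tools but reads the same format) to list, for each entry of a JAR, how many certificates vouch for it; unsigned entries that an attacker has added show up with none.

    // Usage: java JarSigners some.jar
    import java.io.InputStream;
    import java.security.cert.Certificate;
    import java.util.Enumeration;
    import java.util.jar.*;

    public class JarSigners {
        public static void main(String[] args) throws Exception {
            try (JarFile jar = new JarFile(args[0], true /* verify */)) {
                for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements(); ) {
                    JarEntry entry = e.nextElement();
                    // An entry's signers are only known once it has been read
                    // in full, because verification happens as the bytes stream past.
                    try (InputStream in = jar.getInputStream(entry)) {
                        while (in.read() != -1) { /* drain to trigger verification */ }
                    }
                    Certificate[] certs = entry.getCertificates();
                    System.out.println(entry.getName() + " signed by "
                            + (certs == null ? "nobody" : certs.length + " certificate(s)"));
                }
            }
        }
    }
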
Whether an attack of this form succeeds depends on how careful the signed class writer was in making sure that his/her code is not exploitable. However, if a large number of signed controls are produced, it would be unrealistic to assume that none of them have exploitable bugs. An attacker could look at many controls, with the help of either the original source, if available, or decompiled source. Since it is common for Java code to rely on package access restrictions for its security, a possible approach for the attacker would be to create a new, unsigned class in the same package as the trusted classes.
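A minimal sketch of the package-access problem follows (all names are hypothetical): if a signed class relies on package-private access to protect a sensitive method, an unsigned class that an attacker adds to the same JAR, declared in the same package, can call that method directly.

    // File PaymentControl.java (part of the signed JAR; hypothetical example)
    package com.example.trusted;

    public class PaymentControl {
        // Package-private: the author assumed only classes in this package,
        // i.e. only the signed code, could call it.
        void queuePayment(String payee, int amount) {
            // ... privileged operation ...
        }
    }

    // File Exploit.java (unsigned, added to the JAR by an attacker; declared
    // in the same package, so the package-private method is reachable)
    package com.example.trusted;

    public class Exploit {
        public static void main(String[] args) {
            new PaymentControl().queuePayment("attacker", 1000000);
        }
    }

Later versions of the platform added checks aimed at exactly this problem, such as refusing to define differently-signed classes in one runtime package and allowing JAR packages to be sealed.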
Netscape Communicator 4.0 has defined several extensions to the Java security model, allowing fine-grained control over privileges, in addition to the "sandbox" model. These are similar in intent to proposals for part of the core Java 1.2 specification, but at the time of writing Netscape provided a more comprehensive implementation.
Netscape's extensions provide a "capability-based" security model. A capability is an object that represents permission for a principal to perform a particular action. It specifies the object to be controlled (for example, a file, printer, access to a host, or use of a particular API), and which operation(s) should be granted or denied for that object. In Netscape's design, capabilities are called "targets". It is possible to specify a target that combines several other targets; this is referred to as a "macro target".
It is instructive to compare capabilities with a security mechanism that may be more familiar to many readers: Access Control Lists, or ACLs. ACLs are used by many multi-user operating systems, including Windows NT, VMS, and as an option in some varieties of Unix. An ACL defines permissions by storing, for various targets, the principals allowed to access that target.
Capabilities differ from ACLs in that they are assigned dynamically, rather than being specified in advance. If a permission is not granted in an ACL-based system, the user has to change the permissions manually, then retry the operation in order to continue. In practice this means that ACLs are often defined with looser permissions than actually necessary. A capability-based system can avoid this problem, by asking the user whether a request should be allowed before continuing the operation.
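As a sketch of how this looked to the applet programmer (target names and exact behaviour varied between Netscape releases), a signed applet asked for a capability at the point of use, and the user was prompted to grant or deny it.

    import netscape.security.ForbiddenTargetException;
    import netscape.security.PrivilegeManager;

    public class ConfigReader {
        // Called from a signed applet. The privilege is enabled only for the
        // duration of this stack frame; the user is asked (or a remembered
        // decision is applied) when the request is made.
        public boolean tryEnableFileRead() {
            try {
                PrivilegeManager.enablePrivilege("UniversalFileRead");
                // ... open and read the file here ...
                return true;
            } catch (ForbiddenTargetException e) {
                // The user refused, or the applet is not signed.
                return false;
            }
        }
    }
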
The current version of Netscape only supports coarse-grained privileges, although the architecture is designed to support fine-grained control, and much of the code needed to implement this is already present.
An alternative to signing as a means of authenticating controls would be to secure the connection between the web site and the browser, using a transport protocol such as SSL 3.0 (or secure IP) that ensures the integrity of the transmitted information. The site certificate would be shown when a control runs or requests additional privileges. This would have several advantages over code signing:
Some of these points require further explanation:
If there are no restrictions on communication between controls from different sources, then it is possible for an untrusted control to call or pass data to a trusted control. This might cause the trusted control to break security, or to do unexpected things that could mislead the user. ActiveX attempts to address this by defining flags such as "safe for initialisation" and "safe for scripting", as described earlier. However, there is no way to verify that a control actually is safe to initialise or script, and expecting the control author to specify this seems rather unreliable (as demonstrated by the IntraApp example).
Suppose instead that all controls on a page are authenticated, together with the connecting HTML, using SSL 3.0. In this case the attacker cannot replace any part of the page or the controls on it, without the user being alerted and the SSL session aborted. He or she can use controls on the page in another context, but this is not a problem because the authentication is only valid for each connection. For example, if the attacker has an HTTPS server, the user would see the attacker's certificate, not the certificate of the server from which the controls originated.
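To make the last point concrete, the following sketch (ordinary JSSE client code, not browser-specific; the URL is a placeholder) shows that the client side of an HTTPS connection can always obtain the certificate chain of the server it is actually talking to, so anything delivered over that connection is tied to that server's identity rather than to a signature made earlier by someone else.

    import java.net.URL;
    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.HttpsURLConnection;

    public class ShowServerCertificate {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; any HTTPS server will do.
            URL url = new URL("https://www.example.org/");
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.connect();
            // The certificate chain authenticates this connection, and hence
            // every page and control fetched over it.
            for (Certificate c : conn.getServerCertificates()) {
                if (c instanceof X509Certificate) {
                    System.out.println(((X509Certificate) c).getSubjectX500Principal());
                }
            }
            conn.disconnect();
        }
    }
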
Using SSL, or some other secure transport instead of (and not as well as) signing would therefore solve some difficult problems with the current ActiveX and Java security models. It would be possible to have a transition period in which signing was still supported, if removing it immediately is considered too drastic.
There are some disadvantages to requiring a secure transport (note that these only apply to "privileged" controls, that is, all ActiveX controls, and Java applets that would currently need to be signed):
The last disadvantage can be solved by specifying that privileged local controls must be stored in directories that are marked in some way as trusted, and that would not be writable by an attacker.
The answer to the question posed at the start of this paper, "Would ActiveX or Java be secure if all implementation bugs were fixed?", appears to be a definite no for both technologies. In the case of Java, there are problems with the JAR signing format that make third party attacks easier than they should be. Netscape's capabilities API helps to limit the effect of this, however, by making sure that the user sees security dialogues describing exactly what each control will be allowed to do.
In the case of ActiveX, the problem of third party attacks is more serious, because there are no trust boundaries in the same sense as for Java. ActiveX controls either have full permissions or do not run at all. The example of the IntraApp control shows that it is not sufficient to rely on code signing alone to provide security.
Authenticating web pages that contain controls using SSL, instead of the current mechanisms, would go a long way toward fixing the attacks described in this paper. While abandoning the current code signing mechanisms is a drastic step, it may be necessary to prevent a potentially large number of cases in future where signed controls would be exploitable.
Since ActiveX has no "sandbox" mode in which code can be run without requiring full permissions, changing from code signing to SSL would be considerably more disruptive for ActiveX than for Java. It may be that it is more practical simply to abandon use of ActiveX on the Internet, and restrict it to intranet use. This would require more careful consideration of what defines an intranet than in the current implementation of Internet Explorer 4.0 security zones, however. Internet web pages would also have to be prevented from using an OBJECT tag or scripting languages to call an intranet control.
For Java, there is also a problem of incompatibilities between handling of security in browsers from different vendors (e.g. Netscape, HotJava and Internet Explorer). JavaSoft's reference implementation is not sufficient to define a security model. There must be a concerted effort to ensure that different Java implementations are consistent in their treatment of security, so that code written with one implementation in mind does not cause security problems for another.
David Hopwood <[email protected]> |