Miscellaneous

Vendor Patches

Vulnerabilities are common in 3rd party tools and products that are installed as part of web applications. These web servers, application servers, e-commerce suites, etc., are purchased from external vendors and installed as part of the site. The vendor typically addresses such vulnerabilities by supplying a patch that must be downloaded and installed as an update to the product at the customer's site.

A significant part of a web application is typically not custom code specific to a single web site, but rather standard products supplied by 3rd party vendors. Such products typically serve as the web server, application server and database, along with more specialised packages used in the different vertical markets. All such products have vulnerabilities that are discovered on an ongoing basis and in most cases disclosed directly to the vendor (although there are also cases in which the vulnerability is revealed to the public without disclosure to the vendor). The vendor will typically address the vulnerability by issuing a patch and making it available to the customers using the product, with or without revealing the full details of the vulnerability. The patches are sometimes grouped into patch sets (or updates) that may be released periodically.

A vendor's disclosure policy for vulnerabilities is of primary concern to those deploying critical systems. Those in a procurement position should be very aware of the End User License Agreements (EULAs) under which vendors license their software. Very often these EULAs disclaim all liability on the part of the vendor, even in cases of serious neglect, leaving users with little or no recourse. Those deploying software distributed under these licenses are then fully liable for damage caused by defects in that code. Due to this state of affairs, it becomes ever more important that organizations insist upon open discussion and disclosure of vulnerabilities in the software they deploy. Vendors have reputations at stake when new vulnerabilities are disclosed, and many attempt to keep such problems quiet, thereby leaving their clients without adequate information for assessing their exposure to threats. This behavior is unacceptable in a mature software industry and should not be tolerated. Furthermore, organizations should take care to ensure that vendors do not attempt to squelch information needed to verify the validity and effectiveness of patches. While this might seem a frivolous concern at first glance, vendors have been known to try to limit distribution of this information in order to provide "security" through obscurity. Customers may be actively harmed in the meanwhile, as Black Hats have more information about a problem than White Hats do, again impairing an organization's ability to assess its risk exposure.

The main issue with vendor patches is the latency between disclosure of the vulnerability and actual deployment of the patch in the production environment: the total time needed for the vendor to issue the patch, for the client to download it, test it in a QA or staging environment, and finally deploy it fully on the production site. During all this time the site is vulnerable to attacks exploiting the published vulnerability. Attackers, and more recently worms such as Code Red, routinely turn patch releases to the opposite purpose, using them as a roadmap to systems that have not yet been updated.

Most patches are released by vendors only on their own sites, and in many cases are announced only on internal mailing lists or portals. Sites and lists that track such vulnerabilities and patches (such as Bugtraq) do not serve as a central repository for all patches. The number of such patches for mainstream products is estimated at dozens a month.

The final critical aspect of patches is that in most cases they are neither signed nor accompanied by a checksum, making them a potential source of Trojans in the system.
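
Where a vendor does publish a checksum or signature (ideally over a separate, trusted channel such as a signed advisory), it should be verified before the patch is installed. Below is a minimal sketch of such a check in Python, assuming a SHA-256 value copied from the vendor's advisory; the file name and expected hash are placeholders only.

    # verify_patch.py - compare a downloaded patch against a vendor-published SHA-256
    # (the expected hash below is a placeholder; paste the value from the vendor advisory)
    import hashlib
    import sys

    EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as patch_file:
            for chunk in iter(lambda: patch_file.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        actual = sha256_of(sys.argv[1])
        if actual != EXPECTED_SHA256:
            sys.exit("Checksum mismatch - do NOT install this patch")
        print("Checksum matches the vendor advisory")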

You should subscribe to the vendors' security intelligence services for all software that forms part of your web application or your security infrastructure.

System Configuration

Server software is often complex, requiring a good understanding of both the protocols involved and the software's internal workings to configure correctly. Unfortunately, many vendors make this task harder by shipping default configurations which are known to be vulnerable to devastating attacks. Often "sample" files and directories are installed by default, which may provide attackers with ready-made attacks should problems be found in the sample files. While many vendors suggest removing these files, they put the onus of securing an "out of the box" installation on those deploying their product. A (very) few vendors attempt to provide secure defaults for their systems (the OpenBSD project being an example). Systems from these vendors often prove much less vulnerable to widespread attack; this approach to securing infrastructure appears to work very well and should be encouraged when discussing procurement with vendors.

If a vendor provides tools for managing and securing installations of your software, it may be worth evaluating them; however, they will never be a full replacement for understanding how a system is designed to work and strictly managing configurations across your deployed base.

Understanding how system configuration affects security is crucial to effective risk management. Systems being deployed today rely on so many layers of software that a system may be compromised through vectors which may be difficult or impossible to predict. Risk management and threat analysis seek to quantify this risk, minimize the impact of the inevitable failure, and provide means (other than technical) of compensating for threat exposure. Configuration management is a well understood piece of this puzzle, yet remains maddeningly difficult to implement well. As configurations and environmental factors change over time, a system once well shielded by structural safeguards may become a weak link, with very little outward indication that the risk inherent in the system has changed. Organizations must accept that configuration management is a continuing process that cannot simply be done once and left alone. Effectively managing configurations can be a first step in putting in place the safeguards that allow systems to perform reliably in the face of concerted attack.
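
One concrete building block for treating configuration management as a continuing process is to record a known-good baseline of critical configuration files and periodically check the deployed files against it. The following is a minimal sketch of that idea; the watched paths and baseline file name are assumptions to be adapted to the actual environment.

    # config_drift.py - flag configuration files whose contents have drifted from a baseline
    # (the watched paths and baseline file name are illustrative assumptions)
    import hashlib
    import json
    from pathlib import Path

    WATCHED = ["/etc/httpd/conf/httpd.conf", "/etc/ssh/sshd_config"]
    BASELINE_FILE = Path("config_baseline.json")

    def fingerprint(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def record_baseline():
        BASELINE_FILE.write_text(json.dumps({p: fingerprint(p) for p in WATCHED}, indent=2))

    def check_drift():
        baseline = json.loads(BASELINE_FILE.read_text())
        for path, expected in baseline.items():
            if fingerprint(path) != expected:
                print(f"DRIFT: {path} no longer matches the approved baseline")

    if __name__ == "__main__":
        check_drift() if BASELINE_FILE.exists() else record_baseline()

On first run the script records the baseline; scheduled re-runs then report any file that has changed, which is a prompt for review rather than proof of compromise.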

Comments in HTML

Description

It's amazing what one can find in comments. Comments placed in most source code aid readability and document the development process. The practice of commenting has been carried over into the development of HTML pages, which are sent to the client's browser. As a result, information about the structure of a web site, or information intended only for the system owners or developers, can sometimes be inadvertently revealed.

Comments left in HTML come in many forms: some are as simple as directory structures, others inform the potential attacker of the true location of the web root. Comments are sometimes left over from the development stage and can contain debug information, cookie structures, problems associated with development, and even developer names, email addresses and phone numbers.

Structured Comments - these appear in HTML source, usually at the top of the page or between the JavaScript and the remaining HTML, typically when a large development team has been working on the site for some time.

Automated Comments - many widely used page generation utilities and web publishing tools automatically add signature comments to the HTML page. These inform the attacker about the precise software packages (sometimes even down to the actual release) being used on the site. Known vulnerabilities in those packages can then be tried against the site.

Unstructured Comments - these are one-off comments made by programmers almost as an aide-mémoire during development. These can be particularly dangerous as they are not controlled in any way. Comments such as "The following hidden field must be set to 1 or XYZ.asp breaks" or "Don't change the order of these table fields" are a red flag to a potential attacker, and sadly not uncommon.

Mitigation Techniques

For most comments a simple filter that strips them before pages are pushed to the production server is all that is required. For automated comments an active filter may be required. It is good practice to tie the filtering process to a sound deployment methodology so that only known good pages are ever released to production.
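
A minimal sketch of such a stripping filter, run against a staging directory before pages are promoted to production, might look like the following. The staging path is an assumption, and any legitimate constructs that use comment syntax (such as IE conditional comments or server-side includes) would need to be excluded in a real deployment.

    # strip_comments.py - remove <!-- ... --> comments from staged HTML before release
    # (staging path is an illustrative assumption; conditional comments / SSI need care)
    import re
    from pathlib import Path

    COMMENT_RE = re.compile(r"<!--.*?-->", re.DOTALL)
    STAGING_ROOT = Path("staging/htdocs")

    for page in STAGING_ROOT.rglob("*.html"):
        original = page.read_text(encoding="utf-8", errors="ignore")
        cleaned = COMMENT_RE.sub("", original)
        if cleaned != original:
            page.write_text(cleaned, encoding="utf-8")
            print(f"stripped comments from {page}")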

Old, Backup and Un-referenced Files

Description

File / application enumeration is a common technique used to look for files or applications that may be exploitable or useful in constructing an attack. These include known vulnerable files or applications, hidden or un-referenced files and applications, and back-up / temp files.

File / application enumeration uses the HTTP server response codes to determine whether a file or application exists. A web server will typically return an HTTP 200 response code if the file exists and an HTTP 404 response code if it does not. This enables an attacker to feed in lists of known vulnerable files and suspected applications, or to use some basic logic to map the file and application structure visible from the presentation layer.
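
The same status-code logic can be scripted in a few lines, which is also a useful way to audit your own site for leftover files before an attacker does. A minimal sketch follows; the base URL and wordlist are illustrative, and this should only ever be run against systems you are authorized to test.

    # enumerate_files.py - probe a site for files that respond, using HTTP status codes
    # (base URL and wordlist are illustrative; run only against systems you own)
    import urllib.error
    import urllib.request

    BASE_URL = "http://www.somewebsite.com"
    WORDLIST = ["test.asp", "admin.bak", "backup.zip", "sample/default.htm"]

    for name in WORDLIST:
        url = f"{BASE_URL}/{name}"
        try:
            with urllib.request.urlopen(url) as response:
                print(f"{response.status} {url}")      # 200 - the file exists
        except urllib.error.HTTPError as err:
            if err.code != 404:
                print(f"{err.code} {url}")             # 401/403 etc. are also interesting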

Known Vulnerable Files - many known vulnerable files exist, and looking for them is one of the most common techniques used by commercial and freeware vulnerability scanners. Many people will focus their search on CGIs, for example, or on server-specific issues such as IIS problems. Many daemons install "sample" code in publicly accessible locations, and these samples are often found to have security problems. Removing (or simply not installing) such default files cannot be recommended highly enough.

Hidden / Un-Referenced Files - many web site administrators leave files such as sample files or default installation files on the web server. When the web content is published, these files remain accessible even though they are not referenced by any HTML on the site. Many examples are notoriously insecure, demonstrating things like uploading files through a web interface. If an attacker can guess the URL, he is typically able to access the resource.

Back-Up Files / Temp Files - many applications used to build HTML and ASP pages leave temp files and back-up files in directories. These often get uploaded, either manually in directory copies or automatically by the site-management modules of HTML authoring tools such as Microsoft FrontPage or Adobe GoLive. Back-up files are also dangerous because many developers embed things into development HTML that they later remove for production; Emacs, for instance, writes a backup copy with a trailing ~, and many other editors leave *.bak files behind. Development staff turnover may also be an issue, and security through obscurity is always an ill-advised course of action.

Mitigation Techniques

Remove all sample files from your web server. Ensure that any unwanted or unused files are removed. Use a screening process during staging to look for back-up files. A simple recursive grep for all file extensions that are not explicitly allowed is very effective.
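
A minimal sketch of that screening step, walking the staging tree and flagging every file whose extension is not on an explicit allowlist; the allowed extensions and staging path are assumptions to be adapted per site.

    # screen_staging.py - flag files whose extensions are not explicitly allowed
    # (allowlist and staging path are illustrative assumptions)
    from pathlib import Path

    ALLOWED = {".html", ".css", ".js", ".gif", ".jpg", ".asp"}
    STAGING_ROOT = Path("staging/htdocs")

    for item in STAGING_ROOT.rglob("*"):
        if item.is_file() and item.suffix.lower() not in ALLOWED:
            print(f"REVIEW: {item}  (extension {item.suffix or 'none'} not on allowlist)")

Because editor backups such as page.html~ or page.bak have suffixes that are not on the allowlist, they are caught by the same check.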

Some web servers / application servers that build dynamic pages will not return a 404 message to the browser, but instead return a page such as the site map. This confuses basic scanners into thinking that all files exist. Modern vulnerability scanners, however, can fingerprint such a custom 404 page and treat it as a vanilla 404, so this technique only slows progress.
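
Such fingerprinting typically works by first requesting a deliberately random, non-existent path and remembering what the "not found" response looks like; later responses matching that fingerprint are treated as 404s even when the status code is 200. A minimal sketch of the idea, with an illustrative base URL:

    # fingerprint_404.py - learn what this site's "not found" response looks like
    # (base URL is illustrative; a real scanner compares later hits against this fingerprint)
    import hashlib
    import secrets
    import urllib.error
    import urllib.request

    BASE_URL = "http://www.somewebsite.com"
    bogus = f"{BASE_URL}/{secrets.token_hex(16)}.html"    # almost certainly does not exist

    try:
        with urllib.request.urlopen(bogus) as response:   # a "friendly" site answers 200 here
            body = response.read()
        print("custom 404 fingerprint:", hashlib.sha256(body).hexdigest())
    except urllib.error.HTTPError:
        print("server returns real 404 status codes - no custom page to fingerprint")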

Debug Commands

Description

Debug commands come in two distinct forms:

Explicit Commands - this is where a name-value pair has been left in the code, or can be introduced as part of the URL, to induce the server to enter debug mode. Commands such as "debug=on" or "Debug=YES" can be placed on the URL. For example, the URL:

			http://www.somewebsite.com/account_check?ID=8327dsddi8qjgqllkjdlas&Disp=no
			

can be altered to:

			http://www.somewebsite.com/account_check?debug=on&ID=8327dsddi8qjgqllkjdlas&Disp=no
			

The attacker then observes the resulting server behavior. The debug construct can also be placed inside HTML or JavaScript when a form is returned to the server: simply adding another element to the form construction produces the same result as the URL-based attack above.
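
Probing for this behavior is usually nothing more than replaying the same request with and without the suspected debug pair and comparing the two responses. A minimal sketch using the illustrative URL above (again, only against systems you are authorized to test):

    # probe_debug.py - compare responses with and without a suspected debug parameter
    # (URL and parameter names are the illustrative ones above; test only your own systems)
    import urllib.request

    BASE = "http://www.somewebsite.com/account_check?ID=8327dsddi8qjgqllkjdlas&Disp=no"

    def fetch(url):
        with urllib.request.urlopen(url) as response:
            return response.status, response.read()

    normal = fetch(BASE)
    debug = fetch(BASE.replace("account_check?", "account_check?debug=on&"))

    if normal != debug:
        print("Responses differ - the server appears to honor the debug parameter")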

Implicit Commands - this is where seemingly innocuous elements on a page, if altered, have dramatic effects on the server. The original intent of these elements was to help the programmer put the system into various states to allow a faster testing cycle. These elements are normally given obscure names such as "fubar1" or "mycheck" and may appear in the source as:

			<!-- begins -->
			<TABLE BORDER=0 ALIGN=CENTER CELLPADDING=1 CELLSPACING=0>
			<FORM METHOD=POST ACTION="http://some_poll.com/poll?1688591" TARGET="sometarget" FUBAR1="666">
			<INPUT TYPE=HIDDEN NAME="Poll" VALUE="1122">
			<!-- Question 1 -->
			<TR>
			<TD align=left colspan=2>
			<INPUT TYPE=HIDDEN NAME="Question" VALUE="1">
			<SPAN class="Story">
			

Finding debug elements is not easy, but once one is located, a potential attacker will usually try it across the entire web site. As designers never intend these commands to be used by normal users, the precautions that would prevent parameter tampering are usually not taken.
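
One straightforward precaution is to validate incoming query strings against an explicit allowlist of the parameters a page actually expects, so that a stray debug=on or fubar1=666 is rejected rather than silently honored. A minimal, framework-neutral sketch, using the parameter names from the earlier example:

    # validate_params.py - reject requests carrying parameters the page does not expect
    # (expected parameter names follow the examples above and are illustrative)
    from urllib.parse import urlparse, parse_qs

    EXPECTED = {"ID", "Disp"}                 # everything else is refused

    def query_is_allowed(url):
        params = parse_qs(urlparse(url).query, keep_blank_values=True)
        unexpected = set(params) - EXPECTED
        if unexpected:
            print(f"rejecting request, unexpected parameters: {sorted(unexpected)}")
            return False
        return True

    # example: the tampered URL from the earlier example is refused
    query_is_allowed("http://www.somewebsite.com/account_check?debug=on&ID=1&Disp=no")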

Debug commands have been known to remain in 3rd party code designed to operate the web site, such as web servers and database programs. Search the web for "Netscape Engineers are weenies" if you don't believe us!

Default Accounts

Description

Many "off the shelf" web applications typically have at least one user activated by default. This user, which is typically the administrator of the system, comes pre-configured on the system and in many cases has a standard password. The system can then be compromised by attempting access using these default values.

Web applications enable multiple default accounts on the system, for example:

  • Administrator accounts

  • Test accounts

  • Guest accounts

The accounts can be accessed from the web, either through the standard access path used for all defined accounts or via special ports or parts of the application, such as administrator pages. The default accounts usually come with pre-configured default passwords whose values are widely known. Moreover, most applications do not force a change of the default password.

The attack on such default accounts can occur in two ways:

  • Attempt to use the default username/password assuming that it was not changed during the default installation.

  • Enumeration over the password only since the user name of the account is known.

Once the password is entered or guessed, the attacker has access to the site according to the account's permissions, which usually leads in one of two directions:

If the account is an administrator account, the attacker has partial or complete control over the application (and sometimes the whole site), with the ability to perform any malicious action.

If the account is a demo or test account, the attacker will use it as a means of accessing and abusing the application logic exposed to that user, and as a way of progressing the attack further.

Mitigation Techniques

Always change the out-of-the-box installation of the application. Remove all unnecessary accounts, following a security checklist (vendor-supplied or public). Disable remote access to the admin accounts of the application. Use hardening scripts provided by the application vendors, and vulnerability scanners, to find the open accounts before someone else does.
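
A simple way to find such accounts before someone else does is to replay a list of well-known default credentials against your own login page and confirm that every one of them is refused. A minimal sketch follows; the login URL, form field names, success heuristic and credential list are all illustrative assumptions to be adapted to the application being checked.

    # check_defaults.py - confirm that well-known default accounts no longer work
    # (login URL, field names, success heuristic and credential list are illustrative)
    import urllib.error
    import urllib.parse
    import urllib.request

    LOGIN_URL = "https://www.somewebsite.com/login"
    DEFAULT_CREDENTIALS = [("admin", "admin"), ("administrator", "password"),
                           ("test", "test"), ("guest", "guest")]

    for username, password in DEFAULT_CREDENTIALS:
        form = urllib.parse.urlencode({"username": username, "password": password}).encode()
        try:
            with urllib.request.urlopen(LOGIN_URL, data=form) as response:
                body = response.read().decode(errors="ignore")
            if "invalid" not in body.lower():          # crude heuristic for a successful login
                print(f"WARNING: default account may still be active: {username}/{password}")
        except urllib.error.HTTPError:
            pass                                       # 401/403 means the login was refused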