June 28, 2011, 6:03 p.m.
posted by creed
The false sense of security that exists within the IT industry has spawned the much-needed profession of Web application security and penetration (pen) testing. The focus of this book is web-based application penetration testing and how the results of this can lead to a layered approach of enhanced core-level security coupled with specialized edge security for Web applications. Programmers benefit most from doing pen testing because of the deeper levels of application understanding the practice provides, but other IT professionals will find the information eye-opening and educational as well.
Application pen testing is a critical discipline within a sound overall IT strategy. But there is a serious problem: the shortage of those with the necessary skill set to properly pen test software, N-tier, and distributed applications. Experience has shown that the typical security specialist or engineer simply lacks the depth of application and software knowledge necessary to be entirely effective when performing these types of audits. Knowledge of code is absolutely necessary to do this effectively.
The present state of affairs with Web applications, and corporate software on the whole, is one of quasi-mystery. There is a disturbing gap in the industry between the programming community (which focuses on solid functionality) and the security community (which focuses on protection at the network level and policy). The gap exists because, while programmers and web developers are traditionally focused on the functionality of apps running critical processes, and network and security professionals are traditionally edge and possibly even host specialists, no one is looking for security holes in the application code. The mystery, then, is who properly secures the Web apps? The experience of programmers has been that security is not a priority in their application development workloads.
Many businesses do not have in-house application skill sets or resources. Although they have apps that are, and have been, running their business in the background, and from a business perspective everything functions as expected, these apps have typically been outsourced or off-shored for development. This means that there is very little application knowledge on staff. As a result, one of the things regularly encountered out in the field is very old and unpatched versions of software. This is a serious problem because current releases typically include fixes and enhancements that the old versions lack. But many entities will not keep up with the latest software due to a lack of in-house knowledge and experience.
A further consequence of having applications that no one understands is that the folks in-house that are held responsible for these applications refuse to touch them for fear of breaking them. Things that are seemingly simple, like applying server patches, could conceivably wreak havoc on an application. If they do apply the patch, library dependencies can end up broken and the app could start spitting out nasty errors. Anyone who witnesses a fiasco like this makes a clear mental note not to repeat that mistake, which leads to an unwritten no-touch policy.
Sometimes it is not even a human process that causes a problem. If a server runs long enough without being turned off, there is no guarantee it will come back up after a shutdown. The average professional in the IT industry will not want to be the individual that powered that server down. They don’t want to deal with the repercussions if the shutdown causes an app to stop working. So there is a distinct preference not to touch anything that is perceived as not broken irrespective of the associated risk.
Many edge-level techniques and tactics leave great areas of risk exposed. For example, the functionality of IDS systems is impressive, but what they watch is dictated by what they are taught. However, the critical question remains, who is doing the teaching? Do they properly understand what they are looking for? Assuming they do, someone or something then has to make sense of the massive amounts of data these systems typically capture. It is an intensive, time-consuming process that in the real world has proven to be a “nice to have,” yet the true real-world security value is arguable. IPS systems come with their own set of challenges and weaknesses. The point is that when it comes to Web apps, there are weaknesses, and risk is generally present.
IDS (http://en.wikipedia.org/wiki/Intrusion_Detection) and IPS (http://en.wikipedia.org/wiki/Intrusion-prevention_system) systems are network-level devices that aim to enhance an overall security posture.
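To make the "what they watch is dictated by what they are taught" point concrete, here is a minimal Python sketch of signature-based detection. The signature and traffic samples are invented for illustration; real IDS rules are far more elaborate, but they share the same fundamental limitation shown here.

```python
import re

# One "taught" signature, loosely modeled on a classic SQL injection pattern.
# The rule only knows what its author anticipated.
SIGNATURE = re.compile(r"union\s+select", re.IGNORECASE)

def inspect(payload: str) -> bool:
    """Return True if the payload matches the taught signature."""
    return bool(SIGNATURE.search(payload))

# Caught: the payload looks exactly like what the rule author expected.
print(inspect("GET /items?id=1 UNION SELECT password FROM users"))

# Missed: an inline SQL comment removes the whitespace the rule depends on,
# so a trivially obfuscated attack slips past the detector.
print(inspect("GET /items?id=1 UNION/**/SELECT password FROM users"))
```

The second call returns False even though the request is just as malicious, which is exactly the gap a human who understands the application and its code has to cover.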
Awareness in this area is growing, though. Evidence of this can be seen in movements like these:
Open Web Application Security Project (OWASP) - OWASP (http://www.owasp.org) is dedicated to helping build secure Web applications.
Web Application Security Consortium (WASC) - WASC (http://www.webappsec.org) is an international community of experts focused on best-practices and standards within the Web application security space.
One emerging, very interesting area is that of Web Application Firewalls (WAF). These are devices or software entities that focus entirely on the proper protection of Web applications. These WAF solutions are intended to fill the gap I spoke of earlier. They are capable of properly preventing attacks that edge/network-level firewalls and IDS/IPS systems can’t. Yet they operate on the edge and the app’s source code may not get touched. You can get information at http://www.modsecurity.org and http://www.cgisecurity.com/questions/webappfirewall.shtml.
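The WAF model can be sketched in a few lines of Python. The rules and request structure below are simplified illustrations, not the behavior of any particular product such as ModSecurity; the point is only that inspection happens at the edge, before the untouched application ever sees the request.

```python
import re

# Hypothetical, deliberately naive rule set for demonstration purposes.
RULES = [
    re.compile(r"<script\b", re.IGNORECASE),            # crude XSS check
    re.compile(r"'\s*or\s+1=1", re.IGNORECASE),         # crude SQLi check
]

def filter_request(params: dict) -> bool:
    """Return True if the request should be passed through to the app."""
    for value in params.values():
        if any(rule.search(value) for rule in RULES):
            return False  # blocked at the edge; app source code is never modified
    return True

print(filter_request({"q": "harmless search"}))  # allowed
print(filter_request({"q": "' OR 1=1 --"}))      # blocked
```

Note that this sketch inherits the same weakness as any signature scheme: it only blocks what its rules anticipate, which is why WAF evaluation criteria like those from the WASC matter.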
As we all know, not all products are created equal, and this is especially so in software. Ivan Ristic and the WASC are going to the great length of formalizing the evaluation criteria of these types of solutions so that anyone can at least have a baseline understanding about the effectiveness of a given solution. As stated on their web site: “The goal of this project is to develop a detailed Web application firewall evaluation criteria; a testing methodology that can be used by any reasonably skilled technician to independently assess the quality of a WAF solution.” They have a strong Web app security presence within the consortium, and the criteria seem solid. You can learn more at http://www.webappsec.org/projects/waf_evaluation/.
Because the security industry is predominantly staffed with network professionals who have migrated to security via firewall, IDS, and IPS work, it is easy to find a great many misconceptions in the way application pen testing is described and practiced. For starters, there is sometimes an unfortunate misconception that pen testing doesn’t follow any disciplined methodology. The thinking is that success depends so heavily on the tester’s experience that there is a direct correlation between that experience and the difficulty of the specific target. However, even with a good store of relevant experience, a tester working without a defined methodology can easily make mistakes, generate inaccurate results, waste time and money, and ultimately lose the client’s confidence (when servicing external clients rather than auditing your own shop) that they will receive an excellent end product.
Now, some entities performing this type of work operate with no methodology at all, and this is certainly not a good practice. It is almost as bad as using a methodology that is far too general, or pen testing based purely on the instinct or knowledge of specific individuals. A common case of this sort of flawed approach can be seen in methodologies that preach nothing more than information gathering, penetration, and documentation. Unfortunately this methodology is pervasive. Far too many companies and IT personnel have the erroneous idea that a penetration test constitutes nothing more than running a security scanner and getting a nicely formatted report with colorful charts at the end of the run. The main fault with this model is that the results depend entirely on what problems the scanner discovers, which depends on what the scanner has been taught by its programmers, which in turn depends on the experience and knowledge base of those particular programmers and any analysts associated with the project. Anyone familiar with the way security scanners and other automated tools work, and with the false positives and false negatives they produce, knows that viewing one of those reports as complete and comprehensive is a grave oversight.
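A toy example makes the false-positive/false-negative problem tangible. The check below mimics one common scanner heuristic, flagging a page as SQL-injectable if the response body echoes a known database error string; the signatures and responses are invented for illustration.

```python
# Hypothetical scanner heuristic: database error strings an automated tool
# might look for in HTTP response bodies. Purely illustrative.
ERROR_SIGNATURES = ["error in your sql syntax", "ora-01756"]

def scan_response(body: str) -> bool:
    """Return True if the response 'looks' vulnerable to this heuristic."""
    lowered = body.lower()
    return any(sig in lowered for sig in ERROR_SIGNATURES)

# False positive: a harmless FAQ page that merely quotes the error message
# gets flagged as vulnerable.
print(scan_response("Our FAQ explains the 'error in your SQL syntax' message"))

# False negative: a genuinely vulnerable app with generic custom error pages
# never echoes the signature, so the scanner reports it clean.
print(scan_response("Something went wrong. Please try again."))
```

Only a human who analyzes the results, and ideally the application itself, can separate the two cases; the raw report cannot.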
There are countless examples of sloppy and inaccurate pen tests done by big and small consulting companies that lack the right skill set, depend on automated tools exclusively, or both. The reports they provide are based on the superficial results of automated tools without any deeper analysis. This is downright irresponsible and potentially leaves a trusting client needlessly exposed because the tool(s) used, and the people using them, may have missed something.
There is great value in coupling Web app pen tests with other social engineering efforts, such as shoulder surfing. Even though this book focuses exclusively on the technical aspects of Web applications, the value of a successful social engineering campaign should not be discounted. Many times the app security experts are fed information from the social engineering efforts of others on a Tiger Team (http://en.wikipedia.org/wiki/Tiger_team). That flow of information can certainly speed up the whole process; just don’t rely on it, because without it you still have to get results. It is also your responsibility to feed back to the rest of the team any data you discover that may be relevant.
There is much more to application pen testing than blindly running a few tools and producing a report. It is imperative that organizations make themselves aware of their risk level in the arena of web technologies. Acting on that awareness is not only critical but is now becoming a legal requirement. Web-based vulnerabilities and the potential attacks and exploits are growing at alarming rates, and they require attention today. The business-related consequences for any organization doing business on the public Internet that fails to take application and data security seriously could be devastating, especially considering the repercussions of non-compliance within areas such as Sarbanes-Oxley.
Organizations need to implement awareness programs through effective pen testing, and they need to implement solutions based on the results of that testing. To protect against potential attackers and breaches, a proactive, layered defense strategy is a must. Truly thorough defensive postures can beat out the offense in these scenarios because attackers will simply move on to an easier target elsewhere. This works out cleanly when security is treated as a legitimate area of attention within a given project’s Software Development Life Cycle (SDLC). More often than not, security is not addressed during the SDLC, and so an objective assessment is in order via external (to the target entity) penetration testing. This book provides you with the necessary knowledge and tools to execute this objective assessment.