10 Important Facts About Website Security
and How They Impact Your Enterprise

January 2011

Jeremiah Grossman

Founder and CTO, WhiteHat Security
A WhiteHat Security Whitepaper
Introduction
Websites are now the number one target for attacks by hackers. Attackers have moved from the well-defended network layer to the more accessible Web application layer that people use every day to manage their lives and transact business. The sites where consumers shop, bank, manage their healthcare, pay insurance, book travel and apply to college are under a near-constant barrage of attacks intent upon stealing credit card numbers and other personal and private information.
The 2010 Verizon Data Breach Investigations Report confirms that the majority of breaches, and almost all (95%) of the data stolen in 2009, were perpetrated by remote organized criminal groups hacking "servers and applications." When companies lack adequate protection and security for their websites, the results are clear: theft of data, malware infection, loss of consumer confidence, and failure to meet regulatory requirements. Certainly, no company today can afford a reputation for websites that are open to hackers. And with many states, the federal government, and the payment card industry mandating full disclosure, it is unrealistic and extremely risky to merely hope that a hacker will attack someone else's website.
How can companies prevent attacks on their websites? The first step is to understand the fundamentals of Web
security. This white paper will examine 10 vital website security issues that affect software developers and information
security professionals.
Understanding these issues will enable companies to grasp the seriousness of the current security problems, and then to establish methods for managing vulnerabilities and to develop an overall strategy for managing website security risk.
Overall, readers can consider the 10 issues presented here as a first step in an exploration of website security that can keep organizations and their customers from becoming victims of malicious hacking.
10 Things You Should Know About Website Security
1. Cloud Computing Has Abstracted Network and Perimeter Defenses
The overwhelming enthusiasm for cloud computing is loud and clear in the information security industry. However, what security professionals now understand, and what many business people and CIOs currently overlook, is that cloud computing demands a completely new way of measuring and providing effective security.
With the development of the cloud, traditional security measures such as firewalls, SSL and encryption methods have
become less effective. In fact, in cloud computing the network layer is abstracted. This means that while the structure of
the cloud insulates users from technical complexities, it also places more responsibility for security onto the owners of
individual websites. Overall, this structural change has led forward-thinking companies to focus on the security of their
website data and applications.
For example, cloud computing means that a company can no longer promise customers that the locked-down perimeter of its website provides an ultimate defense against hacking. Instead, as hundreds of millions of people worldwide use the Internet to bank, shop, purchase goods and services – or even perform a simple online search – every online activity makes their private information openly available. This data, including names, addresses, phone numbers, credit / debit card numbers, and passwords, is routinely transferred and stored in a variety of unspecified locations.
To enable this “legitimate” flow of information, organizations that handle customer activity on the Internet must open
up their firewalls, the devices that once promised to offer impenetrable protection. Billions of dollars, and millions of
personal identities and their corresponding private information, are exposed to hackers who can bypass security or
discover vulnerabilities in security systems – even in Web applications featuring custom security measures.
From a security perspective, firewalls and SSL offer little protection. Web traffic often contains attacks such as Cross-
Site Scripting (XSS) and SQL Injection, both of which can enter through port 80/443 and therefore pass through
the firewall. Also, contrary to a popular misconception among many security providers, SSL can only safeguard data
in transit. SSL cannot secure a website. Once data is resident on a Web server, that data can be compromised
regardless of whether SSL is in use.
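To make the point concrete, the short Python sketch below (illustrative only; the table, the login lookup, and the sample payload are hypothetical) shows how a SQL Injection payload arrives as ordinary port 80/443 traffic that a firewall will pass, and why the fix belongs in the application code, for example as a parameterized query:

import sqlite3

# A SQL Injection probe arrives as ordinary web traffic: to a network
# firewall this is just another HTTP/HTTPS request with a query parameter.
user_supplied = "alice' OR '1'='1"   # e.g. taken from ?username=...

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

# VULNERABLE: concatenating user input into the SQL statement lets the
# attacker's quote characters change the meaning of the query.
vulnerable_sql = "SELECT secret FROM users WHERE username = '%s'" % user_supplied
print(conn.execute(vulnerable_sql).fetchall())   # leaks every row

# SAFER: a parameterized query keeps the input as data, not as SQL.
safe_sql = "SELECT secret FROM users WHERE username = ?"
print(conn.execute(safe_sql, (user_supplied,)).fetchall())   # returns nothing

Note that SSL would happily encrypt the malicious request in transit; only the application-layer handling of the input determines whether the attack succeeds.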
Providing website security requires a specialized skill set focused on the custom Web applications residing on corporate Web servers. Network scanning, by contrast, provides vulnerability data only for packaged, off-the-shelf applications. Therefore, for applications developed in-house, which is standard practice on the vast majority of websites, a custom security approach is required to stop the attacks that can easily bypass the network perimeter of individual sites.

2. Over 80% of All Websites Have Serious Security Vulnerabilities
WhiteHat Sentinel, WhiteHat’s website vulnerability management platform, assesses the security of the largest
companies in e-commerce, financial services, technology, and healthcare. Data from WhiteHat Security’s Website
Security Statistics Report confirm that 83% of websites have at least one serious vulnerability.
With this finding that at least 8 of 10 company websites are vulnerable, and an increase in regulations requiring the
disclosure of any security breach, it’s easy to see that even a single vulnerability can be more costly than ever. Although
it may be hard to imagine, just one hacker exploiting one vulnerability can access customer account data, execute
administrative level functions, defraud the business, or halt a website’s operations completely.
Clearly, organizations that use the Internet must mitigate vulnerabilities and carefully manage their security risks in order
to avoid these problems and the additional damages they cause to a company’s business reputation.
The Web Application Security Consortium (WASC) identifies almost fifty unique classes of website vulnerabilities.
Within these classes are vulnerabilities that range from the common, such as SQL Injection or Cross-Site Scripting, to
the obscure, such as Abuse of Functionality or Insufficient Process Validation. The most important thing to remember
regarding the many types of vulnerability is that Web application vulnerabilities are specific to an organization’s website.
The Web 2.0 functionality in social networking and similar applications has also increased the opportunities for
exploiting vulnerabilities. Furthermore, basic network vulnerability management solutions fail to identify these types of
flaws in custom website code.
Overall, every industry is at risk from the threat of unremediated vulnerabilities. Because the financial services industry was among the first to experience an onslaught of website attacks, its companies were the first to use security professionals to aggressively implement website security. Because of this aggressive action, the websites of financial services companies have, in general, the fewest serious vulnerabilities among major vertical markets. Conversely, the education, retail, and information technology markets remain at much greater risk of security breaches.

3. Faulty Input Validation is the Leading Cause of Website Vulnerabilities
Input Validation is the most important aspect of Web Application Security. User-supplied input can be trusted, and more specifically used, only after the integrity of the data has been validated. User-supplied input includes query strings, POST data, cookies, referrers, and other information originating from outside the website.
Using data only after first validating it is the most important lesson for developers to learn in the creation of secure code. No other defense can substitute for the validation process. By conducting proper input validation, the quality of both an organization's security and its code can be dramatically improved, and most attacks on a site can be avoided.
Guidelines for User-Supplied Input
• Character-set: Accept only data containing a strictly limited and expected set of characters. If a number is expected, accept only digits. If a word is expected, accept only letters.
• Data Format: Accept only data presented in the proper format. If an email address is expected, accept only letters, numbers, the "@" symbol, dashes, and dots arranged in the standard way. This restriction includes enforcing the minimum and maximum length allowed for all incoming data. This data-screening technique should be used for all account numbers, session credentials, usernames, etc., because it limits the potential entry points for incoming attacks.
• Escaping: All special characters in incoming data should be escaped in order to remove any additional programmatic meaning (see the sketch following this list).
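The Python sketch below applies the three guidelines: allow-list character checks, format and length enforcement, and escaping before output. It is illustrative only; the character sets, the 6-12 digit account-number length, and the simplified e-mail pattern are assumptions rather than prescriptive rules.

import html
import re

# Character-set: accept only the characters we expect.
def is_valid_account_number(value: str) -> bool:
    # digits only, with an assumed length range of 6-12 characters
    return bool(re.fullmatch(r"\d{6,12}", value))

# Data format: accept only input in the proper arrangement, within
# minimum and maximum length bounds (a deliberately simple e-mail check).
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9.\-]+@[A-Za-z0-9.\-]+\.[A-Za-z]{2,}")

def is_valid_email(value: str) -> bool:
    return 6 <= len(value) <= 254 and bool(EMAIL_PATTERN.fullmatch(value))

# Escaping: strip the programmatic meaning from special characters
# before the data is echoed back into an HTML page.
def render_greeting(name: str) -> str:
    return "<p>Hello, %s</p>" % html.escape(name)

print(is_valid_account_number("00123456"))           # True
print(is_valid_email("user@example.com"))            # True
print(render_greeting("<script>alert(1)</script>"))  # markup is neutralized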
4. Defense-in-Depth Protection is Necessary
As reports in local, national and international media so often confirm, hackers can successfully attack even companies
with vast resources and large Web security teams. If such large and high profile organizations can become victims of
hacking, how can other online businesses with limited Web security resources protect themselves and their customers?
The answer is Defense-in-Depth.
Defense-in-Depth is a highly practical approach to information security that, when implemented successfully, provides
greater security than a single-point solution. Layers of security are used in this approach, and may include input validation,
database layer abstraction, server configuration, proxies, Web application firewalls, data encryption, OS hardening, etc.
Once in place, these layers must be tested frequently to assure that they remain secure.
Organizations using Defense-in-Depth deploy security solutions at various layers, so that if a single layer is breached,
another security layer is in place to prevent the site from being compromised. Thus, Defense-in-Depth significantly
reduces the risks associated with security breaches.
One of the most effective Defense-in-Depth techniques for website security is combining regularly scheduled website vulnerability assessments (VA) with a Web Application Firewall (WAF). The VA+WAF method lets organizations virtually patch website vulnerabilities as they are identified, which buys more time for code remediation. VA+WAF technology provides companies with an efficient and accurate way to anticipate and defeat attacks on their websites.
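As a rough sketch of how a virtual patch works in practice (illustrative Python only; the vulnerable URL, parameter name, and blocking pattern are hypothetical, and production WAFs offer far richer rule languages), a single assessment finding can be translated into a blocking rule that holds the line until the code itself is remediated:

import re
from urllib.parse import parse_qs, urlparse

# A finding from a vulnerability assessment expressed as a virtual patch:
# block requests to the vulnerable URL whose parameter matches an attack pattern.
VIRTUAL_PATCHES = [
    {
        "path": "/search",                     # hypothetical vulnerable URL
        "parameter": "q",                      # hypothetical vulnerable parameter
        "pattern": re.compile(r"('|--|<script)", re.IGNORECASE),
    },
]

def is_blocked(request_url: str) -> bool:
    """Return True if a request should be blocked by a virtual patch."""
    parsed = urlparse(request_url)
    params = parse_qs(parsed.query)
    for patch in VIRTUAL_PATCHES:
        if parsed.path == patch["path"]:
            for value in params.get(patch["parameter"], []):
                if patch["pattern"].search(value):
                    return True
    return False

print(is_blocked("https://example.com/search?q=shoes"))        # False
print(is_blocked("https://example.com/search?q=' OR '1'='1"))  # True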
When Implementing a VA+WAF Solution, Users Should Keep the Following Key Points in Mind:
• Confirm that the solution is safe for your production websites.
• Measure the accuracy of the vulnerability assessment component. This assures that “legitimate” traffic flows freely
rather than being disrupted or stopped by false positives.
• Demand focused reporting. The dynamic analysis solution must isolate a flaw to the level of a specific vulnerable
URL, an exact parameter name, and the type of attack.
• Maintain a routine assessment schedule. This is absolutely essential in order to keep up to date with additions or changes to website code, and because hackers are developing new attack techniques and launching attacks more frequently.
5. Many Vulnerabilities in Production Sites Originate in Areas Other Than Development Code
One approach to identifying website security vulnerabilities in software is to examine the code – before deployment
– for risk-prone operations. While this process is very valuable, it alone is insufficient to provide a timely or complete
picture of security.
For instance, the execution structure of the code might not be immediately visible, and the interplay of functionality with other parts of a Web application might introduce new vulnerabilities. Overall, the more complex a system is, the more likely a vulnerability will remain hidden.
It is difficult, if not impossible, to maintain perfect synchronization between production systems and quality assurance (QA) systems. This difficulty presents developers and security professionals with a unique challenge. One solution to the problem is the WhiteHat Sentinel Service, which routinely identifies logic flaws, forgotten backup files and debug code, and configuration differences between systems.
Based on its experience providing thousands of website vulnerability assessments, WhiteHat recommends measuring the security of websites both before and after new code is released. This policy ensures ongoing, full protection of a website.
6. Black Box and White Box Assessment are Complementary
A long-standing debate within the Web application security community has centered on which software-testing methodology is most effective: "black box" testing, which includes vulnerability assessment, dynamic testing, and run-time analysis; or "white box" testing, which includes source code review and static analysis.
For strict adherents to the secure coding doctrine, white box testing has been the solution of choice. Black box advocates, however, have championed the value of testing production websites and identifying actual vulnerabilities on live sites.
The security community is now realizing that combining the two methods may yield the most comprehensive results. However, an important element of the debate continues to be the investment of time, money, and skill each method requires, and the need to determine the value of each investment to the organization's overall security posture.
Black box testing and white box testing measure very different things. Overall, when both types of testing are properly
conducted, and remediation is completed – using either code remediation or blocking – an organization should see
a reduction in vulnerabilities and an improved security posture. Furthermore, the organization will have a much clearer
perspective on the best methods to use for managing its website risk.
Even given the more effective performance of the combined black box / white box method, it is still important to understand the individual characteristics of each approach.
Black box vulnerability assessments measure the “hackability” of a website, based on a specific attacker’s resources,
skill, and intent. All websites are potential attack targets, so it is essential for website owners to know the defects in their
sites before attackers do. Based on these types of measurements, black box vulnerability assessments are outcome-
based metrics that measure the security of a system when all security safeguards are in place.
White box source code reviews, on the other hand, measure the number of security defects – and/or help reduce them –
in an application’s current software development lifecycle. Therefore, because software will always have bugs, it is best
to minimize them in order to increase software assurance.
Overall, just as there is little or no gain in comparing the value of network pen testing against patch management – or
firewalls against IPS – there is also little gained by comparing the value of black box testing to white box testing.
Understanding what you want to measure should be the first consideration in deciding which testing methodology to use. What WhiteHat has learned – and this mirrors what the network security world reports – is that, particularly in a SaaS delivery model, the combination of black box and white box testing successfully addresses a number of underserved customer use cases. These include third-party validation and the testing of COTS applications.
The integration of black box and white box testing is already yielding a significant level of vulnerability correlation, in some cases identifying the specific line or code block responsible. This capability helps prioritize findings into actionable results – such as knowing which vulnerabilities an attacker can confidently exploit.
Looking ahead, it is possible that static analysis will be able to measure exactly how much code coverage is being achieved during dynamic analysis and, furthermore, point out gaps such as unlinked URLs, back doors, and extra form parameters. These capabilities will lead to markedly better, measurable comprehensiveness for both static and dynamic analysis.
7. Attackers Win When Security Controls Refuse to Focus on the Threat
In most cases, organizations spend IT security dollars protecting their assets from yesterday’s attacks, at the network /
infrastructure layer, while overlooking today’s real threats. The 2010 study, “The State of Application Security,” conducted
by the Ponemon Institute, found that most businesses, despite having numerous mission-critical applications accessible
via their websites, fail to allocate sufficient financial and technical resources to secure and protect Web applications, thus
leaving their corporate data vulnerable to theft.
According to the study, the majority of respondents believe that insecure Web applications present the greatest threat to corporate data. However, 70% noted that their organizations fail to treat application security as a strategic initiative. They also reported that their organizations budget insufficient resources to protect their companies from the risk posed by Web application threats.
The study also found that only 18 percent of IT security budgets were used to address the threat posed by insecure Web applications, while 43 percent were spent on securing networks and hosts – the two areas respondents were least concerned about with regard to protecting corporate assets.
Clearly, it is essential to a company's well-being that its executive officers recognize the importance of website security and then fund IT operations at a level that matches that strategic importance. Until a company's website security is properly funded, attackers will continue to make huge amounts of money – illegally – at the expense of the company's legitimate profits.
8. Despite the Most Regimented SDL, All Software Is Flawed
When asked, most security experts advocate building security into the SDL, and for good reason. Just consider how long it can take to remediate flaws discovered in production code. Security experts also know that attacks continue to hit those very applications while they remain vulnerable to exploitation.
Important as the SDLC process is, it often fails to account for unknown attack techniques, known techniques that are "flying under the radar," and the massive amounts of old insecure code currently in circulation. With 273 million websites live, and new code being added every day, relying on secure coding or the SDLC alone to solve security issues is completely unrealistic. And because the industry has failed to recognize the limitations of security in the SDLC, it has been unprepared for the problems those limitations create, and has inevitably paid a high price for that failure.
Secure code, if there is such a thing, is secure only for a period of time – and that period of time is impossible to predict. This is the ongoing dilemma.
We can't future-proof our code. New attack techniques are constantly being developed and quickly introduced on the Web, and existing attack techniques gain strength.
Secure coding "best practices", even if they are perfectly implemented, typically account only for the attack techniques already known. So when new attacks occur, enterprises may suffer. For example, after years of awareness of XSS, SQL Injection, and CSRF threats, the industry is still struggling with the damage they can do. Furthermore, no development team has the time to review all existing code for vulnerabilities to these types of threats.
Therefore, Web security must be seen in a new way. It must be viewed in a way that accepts the fact that neither code nor developers will ever be perfect – nor should we ever expect them to be. To compensate for that fallibility we need solutions, including Web application firewalls (virtual patches), that "wrap around" our code to protect it. The professionals responsible for securing a website also need new solutions, products and / or strategies that can identify emerging threats, allow a much faster response to them, and adapt more quickly to the constantly changing landscape of the Web.

9. Resolving Website Security Issues Requires Updates to Custom Code
Virtually all security professionals understand that network vulnerabilities differ from Web application vulnerabilities, and that the differences become even more apparent when one considers the work required to fix each type. While most security professionals are familiar with the patches available for network vulnerabilities, some may be unaware that every vulnerability fix on a website requires an update to custom code. Furthermore, each repair of this type requires a code push that can itself introduce a new vulnerability. So, while there may be fewer Web application vulnerabilities than network vulnerabilities, remediating each one has become both more complex and more time consuming. Therefore, in order to maintain secure applications it is essential to continuously assess the impact of each vulnerability fix.
10. Website Security Is More Than Counting Vulnerabilities
Website security means more than knowing the total number of vulnerabilities currently threatening a company's websites. It is also about managing risk. That's why website security data is also essential to a company's auditors, compliance personnel, product management, and development organizations.
And because there is no independent software vendor pushing out standard patches for a website's custom code, as there is for commercial products, the rules of traditional OS and application patching are insufficient, inadequate, and simply fail to work when applied to website security issues. With rare exceptions, each website runs on unique, custom code. Furthermore, websites are even more exposed because they are, by design, open and available to the public – and therefore, of course, to hackers.
A website risk management program delivers the most value to an organization during the post-deployment, or
operational, phase of the application lifecycle. The operational phase of an application’s life is, by far, the most
important phase and, generally, is of the longest duration. Production websites are also the most prevalent attack
targets, so the majority of an organization’s security resources should be used to protect them.
To be successful, companies with an online presence need a common-sense plan that includes a carefully considered, risk-based website security strategy. It must be a strategy that can implement solutions at both the right time and the right place in order to: (1) maximize returns, (2) demonstrate measurable successes, and (3) justify the investment in website security in language that the executive staff understands.
Such a strategy can begin by addressing the most common question, "Where do we start?" One simple but powerful answer is to first locate a company's existing websites, measure their value, and prioritize their importance.
Next, provide business units with the information they need to resolve relevant Web security issues, answering questions such as "What do we need to be concerned about most?"
Finally, determine the most likely security threats, based on: (1) the type of people who are likely to attack the enterprise's sites, (2) the capabilities of these individuals or groups, (3) their motives for attacking, and (4) the vulnerabilities that they are most likely to target.
Only when a company knows the extent of its Web investment, the worth of that investment, and what is required to protect that investment – based on the company's overall business objectives – can adequate, cost-effective security be applied.
Conclusion
Several hundred factors, rather than dozens, must be understood in order to build and maintain an effective website security program. However, the most critical factors to consider when creating a website risk management strategy have been presented in this white paper.
Whether a company is evaluating its website security for the first time, has had one-time assessments performed by consultants, or uses a vulnerability scanning tool, the three essential requirements for protection from website attacks and the vulnerabilities they exploit are accuracy, comprehensiveness and consistency. It is also important to integrate website security into an organization's overall security planning.
Website risk management requires ongoing attention to risks. In fact, with the frequency of attacks increasing each day,
and new attack methods being introduced almost as quickly as existing methods are discovered and defeated, every
enterprise needs to develop a comprehensive plan to defeat website threats.
In order to create such a plan, information security and software development teams must identify website
vulnerabilities during both website development and production, mitigate them quickly and efficiently, share the data
within the organization, track the progress of fixing the vulnerabilities, and provide management with updates of the
security posture as needed.
The WhiteHat Security Approach. WhiteHat Security has developed a four-phase approach that enables enterprises to
control website risks from both strategic and tactical perspectives.
The approach ensures that organizations can take the first steps toward benchmarking their website security posture,
so that everyone with the “need to know” in that organization can easily measure the effectiveness of the website
security program. The WhiteHat Sentinel service makes this "Access to Essential Information" level of website vulnerability management possible. Sentinel's "Open XML" application programming interface (API) enables the sharing of website vulnerability data across the organization, as well as integration with bug tracking systems, security information management systems, and both IPS/IDS and Web application firewalls.
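For teams planning such integrations, the Python sketch below illustrates the general pattern of parsing an XML vulnerability export and routing each finding into a bug tracking system. The XML layout, field names, and the ticket-filing call are purely hypothetical placeholders, not the actual Sentinel schema or API.

import xml.etree.ElementTree as ET

# Hypothetical example export: not a real Sentinel document, only a
# placeholder to show the parse-and-route pattern.
EXPORT = """
<vulnerabilities>
  <vulnerability id="101" class="Cross-Site Scripting" severity="high">
    <url>https://example.com/search</url>
    <parameter>q</parameter>
  </vulnerability>
</vulnerabilities>
"""

def file_bug(summary: str, severity: str) -> None:
    # Placeholder for a real bug tracker call (e.g. an HTTP POST to its API).
    print("Filed: [%s] %s" % (severity.upper(), summary))

def route_findings(xml_text: str) -> None:
    root = ET.fromstring(xml_text)
    for vuln in root.findall("vulnerability"):
        summary = "%s at %s (parameter: %s)" % (
            vuln.get("class"),
            vuln.findtext("url"),
            vuln.findtext("parameter"),
        )
        file_bug(summary, vuln.get("severity"))

route_findings(EXPORT)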
WhiteHat Security, Inc. | 3003 Bunker Hill Lane | Santa Clara, CA 95054 | 408.343.8300 | www.whitehatsec.com
Copyright © 2011 WhiteHat Security, Inc. | Product names or brands used in this publication are for identification purposes
only and may be trademarks of their respective companies. 012711
About the Author
Jeremiah Grossman is the
Founder and Chief Technology
Officer of WhiteHat Security,
where he is responsible for
Web security R&D and industry
evangelism. Mr. Grossman has authored dozens of articles and white papers, is credited with the discovery of many cutting-edge attack and defensive techniques, and is a co-author of "XSS Attacks: Cross Site Scripting Exploits and Defense." His work has been featured in the
Wall Street Journal, NY Times, USA Today,
Washington Post, NBC News, and many
other mainstream media outlets.
As a well-known security expert and industry
veteran, Mr. Grossman has been a guest
speaker at hundreds of international events
including BlackHat Briefings, OWASP, RSA,
ISSA, SANS, Microsoft’s Blue Hat, and many
others. Mr. Grossman is also a co-founder of the Web Application Security Consortium (WASC) and was previously named one of InfoWorld's Top 25 CTOs.
Before founding WhiteHat, Mr. Grossman
was an information security officer at Yahoo!,
where he was responsible for performing
security reviews on hundreds of the
company’s websites.

About WhiteHat Security, Inc.
Headquartered in Santa Clara, California,
WhiteHat Security is the leading provider
of website security solutions that protect
critical data, ensure compliance and narrow
the window of risk. WhiteHat Sentinel, the
company’s flagship product family, is the
most accurate, complete and cost-effective
website vulnerability management solution
available. It delivers the flexibility, simplicity
and manageability that organizations need to
take control of website security and prevent
Web attacks. Furthermore, WhiteHat Sentinel
enables automated mitigation of website
vulnerabilities via integration with Web
application firewalls.
The WhiteHat Sentinel Service
WhiteHat Sentinel is the most accurate, complete and cost-effective
website vulnerability management solution available. It delivers the
flexibility, simplicity and manageability that organizations need to take
control of website security and prevent Web attacks. WhiteHat Sentinel
is built on a Software-as-a-Service (SaaS) platform designed from the
ground up to scale massively, support the largest enterprises and offer
the most compelling business efficiencies, lowering your overall cost of
ownership.
Unlike traditional website scanning software or consultants, WhiteHat
Sentinel is the only solution to combine highly advanced proprietary
scanning technology with custom testing by the Threat Research Center
(TRC), a team of website security experts who act as a critical and integral
component of the WhiteHat Sentinel website vulnerability management
service.
Scalable
WhiteHat Sentinel was built to scale and assess hundreds, even thousands of
the largest and most complex websites simultaneously. This scalability of both
the methodology and the technology enables WhiteHat to streamline the process
of website security. WhiteHat Sentinel was built specifically to run in both QA/
development and production environments to ensure maximum coverage with no
performance impact.
• Designed to scale and assess the largest and most complex websites
simultaneously
• 3,000+ websites under management
Accurate
Every vulnerability discovered by WhiteHat Sentinel is verified for accuracy (by the TRC) and prioritized, virtually eliminating false positives and radically simplifying remediation. So, even with limited resources, remediation is faster because teams see only real, actionable vulnerabilities.
• The WhiteHat Sentinel process identifies only real vulnerabilities, so you get more accurate results faster than any other security solution can provide.
Predictable Costs – Unlimited Assessments
All the costs involved in building a scalable infrastructure and technology are built
into the WhiteHat Sentinel Service. So, a company does not have to bear the
burden of an upfront investment in hardware, software and personnel.
• No headcount required – scanner configuration and management run by
WhiteHat’s Threat Research Center (TRC)
• No hardware or scanning software to install