IT Security: Watch Out for Insecure Software

CIOs can gain better data protection by ensuring applications are built securely.

by Jack Danahy / July 28, 2008

An extended series of cyber-attacks beginning in 2003, allegedly originating in China, succeeded in penetrating several U.S. government and contractor networks; the breadth of the intrusions took many security professionals by surprise. Mainstream-media outlets raised the cyber-attacks' news profile, using the moniker "Titan Rain" assigned by federal investigators.

Many in the federal government, and others in the security industry, were convinced the attacks were the work of Chinese government cyber-espionage experts, given the attackers' apparent origins, the targets chosen and the evident intent of the attacks. The attackers systematically probed U.S. networks for vulnerabilities and exploited weaknesses to expose and capture sensitive government information -- an accusation Beijing flatly denies. The federal government quickly classified its investigation and pursued the hackers in secrecy.

While the Titan Rain attacks aren't unique, they serve to illustrate that the profile of today's hackers has matured. The days when attackers were categorized as amateurs content with defacing Web sites are over. Cyber-espionage now targets sensitive military and business information at agencies including the U.S. Department of Defense and NASA, while sophisticated criminal attacks strike state and local government databases. Foreign governments have voiced a similar concern about the new depth and frequency of attacks they experience from non-native sources. Regardless of the source or motivation behind these attacks, one thing is clear -- new and innovative threats are raising concerns about the safety of our nation's most sensitive data.

Beyond threats to mission and operational strategy, defense-system schematics, and other national-security data, there is tremendous value in the sale of sensitive personal information. While some hackers target classified government data, others value personally identifiable information, such as Social Security numbers, credit card numbers and bank account details. Attackers routinely search for vulnerabilities in computer systems and applications that will expose confidential information. The relative value compared to the risks involved is clearly in the eye of the beholder.

Over the last several years, attacks have matured, yielding more intelligence for attackers and a deeper level of access into critical business systems. The increasing speed of information exchange and the drive to integrate partner systems make this issue even more urgent. Government agencies face unique issues related to national security, while businesses and governments alike face the costs of IT cleanup, legal fees, notifications, lost confidence and an increased customer service load.

To many security professionals, the identity and motivation of hackers is less important than identifying, prioritizing and eliminating the overall risk to their organizations caused by software-security vulnerabilities. A pervasive lack of consistent security exists within applications throughout almost every organization, which virtually ensures attackers' success.

Despite the different types of hackers and the varying data targets they seek, hackers rely on similarly malicious technologies to retrieve information. Hackers worldwide are inventing and executing new exploits and techniques to circumvent today's security technologies in their efforts to break the weakest links in the security chain. Some hackers collaborate -- sharing or finding tips and tricks on the Internet -- while others work alone, hoping to identify and capitalize on unexposed vulnerabilities or design flaws before countermeasures can be created.

Taking Aim at Insecure Software
Many of today's hackers seek the path of least resistance and aim first for low-hanging fruit. As private-networking technologies have become more widely adopted and networking security has improved, hackers increasingly have turned to the least secure targets within organizations -- software applications. Analysts estimate that applications experience almost 75 percent of all new attacks.

Today's end-users are bombarded with malware, viruses, phishing attacks and other social engineering attempts, and systems are infected with rootkits, keystroke loggers, logic bombs and spyware. The most successful attackers combine the latest tactics with rapid exploitation of newly discovered security weaknesses, taking advantage of busy network and system operators who are often one step behind.

On a global front, the uneven attention the international community gives to cyber-attacks means the U.S. government and U.S. businesses cannot always rely on the timely cooperation of foreign governments to help block ongoing attacks when their origin is identified. There is only a middling expectation of success as they attempt to track down attackers after the breach has occurred.

The combination of sophisticated, multifaceted threats and the insufficiency of a solely response-based defense posture has led to an evolution in thinking about protection. As a result, the best defense against the theft of sensitive data is to understand and ensure the security of the applications themselves -- a hardening of the weakest links in the security chain.

Rather than taking a solely reactive stance on security breaches, those who are in charge of protecting sensitive data must turn their attention to shoring up weak applications before they become liabilities. In much the same way industry best practices and regulations are driving more rigorous examination of how applications treat sensitive data, government must also move to the next level of understanding and assurance.

Making Secure Software an Agency Priority

Fortunately, the government is beginning to make the connection between national security, data security and application security. U.S. Homeland Security Secretary Michael Chertoff, while admitting that the United States still has significant work to do in cyber-terrorism defense, has worked closely with the White House in identifying new areas for investment, including a recently added $6 billion line item to build a system to protect against emerging digital threats.

While that effort continues, responsible security personnel must recognize that this problem can't wait. Government information security managers must factor application security into their overall risk-management operation, allowing them to adopt an appropriate and layered security approach that recognizes the unique vulnerabilities and requirements of each area of the infrastructure, from perimeter to individual applications. In so doing, they can offer appropriate safeguards and protection of sensitive data at each level.

To get serious about information defense, security managers must identify and address vulnerabilities at the least understood and least protected point in their systems. Project teams must treat every application they commission, create or reassess with skepticism, and development teams must assume every application -- existing or under development -- is a security risk until proven otherwise.

Outsourcers must be given more specific contractual guidance on the types of security expected from them and be held accountable for the security of the systems they deliver. In order to protect critical and sensitive information in advance of growing digital threats, CIOs and IT managers must think like hackers and evaluate their own applications before cyber-hackers have the opportunity to exploit weaknesses. Integrity needs to be ensured across the entire enterprise for existing legacy applications, applications under development and outsourced applications.

Finding the Source of Risk
The path to effective, secure software development requires source-code review processes that accomplish three things:

o Create consistent processes, policies and a culture of improved security.
o Provide the whole security picture: When it comes to dangerous vulnerabilities, large-scale design flaws typically trump individual coding errors. Fixing individual vulnerabilities will have little effect if data is not encrypted, authentication is weak or the application has open backdoors.
o Prioritize remediation: When reviewing existing code, developers must identify all vulnerabilities and then remediate the greatest risks first.
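The prioritization step above can be sketched as a simple risk-ranking exercise. The findings, fields and scoring scheme below are hypothetical illustrations, not the output of any particular review tool:

```python
# Hypothetical findings from a source-code review; severity and exposure
# values are illustrative assumptions.
findings = [
    {"file": "login.c", "line": 42, "issue": "SQL injection",
     "severity": 9, "exposed": True},
    {"file": "util.c", "line": 10, "issue": "unchecked strcpy",
     "severity": 7, "exposed": False},
    {"file": "admin.c", "line": 88, "issue": "hard-coded password",
     "severity": 8, "exposed": True},
]

def risk(finding):
    # Externally reachable flaws outrank internal ones of equal severity.
    return finding["severity"] + (3 if finding["exposed"] else 0)

# Remediate the greatest risks first.
worklist = sorted(findings, key=risk, reverse=True)
for f in worklist:
    print(f'{f["file"]}:{f["line"]} {f["issue"]} (risk {risk(f)})')
```

Real programs weigh many more factors -- exploitability, data sensitivity, compensating controls -- but the principle is the same: rank every finding, then work the list from the top.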

To effectively measure the risk posed by any application, CIOs, security analysts or developers should watch closely for two types of errors: implementation flaws, which are the most familiar; and design flaws, which pose the greatest risk in today's Web-enabled applications.

o Implementation Flaws: These quality-style defects in code are fairly atomic and can typically be identified and remediated in isolation. They are caused by poor programming practices. Examples are buffer overflows, which result from mismanagement of memory, and race conditions, which result from call-timing mismatches.
o Design Errors: These include the failure to use or adequately implement security-related functions, such as authentication, encryption and validation of data input and application output, as well as reliance on insecure external code.
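A minimal sketch can make the design-error category concrete. The toy database and function names below are illustrative assumptions; the contrast is between a query built by concatenating unvalidated input (a design flaw) and a parameterized query that treats input strictly as data:

```python
import sqlite3

# Toy in-memory database for demonstration; table and values are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Design flaw: unvalidated input is concatenated into the query,
    # so attacker-controlled 'name' can rewrite the SQL itself.
    return db.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles 'name' strictly as data.
    return db.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload returns every row through the unsafe path...
print(find_user_unsafe("x' OR '1'='1"))
# ...but matches nothing when the input is treated as data.
print(find_user_safe("x' OR '1'='1"))   # []
```

No line-level patch to the unsafe function fixes the underlying problem; the design -- how the application handles untrusted input -- has to change.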

The process of creating a more secure application goes beyond defining the need for security in the development process and looks at all the places in code where design flaws may or do exist. Ensuring new and existing code is securely developed requires processes and procedures for detecting vulnerabilities and tools to help.

Don't Chase Silver Bullets
As with any security challenge, there is no silver bullet. Two of the most commonly used approaches to application analysis are manual code review and penetration testing. While both are useful, neither method alone is sufficient to cope with the breadth of existing and potential design errors, and therefore neither can, by itself, ensure code is secure. Manual code review is time-consuming and expensive, and spotting flaws or potential operating errors by eye is extremely difficult.

Penetration testing can only discern a small subset of errors an application may contain. While this method is useful for highlighting such errors, it provides an incomplete picture of overall application security.

Automated tools can make the entire process more manageable. The best tools can pinpoint vulnerabilities at the precise line of code and provide detailed information about the type of flaw, the risk it poses and how to fix it.
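To illustrate the idea of line-level reporting, here is a deliberately minimal pattern-based scanner. The rule set is a hypothetical sketch; commercial analyzers build on parsing and data-flow analysis rather than keyword matching, so this only shows the shape of the output -- file line, flaw type, guidance:

```python
import re

# Illustrative rules flagging calls that are common sources of
# implementation flaws. A real tool would understand program semantics.
RULES = {
    r"\bstrcpy\s*\(": "unbounded copy -- possible buffer overflow",
    r"\bgets\s*\(": "reads input without a length limit -- buffer overflow",
    r"\bsystem\s*\(": "shell execution -- possible command injection",
}

def scan(source: str):
    """Return (line_number, message) pairs for every rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = """int main(void) {
    char buf[16];
    gets(buf);
    system(buf);
}"""

for lineno, message in scan(sample):
    print(f"line {lineno}: {message}")
```

Even this crude approach pinpoints the exact lines to review; the value of mature tools lies in doing the same with far fewer false positives and with remediation advice attached.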

While all of the technical terms discussed in this article may not be well known to information defenders, expert hackers are aware of each potential path to data. Some CIOs may balk at understanding the minutiae of application defense, but they should note that the enemy, in this case, is interested in the tiniest chink in an organization's armor. The wake-up call has been loud and clear.

Public-sector CIOs must be sure their agencies have a secure software acquisition life cycle, whether for development and maintenance or for software security certification and accreditation. Only then can the nation truly rely on the security and integrity of its critical data and operations.


Jack Danahy Contributing Writer
Jack Danahy is founder and chief technology officer of Ounce Labs and is one of the industry's most prominent advocates for data privacy and application security. He is a member of the U.S. Department of Homeland Security Task Force on Security across the Software Development Lifecycle; he participated in the Department of Defense's Homeland Security Information Sharing initiative, and he leads the technical development of the Ounce 5.0 source code analysis tool.