Protect Your Assets

Why research is fundamental to security

stuarthatto · 08-27-2013 11:28 AM (edited 09-16-2015 04:46 PM)

Every time I have presented HP TippingPoint solutions over the last seven, almost eight, years, I have always included at least two slides about security research, and in particular vulnerability research.  It always seemed well accepted that a security vendor should ‘do’ research, and so these slides were passed over quickly with the requisite ‘oohs’ and ‘aahs’... until recently, that is.


I can’t say whether audiences no longer believe research is relevant, or whether the new thinking is that security is all about dashboards and compliance targets, and that all security solutions are simply commodities that make the dashboard go green.  One thing I can say with absolute certainty is that without security research, those dashboards would look like the water in Sharknado.


So why research?


To answer that, we need to look at the difference between an exploit-centric approach and a vulnerability-centric approach to detection and protection.


Security via exploit detection can, by implication, never give zero-day protection, nor indeed +1 or +100-day protection.  This methodology needs an exploit in the wild before a signature can be formed to identify the malware.  Oftentimes this signature is very narrow, yet it can still cause false positives, where legitimate traffic is blocked or alerted upon.  Exploit detection can be bolstered to a degree with heuristics or ‘sandboxing’ techniques, but these can impose a large penalty in processing time, which creates a knock-on issue of poor network or application performance.


The vulnerability protection approach takes the 180-degree opposite view.  If we know what the vulnerability in an application, operating system or utility software is, then it is possible to create a complex filter or signature that closes that vulnerability before an exploit ever enters the wild.  This approach provides true zero-day protection, and in many cases minus-day protection.


Reviewing the concept


I know this can be a difficult concept to grasp. To illustrate the difference, let’s go back to a very old worm that infected many thousands of enterprises.  I choose this worm because it is well understood, as is the vulnerability it exploits.  I know it’s very old, but it has relevance in this example.  Also, it’s easy to describe in a blog!


In 2003, the worms known as “Blaster” and “Nachi” exploited a buffer overflow vulnerability in Microsoft’s proprietary implementation of the DCE-RPC network protocol.  At the time, the DCE-RPC protocol was fairly well understood, but its internals remained proprietary.  So here is where we hit the first problem: undocumented protocols cannot be well covered by the protocol decoders in security devices, and because of this, other methods of detection are needed to complement or replace the decoder.


The exploit-centred approach must first wait for an exploit of this vulnerability to appear in the wild before a signature can be created, as I said above.  And because Microsoft eventually patched the bug, designated MS03-026, the patch code could be reverse-engineered by the hacking community and an exploit created. Of course, this happened many weeks before most IT organisations could implement the emergency patch.


An easy way to write a signature is to use a distinctive string from an exploit’s shellcode as a pattern match.  For example, the following hex string was found in both HD Moore’s exploit and the Blaster worm, and it can be used as a signature to detect both attacks.  The string contains machine instructions that are passed directly to the victim’s processor once the overflow succeeds.


EB 19 5E 31 C9 81 E9 89 FF FF FF 81 36 80 BF 32 94 81 EE FC FF FF FF
E2 F2 EB 05 E8 E2 FF FF FF 03 53 06 1F 74 57 75 95 80 BF BB 92 7F 89
5A 1A CE B1 DE 7C E1 BE 32
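
To make this concrete, here is a minimal sketch in Python of how such an exploit signature works: a straight byte-string match against a captured payload.  The exploit_signature_match function and the detection hook that would supply the payload are illustrative assumptions, not any real engine’s API.

# The shellcode fragment quoted above, packed into a byte pattern.
BLASTER_SHELLCODE = bytes.fromhex(
    "EB195E31C981E989FFFFFF813680BF329481EEFCFFFFFF"
    "E2F2EB05E8E2FFFFFF0353061F7457759580BFBB927F89"
    "5A1ACEB1DE7CE1BE32"
)

def exploit_signature_match(payload: bytes) -> bool:
    """Flag any payload that contains the known shellcode bytes."""
    return BLASTER_SHELLCODE in payload

# The match fires on the known exploit, but a payload carrying different
# shellcode against the same vulnerability sails straight past it; that is
# the false-negative problem discussed below.
assert exploit_signature_match(b"junk" + BLASTER_SHELLCODE + b"junk")
assert not exploit_signature_match(b"different shellcode, same vulnerability")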


The advantages of such a signature are:

  • It is easy to create quickly
  • It places a light load on most detection engines
  • It should not false positive on non-attack traffic 

The disadvantage is that the filter is specific to a particular exploit, or to a handful of exploits that use the same shellcode.  Hence, the signature has a terrible false-negative problem: if a different piece of exploit code is used against the same vulnerability, the filter will be blind to the attack.  In fact, the hex string above would not match the Nachi worm. Clearly, a much better signature would work more like a human analyst, who would do the following (sketched in code after the list):


1.  Watch the TCP session setup

2.  Watch for a BIND to a vulnerable RPC interface

3.  Look for a REQUEST for the appropriate function call

4.  Navigate to the vulnerable parameter in the argument list

5.  Notice that an overlong server name has been provided (is the server name longer than 32 bytes?)
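
Sketched in Python, with all of the real DCE-RPC parsing waved away, that sequence of checks might look like the following.  The RpcBind and RpcRequest structures, the interface UUID, the opnum and the vulnerability_filter function are illustrative assumptions standing in for a real protocol decoder, not the actual TippingPoint filter.

from dataclasses import dataclass

# Assumption: a real engine reassembles the TCP session (condition 1) and
# decodes DCE-RPC; these structures stand in for that decoder's output.
VULNERABLE_IFACE = "4d9f4ab8-7d1c-11cf-861e-0020af6e7c57"  # illustrative UUID
VULNERABLE_OPNUM = 0        # placeholder for the vulnerable function call
MAX_SERVER_NAME = 32        # bytes; longer names never occur legitimately

@dataclass
class RpcBind:
    interface_uuid: str     # condition 2: BIND to a vulnerable interface

@dataclass
class RpcRequest:
    opnum: int              # condition 3: REQUEST for the vulnerable call
    server_name: bytes      # condition 4: the parameter that overflows

def vulnerability_filter(bind: RpcBind, request: RpcRequest) -> bool:
    """Trigger only when every necessary condition for the attack holds."""
    if bind.interface_uuid != VULNERABLE_IFACE:
        return False
    if request.opnum != VULNERABLE_OPNUM:
        return False
    # Condition 5: an overlong server name is the one constant of the attack.
    return len(request.server_name) > MAX_SERVER_NAME

The point is not the parsing detail but the shape of the logic: each check narrows in on a condition the attacker cannot avoid, which is why a mutated exploit still trips the filter.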


These events represent the “necessary conditions” that must be met for any attack to succeed.  By checking for this specific sequence of events, it is possible to implement a signature with zero false negatives.  Furthermore, the fifth item on the list of criteria guarantees that the filter is free of false positives.


This is true because a server name longer than 32 bytes will never be used in the course of normal communications.  If the IPS can detect condition five precisely, just as outlined above, the device will have no false positives and no false negatives.  A filter that implements this type of logic is called a vulnerability filter.


You’ll notice here that this approach doesn’t look for the exploit; it looks for the conditions that would be needed for an exploit to trigger the vulnerability.  This has a very real advantage over exploit signatures: vulnerability conditions remain constant, while exploits can mutate.  If an exploit mutates, the exploit signature will no longer trigger, but the vulnerability filter will still see the same conditions being exploited and will fire.  Think of this as a virtual patch, one that buys organisations the time to implement official vendor patches.


Clearly, exploit signatures are also needed, because no security organisation can research EVERY vulnerability in EVERY piece of software.  Exploit signatures therefore offer a fast way to cover an outbreak while the vulnerability is researched.  However, when you consider that HP TippingPoint, DVLabs and the Zero Day Initiative have been credited with over 20 percent of all Microsoft vulnerability discoveries in the last seven years, more than eight times our closest competitor, you do have to wonder whether the exploit-signature approach has ever really provided true zero-day protection.



Source: Compiled from public data available at


Click here to learn more about security that protects against today’s advanced cyber threats.

About the Author


EMEA Product Manager, TippingPoint
