Grounded in the Cloud

Re: Will the ‘real’ IT security researcher please stand up?

Wodisch

Hello everybody,

 

Let me try to answer some of the points from the original post and from some of the other responses.

 

First, as others already wrote, the analogy used does not fit.

To offer a (hopefully) better one: it seems the work of the Watergate journalists is still appreciated (except by the Nixons) - but nobody expected them to "fix democracy".

All they did was shout "fire" - but that was what was needed!

So IMHO this is still necessary (to what degree is another question entirely).

 

One of the reasons (as I see it, after over 25 years in IT, including development, troubleshooting, administration, and teaching) is very simple - and very stupid: the student books, trainings, and so on still do not care about teaching programming in a stable/defensive/reliable way!

Take the original "hello world" (or any of today's newer versions):

It does not care about the return value of "printf()"; it is not even mentioned that there is a return value, and nobody seems to wonder what it might be good for (AFAIK).
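To illustrate (a minimal sketch of my own, not taken from any textbook): even "hello world" could be taught defensively, simply by checking what "printf()" actually returns.

#include <stdio.h>

int main(void)
{
    /* printf() returns the number of characters written,
     * or a negative value on error (e.g. stdout closed,
     * or redirected to a full disk). */
    int written = printf("Hello, world!\n");
    if (written < 0) {
        fputs("error: could not write greeting\n", stderr);
        return 1;
    }
    return 0;
}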

This might go back to the principle of "partial correctness", where the theory of operation/programming only cares about "valid input" and never about anything else (leading to a whole industry delivering "fuzzing tools", and countless hacks).
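To make the "partial correctness" point concrete (again my own sketch; "parse_port()" is a hypothetical helper, not from any real codebase): the usual habit is "port = atoi(argv[1]);", which works on valid input and silently misbehaves on everything else - a defensive version rejects everything outside the specification.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* Defensive parsing: accept only a clean decimal number
 * in the valid TCP/UDP port range, reject everything else. */
static int parse_port(const char *s, int *port)
{
    char *end = NULL;
    errno = 0;
    long value = strtol(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0')
        return -1;               /* not a clean decimal number */
    if (value < 1 || value > 65535)
        return -1;               /* outside the valid port range */
    *port = (int)value;
    return 0;
}

int main(int argc, char **argv)
{
    int port;
    if (argc < 2 || parse_port(argv[1], &port) != 0) {
        fputs("usage: demo <port 1-65535>\n", stderr);
        return 1;
    }
    printf("listening on port %d\n", port);
    return 0;
}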

 

Going on from that point of view, I do agree that there is a lot lacking in today's "research": we do need

- research about "why does nobody care"

- research on "how to teach it better"

- research on "ROI on doing it right"

- research on "how to communicate security issues to the developers"

 

To use just another example: for hundreds of years, people learned "how to use a sword" with blunt weapons first, and while wearing protective armor - nobody would try to teach youngsters with razor-sharp Japanese katanas, wearing only t-shirts, in the very first lesson.

But that is what we do in IT:

- "C" is NOT the candidate for the "first programming language" and it never was!

- learning to configure tools like "HP Operations Manager" or "HP BSM" on the job, while installing a production system, was never a good idea!

- designing a distributed system by "letting it grow" (think about SCADA systems) has proven to be disastrous!

 

Maybe we need even bigger disasters before we can start with a safer mindset?

 

But can we blame all of that on just that small group called "security researchers" (making them the bad guys, then)?

 

Why don't we read about the people who made the (very bad) decisions NOT to include any kind of security in the design, the development, the roll-out, and/or the maintenance of these systems?

Certainly, we have all read about people like @0xcharlie or @taviso being blamed for their research - but what are the names of the people who caused the bad designs/systems/implementations that allowed them to find something?

 

Regards,

Wodisch

About the Author

Wodisch

Helping people to know/understand/implement/run/troubleshoot/tune HPE software in over 30 countries, for over 30 years.

Comments
Wh1t3Rabbit

Well now, that certainly is a stance against much of the established security community, which prides itself on the discovery and exploitation of 0-day security bugs. An interesting perspective, but I have to tell you I agree with a significant amount of what you're saying - all break and no fix makes for a very poor culture of destruction.

 

I'm not sure that information security takes itself as seriously as other fields of legitimate research, so many of the rules simply don't appear to apply, as you've stated. This is actually something I've been writing and speaking about for a long, long time on my blog (hp.com/go/white-rabbit). If we're just a bunch of 'breakers', we're really not solving any problems, as you've pointed out. Security researchers are infamous (and this is not a good thing) for "dumping off a vulnerability" on the doorstep of a company or organization, then threatening to expose them for having it in their code or architecture. Yes, it's the company's responsibility to remediate or ensure it doesn't happen again, and yes, many organizations simply act as if they don't care and drag their feet ... but this falls back on the security research community.

 

Go to any conference - any security conference, that is - and look around. The talks that get the big crowds are the "How I hacked ... and you can too" ones, while the talks offering real solutions to serious problems are sparsely attended ... why is that? It's partly human nature - none of us can turn away from a train wreck - and partly the need to be in the spotlight and 'cool', I guess. Or maybe we just need to demonstrate our mental superiority? If that's the case, I suggest we do it the way your labs are doing it - by solving problems that plague organizations globally. Solving real security issues at massive scale is where security research should focus more keenly today. Just my $0.02 ...

 

/Wh1t3 Rabbit.

Adrian Sanabria

The "Fire" analogy doesn't really work. The "breakers" are not at all analogous to someone shouting fire. That is the staff you (hopefully) have watching for alerts and incidents. The reason is that, when the security researcher finds an issue, there is no fire yet, because they are not the bad guys.

 

If we are to use the "fire" analogy, the security researcher would be a building or fire inspector. They point at something and say, "hey, this is an issue, and it could start a fire." We don't expect the fire inspector to help fix the issue beyond making a few suggestions. Similarly, why do we expect the "fixer" to be a security specialist rather than an IT generalist?

 

In my opinion, the "security fixer" skill set should instead belong to our everyday admins, developers, and engineers. They know the environment, and they'll know which fix works best without breaking or compromising productivity, usability, and budgets.

RyanKo

Hi Wh1t3Rabbit and Adrian, 

 

Thanks for your comments, and thanks for tweeting/sharing.

 

I am glad the post gained some traction and made readers think about the issue - and perhaps take a stand, as you did. Honestly, I struggled for a while before posting this draft, as it might become divisive, but I decided to do it, since all I see in the news nowadays are what you would call breakers. While they are undoubtedly important, I have a real concern about the future of the internet for our children's generation. We need more fixers, and I hope we can encourage more people with that mindset.

 

At the same time, there is another important area - secure software engineering. I believe there is a lot of room in schools and higher-learning institutions to teach security-grounded software engineering, and at least to inculcate a culture of secure application development in the minds of fledgling software engineers.

 

I welcome more comments and perspectives, and I look forward to them. :)

 

 

RyanKo

Nadhan

Ryan, Wh1t3Rabbit and Adrian,

 

Notwithstanding the excellent work being done by HP Labs, the onus today still lies on the consumer of Cloud services to ensure the overall security of the solution, as I outline in my post here. Also, the hype and perceived financial benefits of the Cloud are likely to eclipse security concerns. Enterprises deploying solutions in the Cloud must understand and appreciate the disastrous consequences of security being compromised. I have outlined some analogies to reinforce this point.

Argo Pollis

This was a very interesting post, and its subject is spot on in today's environment. I agree with much of what you've said, but I want to go a step further. I think computer security research seems (at least to me) to have come to a dead end. The handwriting was "on the wall" as far back as the 1980s, when researchers began trying to fix security problems by break-and-patch methods, until they realized it was a never-ending chase. Applying abstraction to software allowed us to see how to structure and write good code, and we could sometimes even prove its security properties, too. But flaws in coding and logic always seemed to creep back in. In the late 1980s there was also great hope of beating viruses. I recently had occasion to talk to an exec at a large AV company who admits "they" (the virus creators) have won. I could go down the list of exciting technologies (firewalls, applying strict typing, trusted systems, PKI, etc.) that got their start back then and have made no real difference. As a researcher and designer, I am still faced with roughly the same kinds of security challenges. Electronic commerce on the Internet of today and tomorrow is at best statistically "safe" (it is never secure), and telling customers otherwise is just not honest. As researchers, we ought not to kid ourselves about the nature of what we do.

Sheikh Habib

Ryan, an excellent and timely post, I would say. I have had the same observations for several years, and finally someone has really stood up to ask the question on an open podium. Keep it up!!!

RyanKo

Hi Argo and Sheikh,

 

Thanks for your comments. Argo, you are right. According to a Sophos report, a new (unique) piece of malware is created somewhere in the world every half second. The way we are currently approaching this is definitely a fight we cannot win.
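To put that rate in perspective (my arithmetic, not a figure from the report): one new sample every half second is 2 per second, or 2 × 60 × 60 × 24 = 172,800 unique samples per day.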

 

We need to find a new way to protect our precious IT resources. Let's continue to strive hard and keep working on this cause. Do feel free to retweet or share this blog and invite your friends to discuss their thoughts. I welcome a lively, open discussion, and I hope it brings the security community to a better awareness of a problem that may otherwise grow to an uncontrollable scale.

 

Ryan  

