
Security metrics on flaws detected during architectural review?

I recently attended a private event that included a talk on security metrics. Security metrics can be used to determine whether action X is actually reducing risk Y. Software security metrics typically involve counting the number of defects discovered over time to see whether things are improving. Most of these metrics cover issues discovered during the testing (QA/development) or post-production (pen test) phases and are considered industry-accepted measurements. Typically these security defects are filed in some sort of defect tracking system where they can be pulled up later.
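To make that concrete, here is a minimal sketch of the kind of defect-trend metric described above. All names and data are hypothetical; it simply counts security defects by the phase in which they were discovered, bucketed by quarter.

# Hypothetical export from a defect tracking system.
from collections import Counter
from datetime import date

defects = [
    {"id": 101, "phase": "qa",       "found": date(2008, 3, 14)},
    {"id": 102, "phase": "pen_test", "found": date(2008, 6, 2)},
    {"id": 103, "phase": "qa",       "found": date(2008, 9, 30)},
    {"id": 104, "phase": "pen_test", "found": date(2009, 1, 8)},
]

def quarter(d: date) -> str:
    return f"{d.year}Q{(d.month - 1) // 3 + 1}"

# Tally defects per (quarter, phase); a falling pen-test count over
# time suggests earlier phases are catching more issues.
trend = Counter((quarter(d["found"]), d["phase"]) for d in defects)
for (q, phase), count in sorted(trend.items()):
    print(q, phase, count)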

Anyone who has been involved in application requirements or design knows that security-related issues surface, and are remediated, before a single line of code is written. Flawed business requirements and designs are typically not filed in a defect tracking system; instead, the design or requirements document is simply updated. This led to an interesting discussion about the lack of security metrics covering business and architectural flaws discovered and remediated prior to development. I asked a simple question: 'Is anyone measuring architectural flaws, and if so, how?' Only one person (out of 15 or so highly qualified individuals/companies) was doing this sort of tracking, albeit in an ad hoc manner. The other participants weren't aware of any document or format available to do this either. We all agreed this is an important missing measurement that hasn't been well explored in the infosec community.
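As a sketch of what tracking these could look like, here is a minimal, hypothetical record format for a design-phase finding, filed the same way a code defect would be. The field names are my own invention for illustration, not an established schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignFinding:
    team: str
    requirement: str      # the requirement/design element affected
    category: str         # e.g. "insecure transport", "missing authn"
    proposed_design: str  # what the team originally wanted to do
    remediation: str      # what the design was changed to
    found: date = field(default_factory=date.today)

# Example: a flaw caught and fixed before any code was written.
finding = DesignFinding(
    team="billing",
    requirement="nightly partner data transfer",
    category="insecure transport",
    proposed_design="FTP over the internet",
    remediation="switched to SFTP per design review feedback",
)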

You may be wondering why anyone would measure architectural and business-requirement flaws. Consider ten product teams that each need to transfer data across an internet pipe: three want to use FTP, one wants an HTTP service over SSL, four want SSH, and two have no clue how to accomplish the goal. Here a clear common task must be performed by different groups, with no standard or formal guidance on how it should be handled. By measuring these issues and requirement changes you can identify missing standards, training opportunities, the success rate of secure design review, and the frequency of certain bad designs or group assumptions.
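To illustrate, here is a minimal sketch (with hypothetical data mirroring the example above) of how tallying proposed designs for a common task could surface that missing standard:

from collections import Counter

# Hypothetical proposals from the ten product teams in the example.
proposals = [
    "FTP", "FTP", "FTP",           # 3 teams proposed plain FTP
    "HTTP over SSL",               # 1 team
    "SSH", "SSH", "SSH", "SSH",    # 4 teams
    "no proposal", "no proposal",  # 2 teams had no idea
]

counts = Counter(proposals)
teams = sum(counts.values())
for design, n in counts.most_common():
    print(f"{design}: {n}/{teams} teams")

# When most teams independently improvise answers to the same task,
# that is a strong signal a formal standard (and training) is missing.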

Is anyone aware of any articles, documentation, or metrics pertaining to the recording of flaws identified and addressed during the design/requirements phase? Please reply in the comments form below.

Comments






I'm aware of a number of organizations, including my own, that measure architectural flaws through standards gap analysis. It's not a perfect measurement, either, but it does start to capture some useful data about this metric.


Robert,
I don't have a resource for the question... but I can't tell you how happy I am to see a security professional (read: expert) say the word "defect" instead of "vulnerability"... thank you.

/Raf


@Rafal

Yeah, many security professionals come from the pen tester or consultant side: they come in for a short time, find the issues, and leave. Honestly, I don't understand the industry's disconnect on this.


We actually encourage people to file bugs during the SDL threat modeling process at Microsoft. Our tool (http://msdn.microsoft.com/en-us/security/dd206731.aspx) includes integration with bug tracking systems to make issue tracking easier. We don't do extensive centralized analysis of those bugs at the company-wide level, but product teams do look for commonalities.
