
Re: Level of Abstraction Issue: Similar Applications, "Same" Vulnerability




Russ wrote:

> <SNIP>
>
> If we take the existing definition:
>
> CVE-00344
> NT users can gain debug-level access on a system process using the
> Sechole exploit.
>
> and apply the "same attack" approach to it, then anything that can allow
> an NT user to gain debug-level access on a system process would be
> considered the same as SecHole. Doesn't matter how they got it, where
> they got it, or what process they got it from.

That was not my intent!  It matters very much (I'd say it is critical) how
they got it, i.e. what was done to make this happen.  (Much less important is
which exploit caused the actions.  There could be several independent
exploits, but if they do the same things and get the same result _then_ it is
the same vulnerability.)

> If we don't do some level of scrutiny, either on the codebase for the
> attack, or the codebase of the vulnerable system, this is all we're left
> with.

Right.  I'd say we need to scrutinize the actions taken to exploit a
vulnerability.  Where we have the exploit, of course let's examine it.  And
what are we looking for?  I'd say the answer to the question "What does it
do?"  Where we do not have the exploit (more of a problem in network
attacks), we use whatever we have (network traffic dumps, for example) to
determine the actions which form the exploit.
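
To make that concrete, here is a minimal sketch in Python (mine, for
illustration only; the field names and sample records are invented): two
exploit reports map to the same vulnerability when the actions they perform
and the result they obtain match, regardless of which piece of exploit code
produced them.

from collections import defaultdict

exploit_reports = [
    {"name": "sechole.exe", "actions": ("patch debug API call",),
     "result": "debug access to a system process"},
    {"name": "sechole2.c", "actions": ("patch debug API call",),
     "result": "debug access to a system process"},
    {"name": "other.c", "actions": ("overflow a service buffer",),
     "result": "debug access to a system process"},
]

groups = defaultdict(list)
for report in exploit_reports:
    # Key on (actions, result), never on the exploit's name or its code.
    groups[(report["actions"], report["result"])].append(report["name"])

for (actions, result), names in groups.items():
    print(actions, "->", result, ":", names)

The first two reports collapse into one vulnerability; the third stays
separate because it reaches the same result through different actions.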

> Unless of course you decide we're only going to enumerate
> network-based attacks and we're using the packet signature to define the
> attack (for the purposes of comparing to other attacks). That would be,
> however, a Common Attack Enumeration, not vulnerability enumeration.
>

Granted, I have a network-centric viewpoint, but I'm trying to be
inclusive.  We want it all.

>
> Even then, we should not preclude the observation of available details
> simply because they may not be widely available. The evolution of the
> enumeration of species has been based on observable details, and as more
> details became available, definitions were revised accordingly
> (sometimes correctly, sometimes incorrectly).
>
> While we may not be at the stage of Socrates, we're nowhere near DNA
> testing or even the Dewey Decimal System.
>
> If you want something that's simple, has fewer entries, and is less
> subject to criticism, I'll shut up.
>

That is not my intent.

>
> If instead you want to actually enumerate unique vulnerabilities, then
> to make the decision in the beginning to preclude some verifiable data
> (not conjecture or opinion) from the determination process is simply
> flawed. It's just as flawed to presume there is a network signature to
> see.
>
> Out of curiosity, what does the group see as being an "average"
> Candidate announcement? I can see us having to "guess" within the first
> few days, but after that (and likely before we reach CVE status),
> details of "most" vulnerabilities are going to include codebase details.
>

I'd agree.  I'm not suggesting we preclude any useful information.  What I
am suggesting is that we not attempt to use the codebase of the
application/operating system as a differentiator between vulnerabilities.  A
simple example: if I can send a packet which crashes IOS, NT, Solaris, and
the Mac, then to me that is one vulnerability.
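
As a sketch of what that means for the enumeration (again, my illustration;
the packet description and field names are invented), crash reports from all
four platforms triggered by the same malformed packet fold into a single
candidate entry, keyed on the attack rather than on the affected codebase:

from collections import defaultdict

crash_reports = [
    {"platform": "IOS",     "trigger": "malformed IP options packet"},
    {"platform": "NT",      "trigger": "malformed IP options packet"},
    {"platform": "Solaris", "trigger": "malformed IP options packet"},
    {"platform": "MacOS",   "trigger": "malformed IP options packet"},
]

candidates = defaultdict(set)
for report in crash_reports:
    # One candidate per trigger; the platform list is descriptive detail only.
    candidates[report["trigger"]].add(report["platform"])

for trigger, platforms in candidates.items():
    print("one candidate:", trigger, "affecting", sorted(platforms))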

-- 
Bill Hill
INFOSEC Engineer, The MITRE Corporation
1820 Dolley Madison Blvd, McLean, VA 22102
bill@mitre.org | 703-883-6416
