
Re: PROPOSAL: Cluster 20 - DESIGN (27 candidates)



Gene Spafford proposed:

>"Any software that functions according to its specification, and whose
>correct functioning is within the bounds of a common security policy
>(but not necessarily *every* policy) will NOT be considered a
>vulnerability for inclusion in the CVE."
>
>Thus, the finger program would not be a vulnerability so long as all
>of its functions are correct and known.   We might allow its use in
>an academic environment, so it is not a vulnerability.


When we were originally struggling with this issue within MITRE, we
had a number of debates regarding whether finger and the like were
"vulnerabilities."  There was strong agreement among most MITRE
personnel (more than 10) that they *should* be considered
"vulnerabilities."  Since those personnel were often involved in doing
security work for MITRE sponsors (i.e. various government
organizations), it's reasonable to say that their perspectives reflect
a perspective of "vulnerability" that's common in the government.
(Steve Northcutt, any comments?  Vendors with government customers?)

At one point I had advocated something along the lines of what Spaf
described, but I became convinced that it was too narrow a
perspective.  While that approach is theoretically pure, it has
practical complications: it makes data sharing harder to foster, and
it limits the range of perspectives for which the CVE is useful.

So, we adopted a more liberal perspective (informally, I've called it
a "kitchen sink" approach), which could be described as:

"If the state of the computing system is in violation of some valid
security policy that is commonly used, then it is a CVE
vulnerability."

Then comes the question of what a "valid" security policy is, and
what we mean by "commonly used."  Certainly, most security policies
imply that "nobody should be able to get administrator privileges
unless they are given the appropriate password."  Running finger and
other such services is not universally regarded as a vulnerability,
but it is considered one in some environments (e.g., DMZs that are
exposed to the Internet, or environments in which access is tightly
controlled, such as classified networks or human resources/finance
departments).

So the approach was: if something is a vulnerability according to some
*reasonable* security policy that is commonly used, then it should go
into the CVE.  The question then becomes, what types of security
policies should be represented here?  I came up with the notion of a
"Universal" policy (e.g., nobody should get admin privileges when they
don't have the password) versus a "Conditional" policy, under which
whether something is a "vulnerability" depends on the conditions in
the specific enterprise.  To me, it's reasonable that the CVE should
include anything that violates a Universal policy, or a typical
Conditional policy.
This concept needed refinement, which is why you don't see it in the
tech paper.
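
A rough sketch of this rule, in Python (the names below are my own
invention for illustration; none of them come from the tech paper):

    from dataclasses import dataclass

    @dataclass
    class Policy:
        name: str
        universal: bool              # violated in *any* environment
        commonly_used: bool = True   # matters for Conditional policies

    @dataclass
    class Candidate:
        name: str
        violates: list               # Policy objects this item violates

    def include_in_cve(candidate):
        # The "kitchen sink" rule: include an item if it violates a
        # Universal policy, or a commonly used Conditional policy.
        return any(p.universal or p.commonly_used
                   for p in candidate.violates)

    no_free_admin = Policy("no admin privileges without the password",
                           universal=True)
    no_info_leak = Policy("no account info exposed to the Internet",
                          universal=False, commonly_used=True)
    unusual_rule = Policy("no ICMP, ever", universal=False,
                          commonly_used=False)

    overflow = Candidate("daemon overflow yielding root", [no_free_admin])
    finger = Candidate("finger service running", [no_info_leak])
    ping = Candidate("ping enabled", [unusual_rule])

    include_in_cve(overflow)  # True: violates a Universal policy
    include_in_cve(finger)    # True: violates a common Conditional policy
    include_in_cve(ping)      # False: its only policy is rarely used

The point of the sketch is only that the inclusion test is a
disjunction over many policies, not a single yardstick.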

The result of this broad approach is that the *entire* CVE may not be
useful to most users, but we want to make sure that it includes
various subsets of information, each of which is highly useful to some
subset of users.  I alluded to this in the "Some Implications of
Content Decisions" section of the tech paper I sent to Board members,
where I talked about sublists.
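
Continuing the sketch above (again, purely illustrative), a sublist
is just a filter over the full CVE, keyed to the policies a given
enterprise actually enforces:

    def sublist_for(cve_entries, enforced_policies):
        # One enterprise's slice of the CVE: the entries that violate
        # at least one policy that enterprise enforces.
        enforced = {p.name for p in enforced_policies}
        return [e for e in cve_entries
                if any(p.name in enforced for p in e.violates)]

    full_cve = [overflow, finger]

    # An academic site that permits finger still wants the overflow:
    sublist_for(full_cve, [no_free_admin])
    # -> [overflow]

    # A classified network enforcing both policies gets both entries:
    sublist_for(full_cve, [no_free_admin, no_info_leak])
    # -> [overflow, finger]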

- Steve
