
Re: [CVEPRI] CVE accuracy, consistency, stability, and timeliness

Some of what Steven wrote is quoted here:

>As you've seen in the CD's proposed so far, the default action has
>generally been to MERGE two issues when there's incomplete
>information.  But several people have expressed a preference to keep
>the issues SPLIT if there's no good information available otherwise.
>I agree with David LeBlanc - I think we'll pay a price regardless of
>which default action we choose.  My initial thinking is that a default
>SPLIT action would make the CVE maintenance job a lot easier - but we
>have to consider the impact on the users of CVE.

From the point of view of the two CVE-using CERIAS products that I am
working on, it is a fundamental requirement that the CVE not "invent"
relationships.  If we merge entries by default that shouldn't have
been merged, we imply a relationship that doesn't exist; we mislead,
and the CVE is wrong -- a cardinal sin for scientific research.  If
we split by default, the CVE is merely suboptimal, and from my point
of view it can stay that way.  Suboptimality is much more tolerable
to me than subsequent merges and changes to the CVE.  A footnote
saying that a merge would have been possible between two CVE entries
would be enough.
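The split-by-default policy with footnotes could be recorded as data
rather than as destructive merges.  Here is a minimal sketch (the
entry IDs, descriptions, and structure are all hypothetical, not real
CVE content) of keeping candidate entries separate while noting that
a merge would have been possible:

```python
# Hypothetical sketch: keep candidate entries SPLIT by default, and
# record only a non-binding footnote noting that a merge would have
# been possible, instead of merging the entries themselves.

candidates = {
    "CAN-2000-0001": "Buffer overflow in server FOO via long GET request.",
    "CAN-2000-0002": "Buffer overflow in server FOO via long HEAD request.",
}

# Instead of merging the two entries (inventing a relationship that
# may not exist), a footnote records only the *possibility* of a merge.
footnotes = {
    ("CAN-2000-0001", "CAN-2000-0002"):
        "These entries may describe the same underlying flaw; "
        "a merge would be possible if better information emerges.",
}

def possible_merges(entry_id):
    """Return the IDs that a given entry could potentially merge with."""
    return [other
            for pair in footnotes
            for other in pair
            if entry_id in pair and other != entry_id]

print(possible_merges("CAN-2000-0001"))  # ['CAN-2000-0002']
```

The point of the design is that the footnote is advisory metadata:
downstream tools that key on entry IDs are never broken by a later
merge, and the relationship claim stays clearly separated from the
entries themselves.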

>The fundamental question is: how much effort should be put into making
>sure that CVE entries are accurate and stable, and can we live with
>the extended review process that it would entail (in other words,
>business as usual)?  Or are we willing to accept some inaccuracy and
>additional mapping maintenance in order to allow CVE to remain
>relatively timely?

Accuracy, in the sense of the CVE modeling vulnerabilities at a
consistent level of abstraction, is not the kind of accuracy that I
need; a better word for what I need would be correctness, and that
can be attained without a model.  Correctness ranks 10/10;
consistency in level of abstraction and optimal data compression
ranks 1/10.  In my mind, it is possible to be perfectly correct and
stable with a light review process.  I believe that making an
'accurate' model of vulnerabilities is beyond the mandate of the CVE.

As for error rates, it is hard to give a number because there is no
alternative to the CVE.  What error rate do we tolerate in
dictionaries?  I am much more tolerant of missing entries than of
incorrect information.  I would be cautious about any use I made of
the CVE if I knew that 5% of its entries were incorrect, and I would
probably stop using it if there were more.

"You cannot build a happy private life in a corrupt society anymore
than you can build a house in a muddy ditch."
Anonymous Czech woman, quoted in the 2000 Commencement Address by
Bill Moyers about the American political system

Page Last Updated: May 22, 2007