[CVEPRI] Adding Confidence Levels to CVE
This is a long email with a bit of background. The short story is, I
propose that for each CVE candidate or entry, we record a level of
confidence that the CVE item is real. This approach would solve a
number of issues that impact the CVE Initiative.
As many of you know from Editorial Board discussions in the past few
months, different Board members want different things from CVE in
terms of its quality and accuracy (recall some discussions on the
Board list during this summer, as well as various Board meetings).
One camp wants absolute proof that a security problem is real before
making it an official CVE entry. Another camp wants us to quickly
agree on a name and "fix" things later if a vulnerability report is
found out to be wrong. This directly impacts the timeliness and
accuracy of CVE, as well as what we do at MITRE. It will
inconvenience one camp as much as it will satisfy another.
In addition, users of vulnerability information sources are often
confused about the quality of that information, or make bad
assumptions about it.
They may incorrectly assume that the source has validated every item
that they produce. For example, many people assume that if they see
an item reported in a database, that means that the database owners
have verified that item. I had this assumption until I started
talking directly to the information sources. In addition, many items
don't have clear vendor acknowledgement of the problem. As I reported
previously, 50% of all candidates don't have acknowledgement. A more
detailed study of over 100 non-acknowledged candidates indicated that
only about 10% of them had any acknowledgement by the vendor, if you
spent a lot of time looking for it. In some cases, however, those
candidates were reported by reliable sources but still didn't have any
vendor acknowledgement.
Since there is a community-based review process for CVE, it is highly
likely that CVE consumers also assume that each entry has been fully
validated. However, that is not necessarily the case, as the only
requirement for inclusion in CVE is that the item get enough ACCEPT
votes. (Of course there's also avoiding duplication, making sure the
item passes the vulnerability or exposure definition, and ensuring
that there are no content decisions that state that the item shouldn't
be in CVE). I would not be surprised if some CVE entries are not real
but "become real" by virtue of showing up in a large number of tools
and databases. CAN-1999-0205 is a good example of such an alleged
vulnerability that's in a lot of products but might not be real. (See
Some changes have been made to the voting process so that voters could
record a basic notion of confidence that a security issue is real.
See the following for a refresher:
CVE-BOARD:20000919 [CVEPRI] Important changes to CVE candidates and voting
However, the current approach is still a little "clunky" for voters,
whether they're using the CVE web site or voting in email messages.
Also, the voters' reasons for acceptance aren't being publicly
recorded until we can resolve the concerns that some members have
about revealing too much information about the amount of research that
they've done for their own products.
I propose that we extend CVE to include a confidence level. This
could be a voter-determined level of confidence that a particular CVE
item is real.
Recording the confidence level for a CVE entry (or candidate) would
have several benefits that are directly related to CVE:
1) It could help to satisfy both the "fast-and-noisy" and the
"slow-and-validated" CVE camps, whose preferences are mutually
exclusive. Regardless of which approach is chosen, it would
negatively impact how one of those camps uses CVE. If CVE had
confidence levels, then slow-and-validated advocates could use the
highest confidence levels to extract only the portions of CVE that are
"proven correct," and the fast-and-noisy advocates would be able to
see how much noise they may be dealing with.
2) It could make the voting and review process much more efficient.
Voters could more easily vote to ACCEPT a candidate even if they
haven't replicated it themselves; they could just record a lower
confidence level, check that the candidate reflects the initial report
accurately, and vote to ACCEPT. I've noticed that the number of
NOOP's has increased significantly since we've updated the voting
guidelines, which in turn is delaying the whole process even further.
3) Confidence levels would make clear to CVE users the level of noise
that's present in CVE, and they would reduce the number of incorrect
assumptions that are out there.
4) CVE diligence levels, as currently written, are difficult to
describe easily. This is becoming more important as more unknown
people ask MITRE for candidates to include in their initial public
announcement. I believe that diligence levels could be more naturally
described in terms of the level of confidence of items that the
candidate requester has publicized in the past.
5) It could make it easier for me as the CVE Editor to decide when to
ACCEPT candidates. If a candidate has high confidence, then I could
ACCEPT it more readily in the minimum review period; if it has low
confidence, then it might be reasonable to delay the item a little
longer.
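To make benefit 1 concrete, here is a minimal sketch of how a
"slow-and-validated" consumer might filter a CVE feed by a recorded
confidence level. The field names, the three levels, and the per-item
ratings are my own invention for illustration; none of this is part of
CVE today.

```python
# Hypothetical sketch: filter CVE items by a recorded confidence level.
# Field names and confidence levels are illustrative assumptions only.

CONFIDENCE_ORDER = {"low": 0, "medium": 1, "high": 2}

def filter_by_confidence(items, minimum="high"):
    """Return only the items whose confidence meets the minimum level."""
    floor = CONFIDENCE_ORDER[minimum]
    return [item for item in items
            if CONFIDENCE_ORDER[item["confidence"]] >= floor]

items = [
    {"name": "CAN-1999-0205", "confidence": "low"},   # disputed report
    {"name": "CVE-1999-0067", "confidence": "high"},  # widely acknowledged
]

# A "slow-and-validated" consumer keeps only high-confidence items,
# while a "fast-and-noisy" consumer would pass minimum="low".
validated = filter_by_confidence(items, minimum="high")
```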
There are also some community-wide benefits (independent of CVE) for
confidence levels that the Board might consider:
1) It provides consumers of vulnerability information with a tool to
reduce information overload. Several participants in the recent
eWeek/Guardent vulnerability summit expressed a need for knowing which
vulnerability reports could be "trusted."
2) It could open a dialog among security professionals about how they
determine their own confidence in specific vulnerability reports.
(For example, it would be interesting to know why security vendor X
has high confidence in something while vendor Y has low confidence).
3) Confidence levels could become the basis of a "web of trust" (or
"web of confidence") that allows individuals to use third party
confidence ratings to filter information, without having to go through
the labor-intensive effort of verifying every report themselves. In
addition, confidence levels could be used to "evaluate" different
people/organizations that report vulnerabilities and establish a
simple notion of peer review.
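A "web of confidence" like the one in point 3 might boil down to
weighting each rater's reported confidence by how much the consumer
trusts that rater. The sketch below is purely illustrative; the rater
names, trust weights, and scores are all invented, and a real scheme
would need much more thought.

```python
# Hypothetical sketch of a "web of confidence": combine third-party
# confidence ratings, weighted by how much we trust each rater.
# Rater names, trust weights, and scores are invented for illustration.

def combined_confidence(ratings, trust):
    """Weighted average of per-rater confidence scores in [0, 1]."""
    total = sum(trust[rater] for rater in ratings)
    if total == 0:
        return 0.0
    return sum(score * trust[rater]
               for rater, score in ratings.items()) / total

trust = {"vendor_x": 0.9, "vendor_y": 0.4}      # how much we trust each rater
ratings = {"vendor_x": 1.0, "vendor_y": 0.25}   # their confidence in one report

score = combined_confidence(ratings, trust)
```

A consumer could then apply a single threshold to `score` instead of
re-verifying every report by hand.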
If the Board agrees that confidence levels could benefit CVE and the
community as a whole, then we would need to decide how to disseminate
confidence information. For example, should CVE itself be extended to
include a confidence level? Or should it be "physically" separated
from the official CVE and provided as an alternate resource on the CVE
web site, a la reference maps and CVE version difference reports? And
how would confidence be recorded as part of the voting process?
My next email will have specific suggestions for confidence levels
that could be used, within CVE or by others in the community.