
Re: [CVEPRI] CVE accuracy, consistency, stability, and timeliness



On Fri, 16 Jun 2000, Steven M. Christey wrote:

> Pascal Meunier asked:
> 
> >if we have to analyze the nature of a vulnerability to make a CVE
> >entry, haven't we gone too far with respect to the stated goals of the
> >CVE?
> 
> Bill Fithen added:
> 
> >We should guard against creating a situation where in-depth analysis
> >is required... just decide if a thing is one thing or two things.
> >...Pursuing perfection too early may mean the introduction of delays
> >that make the eventual acceptance of an entry less valuable merely
> >because of the delay

My comments are based on my own direct experience, as well as that of
a dozen or so others who have been engaged in the vul analysis area
for over a decade. Much of my historical knowledge comes from
anecdotes throughout that period.

As one might imagine, we have been wrestling with most of these issues
all along. In fact, at various points in our history, we have found
dealing with these issues so organizationally crippling that our basic
mission was in jeopardy. Nearly all of you have seen periods during
which CERT was less effective than at other times. These issues were
the central catalyst for those fluctuations in effectiveness.

> We've been doing deep analysis of CVE items at MITRE, because I have
> believed that it's important for CVE to be as consistent, accurate,
> and stable as possible.  And there have been delays as a result.  Note
> that there has been internal disagreement about this approach, and
> we've been revisiting this issue in the past few weeks.

At least three times per year, CERT gets embroiled in this debate
internally. Even when we sit in the room together saying that we know
better than to have this debate again, we do. We just cannot help it.
This issue holds so many conflicts for so many stakeholders that it
cannot be left alone for long.

In my opinion, CVE only has three ways to go on this:

1. Let one organization's "beliefs" dominate and everyone else just
   goes along with it. Even this may be problematic, because even the
   "one" organization is unlikely to think with one mind on this, and
   even if it does, it is unlikely to come up with an approach that
   does not change over time.

2. Adopt a pervasive expectation that this issue is hard and will be
   dealt with and redealt with, sometimes with disturbing consequences
   (such as vast reorganizations of existing representations). This
   clearly has negative aspects to it, but it has the great virtue of
   avoiding choice #3.

3. Die. This is a "killer" issue. Pascal hints at the dire
   consequences of dealing with it ineffectively when he says that
   under certain (accuracy-related) conditions, he will cease to use
   CVE. In all honesty, we ALL have certain criteria that will cause
   us to ignore CVE and proceed independently. Some may be "academic",
   some may be "principle-based", but most would probably be purely
   economic--does working WITH CVE make me more money than working
   WITHOUT it?

I assert that to the extent that deep analysis is required before a
new entry can be added to CVE (perhaps even as a candidate), then, as
David says, we no longer have a list; we have a vulnerability
database. Even if we choose not to record the results of the deep
analysis in the "list", it is there implicitly in the discussions and
the response time.

If MITRE is going to engage in "deep" analysis, I suggest that it must
necessarily adopt a split personality. The CVE "management" should NOT
be influenced by MITRE's deep analysis activities any more than by any
other CVE board member's similar activities. And it is clearly wrong
for a CVE candidate to be held up waiting for MITRE's internal
analysis when other board members are unaware that that is what is
going on. If a candidate were held up because ISS or Microsoft weren't
doing something fast enough in the eyes of the CVE management, the
board as a whole would probably all know about it (and probably even
understand and sympathize). Openness is required.

> Various members of the CVE content team have conducted some deep
> analysis to try and resolve some issues, e.g. the lpr problem that
> L0pht announced in February that's still around after its initial
> discovery several years ago.  Linux problems, and Unix problems in
> general, can also be troublesome because there are so many different
> distributions that fix the same problem at different times.

I agree that at times deeper analysis is required. But it seems to me
that it was never the mission of the CVE team to DO that analysis.
That is what the other board members are supposed to be doing. And if
we aren't doing it to the satisfaction of the CVE team, then we need
the CVE team to say so to us.

> This deep analysis has been a significant bottleneck with respect to
> creating candidates and distinguishing between entries.  In some
> cases, the content team may spend several hours researching a single
> issue that could be one candidate or several.  The deep analysis may
> involve poring through various information sources, patches, exploits,
> software change logs, etc. - i.e. the type of research that I assume
> people do for full-fledged vulnerability databases.  With 10,000
> legacy submissions for us to convert into candidates, I don't believe
> we have the resources to do it all if we have to perform deep analysis
> on 10% of them.  And in the end, as people have pointed out, you will
> never be completely sure of accuracy, because there's so much
> incomplete information.

That is why I say CVE is being handled more like a database than a
list right now.

> My approach has been that if the CVE list is to be a "standard," it
> should be both stable and reliable.  (I say "CVE list" to distinguish
> it from the candidates list, which we already accept to be
> unreliable.)  Maintainers of proprietary vulnerability databases
> generally have more flexibility to change their own entries.  With
> CVE, a change could have an effect on many different consumers.
> 
> So I've been careful to avoid creating candidates that might be
> duplicates of other candidates or entries, careful to be consistent
> with respect to level of abstraction, and much more careful not to add
> any candidate to the CVE list if it looks like it's a duplicate of an
> existing entry.  If we have to change the level of abstraction of CVE
> entries very often, that becomes a maintenance problem for people who
> maintain CVE compatible products - or, if the mappings aren't kept up
> to date, CVE compatibility becomes less useful to the consumers of
> those products.  Changes in CVE will also have an impact on the
> quality of any quantitative analysis that uses CVE names as a way of
> normalizing the data.

Stability is the enemy of learning. As the CVE list grows, so does the
status quo. Eventually, the desire to achieve stability can only be
satisfied at the expense of choosing to suppress new knowledge that
invalidates past entries. To avoid this, we must be willing to change
the past when we see that it was wrong.

> This is one of the main reasons why the CD's are as detailed and
> "strict" as they are.  They attempt to make CVE as stable and
> consistent as possible, as early in the process as possible.  They
> attempt to minimize the amount of modification to existing CVE
> entries, and to minimize the amount of work for Editorial Board
> members and maintainers of CVE-compatible products.  On the other
> hand, it is very labor-intensive and results in delays.

In my opinion, CVE's quality cannot be achieved or maintained with
strict rules. The only way to achieve high, long-term quality is to
make sure that the critical decision makers (splitters, mergers, etc.)
are thoroughly knowledgeable about the technologies and the
capabilities of the organizations represented by the board members.
For their own sanity, they will undoubtedly have to adopt some
consistent (at least in their own minds) philosophy (e.g., make it
"correct" (a la Pascal the professor), make it practical (a la David
the toolmaker), ...). Trying to satisfy all potential users of CVE
will result in a schizophrenic philosophy that will drive those
decision makers crazy.

> Perhaps a portion of the deep analysis can rely more heavily on the
> expertise of Board members.  If 2 candidates look similar, they could
> be tagged to indicate that they need deeper analysis.  Anybody who has
> some good insight into the problem could provide feedback; if nobody
> has enough information, maybe we move to a default position of
> splitting or merging as appropriate.  We could, as has been suggested,
> annotate potentially related CVE entries (or candidates) and make that
> information available to the few individuals who would need it.

And MITRE's own deep analysis guys are just another one of the groups
with the knowledge the CVE team needs.

> Another way of minimizing the effects of poor information would be to
> involve the software vendors as much as possible.  This could be done
> by bringing major software vendors onto the Editorial Board, and/or in
> some consulting role; but with minor vendors, it could be an
> especially labor intensive job that could duplicate some of what
> others in the community are already doing.  And while insufficient
> vendor confirmation of security problems may be a significant problem,
> maybe CVE isn't the right place to solve this.  (Note that we are
> looking to add more software vendors to the Board, so if you have any
> recommendations, let me know.)

In a sense, this is "all" CERT does with respect to vulnerability
analysis. More important to us than anything else is involving the
vendors in correcting their own products. Rarely can we ourselves
instruct a vendor in how to correct its products. And we try very hard
not to leave vendors out of a vul analysis. The only way they get left
out is when they opt out themselves. Other orgs on the board are
already doing the same thing. I don't think we need a CVE-specific
approach to this; we just need to use the information we are already
collecting.

And if a situation arises where no board member is working on a
potential vul in a product with the product's vendor, then perhaps
that vul isn't worthy of going into CVE at all. After all, if all of
us have decided to ignore it (for our various reasons), then how could
it ever get voted into CVE anyway?

> As you've seen in the CD's proposed so far, the default action has
> generally been to MERGE two issues when there's incomplete
> information.  But several people have expressed a preference to keep
> the issues SPLIT if there's no good information available otherwise.
> I agree with David LeBlanc - I think we'll pay a price regardless of
> which default action we choose.  My initial thinking is that a default
> SPLIT action would make the CVE maintenance job a lot easier - but we
> have to consider the impact on the users of CVE.
> 
> So we need to have some feedback from people who have CVE compatible
> products, to understand the potential impact of moving away from deep
> analysis.  I estimate that a maximum of 15% of all CVE entries could
> ultimately require a change in the level of abstraction as new
> information is discovered.  Realistically, it may be more like 5%
> (because most would be corrected in the candidate stage, and/or we may
> decide to live with the "noise" in the absence of good information).
> Note that I got the 15% figure based on the percentage of candidates
> that are affected by content decisions related to abstraction, and of
> course these figures can't really be measured anyway.
> 
> So to CVE-compatible database and tool vendors, and anyone who expects
> to be conducting "CVE-based" analysis - is a 15% error rate tolerable?
> How about 10% or 5%?

What does "error" mean here? All that a tool or database vendor needs
from CVE is the number (after all, that's all there is supposed to
be). So long as the number continues to exist, the tools and databases
will continue to work, and CVE is "error-free". So, all one need do
when a split or merge occurs is make sure the old numbers don't go
away--they just point to the new numbers that currently represent what
the old numbers used to represent. This appears to be strictly a
bookkeeping issue to me; it does not seem to be a content issue at
all.
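
To make the bookkeeping concrete, here is a minimal sketch (in
Python) of the kind of forwarding table I have in mind. The CVE
numbers and function names are invented for illustration; nothing
here is part of CVE itself:

    # Hypothetical forwarding table: a deprecated CVE number never
    # disappears; it points to the number(s) that now represent it.
    SUCCESSORS = {
        "CVE-1999-0999": ["CVE-2000-0101", "CVE-2000-0102"],  # a split
        "CVE-1999-0998": ["CVE-2000-0103"],                   # a merge
    }

    def resolve(cve_id):
        # Follow forwarding pointers until only current numbers remain.
        if cve_id not in SUCCESSORS:
            return [cve_id]  # still a current number
        current = []
        for successor in SUCCESSORS[cve_id]:
            current.extend(resolve(successor))
        return current

    # A tool that stored the old number still gets a usable answer:
    print(resolve("CVE-1999-0999"))  # ['CVE-2000-0101', 'CVE-2000-0102']

A tool or database that recorded the old number keeps working without
any content-level change; only the lookup table grows.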

Further, it is unlikely that tool and database vendors will adopt the
same level of abstraction as CVE in all cases anyway, so they are
already going to have to deal with n-to-m mappings between their
products and CVE. For example, one tool checks the version of bind on
a system, finds the wrong version, and concludes that this indicates
17 CVE vuls are present. Another tool uses an "exploit script"-based
approach (rather than bind version) and finds 8 CVE vuls in bind. The
only important point is that these two tools use the same CVE numbers
to report any vuls they find in common. Otherwise, CVE has no role.
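
The bind example can be sketched the same way. The CVE numbers below
are made up; the point is only that the shared names make the two
reports directly comparable:

    # Hypothetical findings from two tools at different abstraction
    # levels. All CVE numbers are invented for illustration.
    version_tool = {"CVE-2000-%04d" % n for n in range(1, 18)}  # 17 vuls
    exploit_tool = {"CVE-2000-%04d" % n for n in range(1, 9)}   #  8 vuls

    # The only interoperability CVE promises: vuls found in common
    # carry the same names, so the two reports can be intersected.
    common = version_tool & exploit_tool
    print(len(common), sorted(common))

Neither tool has to adopt CVE's level of abstraction internally; each
just has to emit the shared names in its reports.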

> Perhaps we can minimize the amount of serious modifications to CVE
> entries (e.g. SPLITS, MERGES, or deprecations) by only performing them
> a few times a year, say in each reference version, to minimize the
> impact on maintainers and users.

I would propose that a continuous low level of split/merge/deletion
activity is better than periodic new "releases". If you do new
releases, you're going to have vendors asking you to keep old releases
of CVE available to stay consistent with their product release cycles.

> The fundamental question is: how much effort should be put into making
> sure that CVE entries are accurate and stable, and can we live with
> the extended review process that it would entail (in other words,
> business as usual)?  Or are we willing to accept some inaccuracy and
> additional mapping maintenance in order to allow CVE to remain
> relatively timely?

In a sense, we must accept "some inaccuracy", because we can only know
what is correct by looking back from the future. The question in my
mind is merely one of how we represent history in CVE. There will be
splits/merges/deletions and possibly more extensive revisions. The
rate at which they occur may not be relevant.

Bill
