
Re: Problematic assignments for subpar reports via CVE request form



As a quick follow-up to this discussion, I noticed today that Lin Wang seemingly deleted all his reports on 2017/10/31:
https://github.com/wlinzi/security_advisories/commit/d571e1c1e28294833a96a070b494a7eeeb6fd516

Unless I overlooked something, they’re all blank, e.g.:
https://github.com/wlinzi/security_advisories/tree/master/CVE-2017-14302

Naturally, we cannot guarantee or expect that vulnerability reports will exist indefinitely, but it is curious when they are deleted shortly after publication from the only source where they were available. Unless this was a mistake, it looks from his GitHub repository like he decided, for unknown reasons, to clean out the whole repository so the old reports can no longer be read, putting up blank dummy entries instead. Perhaps he suddenly questioned the quality or validity of his reports.

I think this adds to the concern about assigning CVEs for any of his reports going forward. Many of them were questionable to begin with and have now also been deleted, which is a problem since they were the only sources referenced in the assigned CVEs. We now have 330 CVEs from 2017 with dead references from a single unreliable vulnerability reporter. Unless he plans to republish the information, this makes third-party vetting even harder. If CVEs are assigned to any of his future reports, we should at least consider capturing public copies of them elsewhere.

/Carsten


On Fri, Nov 3, 2017 at 5:06 PM, Coffin, Chris <ccoffin@mitre.org> wrote:

Carsten,

 

Below are some answers and thoughts on your numbered questions.

 

  • 1) Based on the above, it appears that MITRE already has some process for challenging vulnerability reporters. Could you please elaborate on the current process?

The CVE analysts making CVE ID assignments on behalf of the MITRE CNA have many years of experience in vulnerability analysis and use this experience, coupled with the current CVE counting rules, to determine whether a CVE ID should be assigned. The CVE analysts in this role are able to detect hundreds of different scenarios in which a submission does not "provide a demonstrated negative impact for the bug" (CNT2.2A). A simple step-by-step checklist does not exist for this process.

 

  • 2) What made MITRE decide to challenge Lin Wang to provide justification in certain cases? Before I brought this issue up, there was nothing to suggest MITRE was skeptical of assigning CVEs to Lin Wang; even the very first response from MITRE made no mention of it. 

The analyst wanted to know whether a crash was a negative impact for the use cases of a product, and also wanted to know whether specific file types could be presented to a product in a way that crossed a privilege boundary.

 

For example, we would not assign a CVE ID for a "benign crash" in a command-line program that is intended to read an image file, do something with it, and then stop execution. If a product is not limited to that type of command-line usage, and is instead a long-running desktop application, then it is possible that all crashes (for any reason) have either a negative availability impact or a negative confidentiality impact (or both). It is also possible that a different long-running desktop application, if it has sufficiently limited use cases, would never be characterized as vulnerable on the basis of a "benign crash."
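
To make the distinction concrete, here is a purely hypothetical sketch (in C, not drawn from any reported product) of the kind of short-lived command-line tool described above. A malformed file can make it crash, but the process was about to exit anyway, so the crash by itself does not demonstrate a negative impact across a privilege boundary:

    /* benign_crash.c - hypothetical short-lived CLI image reader.
     * A malformed input file can make it crash, but the process was
     * about to exit anyway, so no availability or confidentiality
     * impact follows from the crash alone. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct img_header {
        char     magic[4];   /* e.g. "IMG1" (made-up format) */
        unsigned width;
        unsigned height;
    };

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        FILE *fp = fopen(argv[1], "rb");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        struct img_header hdr;
        /* Missing check of the fread() return value: a truncated file
         * leaves hdr partially uninitialized ... */
        fread(&hdr, sizeof(hdr), 1, fp);

        /* ... and an attacker-chosen size can make malloc() fail, so
         * memset() dereferences NULL and the tool crashes. It was going
         * to print one line and exit regardless. */
        unsigned char *pixels = malloc((size_t)hdr.width * hdr.height);
        memset(pixels, 0, (size_t)hdr.width * hdr.height);

        printf("%ux%u image read\n", hdr.width, hdr.height);

        free(pixels);
        fclose(fp);
        return 0;
    }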

 

Lin Wang also sent us the reports about DLL files that became CVE-2017-15790 through CVE-2017-15803. We initially challenged all of them because of insufficient information about the interaction between the base product and the DLL file. For example, it is not a vulnerability if the threat model is an attacker who already has the ability to choose what code exists in a DLL file, and then cause the product to execute code from that DLL file. Lin Wang told us that the threat model for DLL files was the same as the threat model for any type of image file, because the product is only trying to render an icon picture contained within the DLL file.
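
As a purely illustrative sketch (again hypothetical, in C, and not the code of any product Lin Wang reported on), the two threat models can be contrasted like this: rendering the icon embedded in a DLL only parses resource data and never executes the DLL's code, whereas actually loading the DLL for execution means the attacker who supplied the file already controls the code that runs:

    /* icon_preview.c - hypothetical sketch of the two threat models for
     * an untrusted DLL on Windows. Link with shell32.lib and user32.lib. */
    #include <windows.h>
    #include <shellapi.h>

    int wmain(int argc, wchar_t **argv)
    {
        if (argc != 2)
            return 1;

        /* Threat model 1: treat the DLL purely as a resource container,
         * the way a file manager renders its embedded icon.
         * ExtractIconExW() parses the icon resources without executing
         * any code from the DLL, so a malformed file can at worst crash
         * this parsing path - the same situation as a malformed image. */
        HICON icon = NULL;
        if (ExtractIconExW(argv[1], 0, &icon, NULL, 1) > 0 && icon != NULL) {
            /* ... render the icon here ... */
            DestroyIcon(icon);
        }

        /* Threat model 2 (not a vulnerability in the loading product):
         * actually loading the DLL for execution runs its DllMain, i.e.
         * the attacker who supplied the file already chose the code.
         *
         *     HMODULE mod = LoadLibraryW(argv[1]);
         */

        return 0;
    }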

 

  • 3) Which specific CVEs did MITRE challenge Lin Wang on? It is good to hear it was done, but we consider it important to understand for which CVEs additional vetting was performed by MITRE.

See the prior answer.

 

  • 4) What specific details did Lin Wang provide that were considered justification for the CVE assignments? I presume PoCs from the wording. If so, was the fact that PoCs were provided all that was required, or to what extent were they tested and the crashes analyzed? Did you just confirm a crash and, based on that, consider the reports legitimate with more severe impacts plausible? Or was time spent determining that these could indeed have a more severe impact as speculated (randomly guessed) by the vulnerability reporter?

A PoC was provided privately in each and every case. It was confirmed in each case that the CVE was not identical to the PoC for any other CVE.

 

 

Chris

 

From: Carsten Eiram [mailto:che@riskbasedsecurity.com]
Sent: Tuesday, October 31, 2017 12:40 AM
To: Coffin, Chris <ccoffin@mitre.org>
Cc: Art Manion <amanion@cert.org>; cve-editorial-board-list <cve-editorial-board-list@lists.mitre.org>
Subject: Re: Problematic assignments for subpar reports via CVE request form

 

On Thu, Oct 26, 2017 at 3:36 PM, Coffin, Chris <ccoffin@mitre.org> wrote:

Carsten,

 

In answer specifically to the CVE IDs assigned by the MITRE CNA for Lin Wang, the rationale for a CVE ID assignment often depends on non-public information. It could be, but isn't always, a non-public PoC. There have also been cases where we have requested that Lin provide justification for his vulnerabilities. In each case he has provided the justification required. Whether or not Lin makes this additional information public would be up to him. Maybe we can nudge Lin to provide more of this information (including PoCs) in the future once the vulnerability has been fixed by the vendor.

 

 

I am aware that MITRE sometimes receives additional details in private, which is good. However, I am also confident based on our analysis that this particular vulnerability reporter would not have been able to provide MITRE with any additional details or PoCs for many of these issues that would justify the CVE assignments. This is evident from the crash reports and extra analysis we conducted for VulnDB.

 

Discussing Lin Wang has served its purpose of bringing attention to the bigger, underlying problem. I won’t spend much more time debating him specifically. We have evidence that the CVE assignments tied to many of his reports are invalid, while MITRE seems, overall, to consider them fine. There is little reason to discuss that any further.

 

Instead, I'd appreciate feedback on the following questions related to this:

 

1) Based on the above, it appears that MITRE already has some process for challenging vulnerability reporters. Could you please elaborate on the current process?

 

2) What made MITRE decide to challenge Lin Wang to provide justification in certain cases? Before I brought this issue up, there was nothing to suggest MITRE was skeptical of assigning CVEs to Lin Wang; even the very first response from MITRE made no mention of it. 

 

3) Which specific CVEs did MITRE challenge Lin Wang on? It is good to hear it was done, but we consider it important to understand for which CVEs additional vetting was performed by MITRE.

 

4) What specific details did Lin Wang provide that were considered justification for the CVE assignments? I presume PoCs from the wording. If so, was the fact that PoCs were provided all that was required, or to what extent were they tested and the crashes analyzed? Did you just confirm a crash and, based on that, consider the reports legitimate with more severe impacts plausible? Or was time spent determining that these could indeed have a more severe impact as speculated (randomly guessed) by the vulnerability reporter?

 

 

 

The CVE team has been able to pick out the following list of separate issues and questions in this thread. It might help if we separate and discuss them individually.

1. Should we ban requesters when they have repeatedly provided questionable vulnerability details when requesting a CVE ID?

  1. MITRE will work with the Board to decide on a path forward. One concern is that banning a requester completely might send a negative message to the community in general and might be something we should avoid.
  2. Do other Board members feel that banning requesters is ever appropriate, or should we just put more pressure on them to justify their work when these cases crop up?

 

My view is that the CNA should initially ask for more information and proof when dealing with reports that are considered questionable. The key is then being able to spot such reports. If the vulnerability reporter is unable or unwilling to provide more information, no CVE should be assigned for that specific issue. If the same vulnerability reporter later requests CVEs for additional questionable reports and still fails to provide sufficient information, the person would then be put forward for a discussion about banning.

 

In such cases, this process would ensure that the CNA has been forthcoming and has tried to work with the vulnerability reporter, but to no avail. While some vulnerability reporters may complain, I’m sure the larger community and, more importantly, CVE consumers will appreciate it and view the process as fair.

 

If CVE would rather avoid ruffling a few feathers and assigns CVEs to questionable or even obviously invalid reports, the trustworthiness of CVE IDs is tarnished. We firmly believe that CVE should focus on the interests of CVE consumers, not on vulnerability reporters who have been identified as repeat offenders and just want as many CVEs as possible tied to their name without making an effort.

 

 

2. How should we handle researchers like Lin Wang in the future?

 

If “researchers like Lin Wang” refers to unreliable vulnerability reporters with questionable reports lacking sufficient information, I think some type of restriction is required. I’d also suggest that MITRE would do well to invest in educating reporters who have quality issues, helping them improve their research and provide valuable work.

 

 

  1. A better path forward might be to do as suggested and flag the requester for use in future requests. Maybe we could develop some simple process for CNAs around what additional information or details might be requested in these cases. If the requester is flagged, a CNA such as MITRE could treat the request slightly differently and request additional information such as a PoC. This would likely cause extra work on the part of the CNA, but the assumption is that this case would not be the norm and wouldn’t happen often. In addition to the process, we’d have to define the parameters for when to flag a requester, and what would be required from them to get the flag removed.
  2. Do other folks have suggestions for what might be required of the requester in these cases?

 

As long as the aspiration is that the validation is done properly, this approach sounds good to me. I concur that it is something that should only impact very few vulnerability reporters. As such, the extra work should be minimal.

 

The goal of the CNA is not, and should not be, to do the research on behalf of the vulnerability reporter, but simply to double-check the validity of the report based on the existing information. If, for example, a PoC is provided and quick testing does not suggest an impact worse than a benign crash, it’s up to the vulnerability reporter to make a better effort. He or she has to prove that the issue is within scope instead of just guessing “or possibly have unspecified other impact”, as Lin Wang does.

 

I think it’s important to also consider having a way to flag CVEs that were assigned based on additional scrutiny by the CNA. That way, other CNAs and CVE consumers know that additional vetting was performed. Perhaps even include a comment describing what this vetting was, e.g. “PoC provided privately and a crash demonstrating a buffer overflow verified.” This would signal that even though the report appears to be questionable, further analysis has been conducted and the CVE ID can be trusted.

 

 

3. MITRE's inclusion criteria for what is a vulnerability.

  1. MITRE currently uses CNT2.2A as part of the criteria to decide whether something is a vulnerability. When the rule was proposed, there was discussion around issues such as this one, where a researcher claims a vulnerability that others might consider questionable. The consensus was that these assignments would be acceptable so long as processes were available to dispute and reject them. If the Board no longer feels CNT2.2A is a valid criterion for deciding whether something is a vulnerability, then a discussion may be needed regarding using only CNT2.2B.

 

I think this approach is a disservice to CVE consumers, and it’s good that you suggest revisiting that discussion. It causes organizations to waste time on issues that are either less severe than claimed or outright invalid. Such assignments should be kept to a minimum with a reasonable amount of effort.

 

As mentioned previously, I don’t think it’s the goal of CVE to aim for 100% accuracy, but some vetting should be performed to weed out questionable and obviously invalid requests. It’s problematic that CVEs are assigned to such requests with the (honestly “naive”) hope that someone else is going to dispute or reject them if they are wrong. This becomes even more problematic when there is not even a requirement for public PoCs in such cases, which would at least make crowd-sourced disputes more likely.

 

That said, even if there were public PoCs, we know that crowd-sourced disputes still won’t happen to the necessary degree.

 

 

4. The community does not seem incentivized to dispute CVE assignments

  1. As mentioned above, the use of CNT2.2A seemed acceptable so long as the dispute/reject processes worked.  However, as Carsten demonstrates, those who do the research that could be used to dispute or reject a CVE aren't incentivized to provide that information to CVE. Does this change how Board members feel about CNT2.2A?

 

As commented in a previous response, this only works (and even then not really) when it’s a CVE here and there, not when it’s a significant chunk of CVEs. The new approach with the CVE request form is generating too many of these. We shouldn’t expect a crowd-sourced approach to disputing and rejecting CVEs; there is no value whatsoever in it for the people who would have to spend time on it. Vulnerability reporters are focused on how they can request the most CVEs for their own findings, including making money off bug bounties, not on “wasting time” disputing the findings of others.

 

The extremely few CVE consumers who do spend resources validating CVE assignments are not going to go through the whole dispute process and provide sufficient evidence. In fact, it currently seems like much more of an effort to dispute a CVE than to request one. That’s worth pondering.

 

 

5. Preventing duplicates

  1. The comments about duplicates largely relate to some deficits that existed in CNT1 (Independently fixable). In particular, CNT1 did not provide guidance on how to handle situations where there is some evidence that two issues are the same vulnerability, but you are not certain. MITRE's policy in these cases was to follow the groupings the requester used when making the request. However, in the most recent rules revision, we proposed a change to CNT1 in which the issues would be merged into a single CVE ID if there is uncertainty on whether they are duplicates.  The change was approved and MITRE will be following it going forward.  The new rules should stop these types of possibly duplicate assignments from being made in the future. This issue partially explains why CVE-2017-15773 and CVE-2017-15738 are separate.

 

This is a good change. When in doubt, a merge is better than a split. We can always split later if more details become available (which rarely happens) to justify it.

 

 

6. How much certainty is required before making a decision on assigning a CVE ID?

  1. Taking a step back, much of this discussion circles around the degree of certainty needed to assign a CVE ID. How is this certainty obtained and what is the cost of obtaining the certainty? Is it worth the results?
  2. Carsten: What process did you use to determine that CVE-2017-15738 and CVE-2017-15773 were duplicates? How long did it take?

 

I simply looked at the crash output provided with one of the reports and compared it to others with similar characteristics. It was obvious almost immediately when processing one of them that it was likely a duplicate of the other. I then double-checked my suspicion by disassembling the affected library from the two products. Standard practice for us, really; no fancy or elaborate process needed.
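
Purely as an illustration of that first comparison step (this is not our actual tooling, and the crash-log format below is made up), a trivial check along these lines is enough to flag two reports whose crashes hit the same module and offset for closer inspection in a disassembler:

    /* dupcheck.c - hypothetical first-pass duplicate check for two crash
     * logs. Assumes each log contains a line starting with "FAULT:" that
     * names the faulting module and offset, e.g. "FAULT: libfoo.dll+0x1a2b".
     * The log format is invented purely for this illustration. */
    #include <stdio.h>
    #include <string.h>

    /* Copy the first "FAULT:" line from path into out; return 1 if found. */
    static int fault_line(const char *path, char *out, size_t outlen)
    {
        FILE *fp = fopen(path, "r");
        char line[512];
        int found = 0;

        if (fp == NULL)
            return 0;
        while (fgets(line, sizeof(line), fp) != NULL) {
            if (strncmp(line, "FAULT:", 6) == 0) {
                line[strcspn(line, "\r\n")] = '\0';  /* strip newline */
                strncpy(out, line, outlen - 1);
                out[outlen - 1] = '\0';
                found = 1;
                break;
            }
        }
        fclose(fp);
        return found;
    }

    int main(int argc, char **argv)
    {
        char a[512], b[512];

        if (argc != 3) {
            fprintf(stderr, "usage: %s <crash1.log> <crash2.log>\n", argv[0]);
            return 2;
        }
        if (!fault_line(argv[1], a, sizeof(a)) ||
            !fault_line(argv[2], b, sizeof(b))) {
            fprintf(stderr, "no FAULT: line found in one of the logs\n");
            return 2;
        }

        /* An identical module+offset is a strong hint of a duplicate;
         * confirm by disassembling the shared library in both products. */
        printf("%s\n%s\n=> %s\n", a, b,
               strcmp(a, b) == 0 ? "likely duplicate" : "distinct crashes");
        return 0;
    }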

 

It’s hard to say how long it took, as it was part of a larger effort to invalidate a large portion of that chunk of assigned CVEs. This is something we do regularly for VulnDB, so we’re talking only a few minutes, though. If I had just needed to invalidate that one issue, it would have taken longer to download and install the two products than to spot the duplicates and open the libraries to confirm. In some cases it could take longer, but anyone familiar with validating vulnerability reports should be able to perform this type of vetting in a very short time.

 

/Carsten

 


