Re: Final position RE: [CVEPRI] Handling new vulnerabilities discovered by Steve Christey
[Following this message I'm going to get quiet for a while. I have
a bunch of stuff I owe people and I'm going to be at Interop
frantically trying to get caught up on some other work. -mjr]
aleph1@SECURITYFOCUS.COM wrote:
> > you: defend them, help them, and nurture them.
>
>That is exactly what full disclosure did. It educated people
>about security by showing them the gory details of security
>vulnerabilities and defended them from unresponsive vendors.
Disclosing how to exploit vulnerabilities has educated
people about security in much the same manner that
you could educate people about the need for air bags by
randomly forcing cars off the road with a dump truck.
Certainly, it gets the point across, but there are less
damaging ways that are equally effective. In the real
world, we experiment on crash test dummies. Full
disclosure fans experiment on unaware, innocent people.
> > There _ARE_ viable alternatives and I have proffered them,
>
>Feel free to point them out to me. I've not had the pleasure
>of listening to your talks on the subject.
Check out:
http://web.ranum.com/pubs/
and start with the USENIX ;Login: articles and, if you're
still with me, work your way through the MP3 of my Black
Hat keynote. The "Have a Cocktail" article also has some points
on the topic.
In retrospect, I'm sure I've done an inadequate job of
hammering on the responsibility vendors should be
owning up to, though I know I've been quoted in a few
magazine articles recently pertaining to the damage
UCITA could perpetuate. I've been pounding on the
hacker side of the equation more heavily because
that's where I place the greatest blame: negligence
on the part of the vendors is not as morally repugnant
as the deliberate malice being perpetrated by hackers.
>Please, enlighten me.
I think it's actually pretty simple: some organization
that wants that kind of attention can step forward
and implement a whole lot of it unilaterally. The
rest would require some lawyering and market
lobbying, but I think it's doable. Someone like ICSA
could do it. Alan Paller from SANS could do it.
Anyone who wants that kind of limelight and has
the marketing clout can do it, which means to
me that it will eventually happen if it's worth having
happen.
The scenario goes like this:
1) Establish and publish a baseline definition of
what constitutes "responsible disclosure"
This entails:
a) How to contact a vendor and who at the
vendor to inform (e.g., a notarized, dated letter
or email copied to a trusted 3rd party; more
on this later)
b) Established minimum time to wait for a
response from the vendor
c) Established minimum time to notify the
trusted 3rd party
d) Established minimum time(s) in which
vendor must respond to problem or be
in default. Vendors complying will:
- notify as to patch schedule
- acknowledge bug
- give the disclosant a cookie
- notify trusted 3rd party
e) If vendor misses patch release (other
than with explained schedule slip) or
backs out patch, the vendor is in
default.
f) If the vendor gets the patch out as
promised, the trusted 3rd party:
- updates database to show closed
vulnerability
- gives the disclosant a cookie
g) The trusted 3rd party also keeps a
database of vulnerabilities that have
been opened or closed not through
this process
h) The trusted 3rd party keeps a database
of vulnerabilities that have been disclosed
harmfully (i.e.,
- without adequate notice to vendor
- with an exploit tool
- with exploit details
as well as the handles and believed
identities of the disclosants who are
not abiding by protocol.
i) The trusted 3rd party, as an open
information provider, makes its
database available to the public
in read-only mode
j) The trusted 3rd party establishes
a redress committee which receives
and dispatches requests to correct
errors or misinformation in the database.
Corrections and deliberation are
retained in the database
k) The trusted 3rd party maintains
public online statistics about
number of open vulnerabilities/vendor,
etc.
So step 1 is complete when an acceptable
standard is established.
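Purely as an illustration of the bookkeeping in step 1, here's a
minimal sketch (in Python) of the case tracking the trusted 3rd
party might maintain. Every name, state, and deadline below is a
hypothetical stand-in for items (a) through (k), not a proposed
standard:

    from dataclasses import dataclass
    from datetime import date, timedelta
    from enum import Enum

    class Status(Enum):
        REPORTED = "reported"            # (a) vendor notified, 3rd party copied
        ACKNOWLEDGED = "acknowledged"    # (d) vendor acknowledged the bug
        CLOSED = "closed"                # (f) patch shipped as promised
        VENDOR_IN_DEFAULT = "default"    # (d)/(e) deadline missed or patch backed out
        DISCLOSED_HARMFULLY = "harmful"  # (h) disclosed outside the protocol

    # Hypothetical minimum waiting period (items b and d); the real
    # number would come from the published baseline definition.
    VENDOR_RESPONSE_DEADLINE = timedelta(days=14)

    @dataclass
    class Case:
        vendor: str
        disclosant: str
        reported_on: date
        status: Status = Status.REPORTED

        def check_default(self, today: date) -> None:
            # (d) a vendor that never responds slips into default
            if (self.status is Status.REPORTED
                    and today - self.reported_on > VENDOR_RESPONSE_DEADLINE):
                self.status = Status.VENDOR_IN_DEFAULT

    def open_cases_per_vendor(cases: list[Case]) -> dict[str, int]:
        # (k) public statistics: open vulnerabilities per vendor
        stats: dict[str, int] = {}
        for c in cases:
            if c.status is not Status.CLOSED:
                stats[c.vendor] = stats.get(c.vendor, 0) + 1
        return stats

The read-only public view (item i) would just be this database,
plus the correction log kept by the redress committee (item j).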
1a) Organizations such as USENIX, SANS,
ICSA, and AICPA begin to refer to the database as a
resource; establish a branding process
whereby vendors that are in excellent standing
in following the bug fix process are entitled
to wear a logo stating that they are an
approved vendor, etc. Establish a branding
process whereby individuals who have
properly disclosed vulnerabilities and who
are in good standing are entitled to wear a
logo or job title acronym to indicate
that they have lots of cookies and are
k-rad dudez. Encourage human resources
departments and employers to take this
standing seriously.
2) [Breaking legal ground] Having the
database and procedures for step #1 for a
while allows establishment of a baseline of
acceptable practice and some clear litmus
tests for responsible practice.
Lobby Washington to require that only
vendors whose products are in good standing
security-vulnerability-wise be acceptable
for purchase for federal computer systems
that are connected to Internet networks.
Encourage audit firms, the AICPA, the SEC, etc.,
and regulatory influencers to tell clients not
to use software on Internet systems that
is not in good standing. What we're trying
to do is build a case [This entails not letting UCITA
pass] that vendors that consistently flout
acceptable practice should be held liable.
2a) After enough time has gone by and the
project has enough momentum, engage in
the high-stakes game: file for an
injunction restraining a non-compliant vendor
from shipping a particularly egregious piece
of vulnerable software on the grounds
that it is a well-documented public
menace.
3) The other side of the coin: after there
are enough documented cases of
responsible disclosure, someone
with enough cash (hopefully me
by then!) ;) can step up to the plate
and commence civil proceedings for
damages against someone releasing
a tool in violation of responsible
disclosure practices.
I believe that this approach would make
everyone reasonably happy - except for the few
hackers who were not satisfied with cookies,
and the vendors who found their products
disqualified for use until they fixed them.
Fantasy? Sure. But you know what? I bet
something much like this is going to happen
eventually. Why? Because if enough people
start thinking it's going to happen, it will.
That's how societies (eventually) decide what
is acceptable and what isn't.
>I've simply pointed out the fallacy in
>what I believed was your statement claiming vulnerability information
>has no tangible value by showing that it has value to your
>company and product.
I apologize; I was attempting fine detail with words. The
word at issue is "tangible" -- I meant that in its pure form
vulnerability information is not marketable per se (i.e., it has
no tangible value) because of the problem inherent in
selling information and preventing its propagation once
sold. In fact, it may be (as you've already pointed out)
extortion to try to sell such information under some
circumstances. Which is, in itself, a fascinating
argument: a tacit admission of the knowledge's
potential for harm.
> > If you read CSI's statistics, the amount of measured lossage
> > due to security problems is increasing equally rapidly.
>
>You must own a copy of "How to Lie With Statistics".
I do; it's one of my favorite books and I generally recommend
it to anyone who deals with the topic. I had a really enlightened
stats professor who used it as the textbook for an entire
semester of grad-level stats... But I digress...
If you're trying to imply that I'm slanting numbers, I am not -
the numbers I'm citing are open to two completely different
and equally valid interpretations. It's one of the options Huff
never covered. See below:
>While
>you are indeed correct that the total number of incidents
>has grown the Internet itself has grown at a faster rate.
>Its my firm belief that the total *percentage* of
>vulnerable hosts on the Internet has gone down.
Even granting that may be so, the fact that more machines
are being broken into and more money is being lost (and
spent) on security indicates that society as a whole is
paying a higher cost than before. So even if the ratio
of people getting impacted is dropping, the number of people
getting impacted is unacceptably high.
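To make the arithmetic concrete, here's a toy calculation with
made-up figures showing how both readings can be true at once:

    # Made-up figures, purely illustrative
    hosts_then, pct_vulnerable_then = 1_000_000, 0.10
    hosts_now, pct_vulnerable_now = 20_000_000, 0.05

    vulnerable_then = hosts_then * pct_vulnerable_then  # 100,000 hosts
    vulnerable_now = hosts_now * pct_vulnerable_now     # 1,000,000 hosts

    # The *percentage* of vulnerable hosts has halved, yet the
    # *number* of vulnerable hosts, and thus the absolute harm,
    # has grown tenfold.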
Just because lung cancer treatments have made survival
statistics look much more attractive in terms of deaths/smoker
does not make smoking a good idea!
I.e., I'm not interested in arguing about statistics - the
point is that the amount of suffering has increased.
> > SPEND YOUR TIME BUILDING THINGS INSTEAD OF DESTROYING THEM
> > Or is that too obvious?
>
>The basic flaw in your argument is that you equate destroying things
>with "bad".
Uh, yeah. Duh?
> I guess Consumer Reports should go out of business.
Consumer Reports is _EXTREMELY_ responsible with how
they manage the information they uncover. If they weren't,
they'd have been litigated out of existence by now. The
way Consumer Reports handles problems and the way the
full disclosure crowd handles vulnerabilities have no
similarities at all.
If full disclosure ideologues were running Consumer Reports,
the first time they discovered that a particular car could be
caused to explode by rear-ending it, they'd send people out
on the streets with dump trucks to rear-end innocent drivers
"to help them understand the situation" and to convey
"the seriousness of the vulnerability" to the vendor. By
the way, if full disclosure ideologues were running Consumer
Reports, they'd have been litigated out of existence by
now.
Consumer Reports is more like a "code review" team
than a bunch of hackers. They are worthy of respect
and you insult them by trying to equate them with
hackers.
>Whether you like it or not society needs people that try
>to break things. Sometimes thats the only way to make them
>better.
Sure. Carefully, and responsibly, under controlled
circumstances:
- Code reviews
- Independent 3rd party reviews
I've got a whole lab full of computers and 3 people working
full time who do nothing but try to cause my products to
fail. So I'm completely in agreement with you.
Where I disagree with you is over the value of uncontrolled,
irresponsible, random efforts to destroy things - the
results of which are then broadcast to the world at
large.
>Hardly defensive. There is no need for me to defend what can't
>be attacked.
Oh, the concept of full disclosure can be, and is being, attacked.
I'm putting my professional reputation on the line to do it.
What I think you mean to say is:
> This conversation reminds me of a bunch of old men
>shouting that a storm is coming yet not being able to do anything
>to stop it.
"Because I don't think you can do anything about it, I
don't care what you think..."
I _do_ believe the good guys can do something, and I believe
it will happen soon. It won't happen by preaching to guys
like you - you've already made up your mind what you want
to do, and are capable of rationalizing away any objections
I may offer. The way it will happen is by using these kinds
of discussions as a way of influencing the people who
establish that reward structure the hackers crave: wake
them up, get them to realize what's going on, and then
perhaps marketing yourself by publishing a new bug
will become a great way to kill a budding career in security.
See? I don't need to convince _you_ - I need to convince
the CIOs, venture capitalists, HR Directors, and journalists.
But you're a convenient sounding board for the opposing
view.
Thanks,
mjr.
---
Marcus J. Ranum Chief Technology Officer, Network Flight Recorder, Inc.
Work: http://www.nfr.net
Play: http://pubweb.nfr.net/~mjr