Re: about that 5% - Microsoft Windows


  1. Re: about that 5%

    Obviously, there's a better metric than the one that came up 5%. I imagine
    it was used because they did not wish to explain a more appropriate one to a
    public unwilling to learn it. If 5% or more crash daily, how many crash once
    every other day? Shouldn't that somehow be included? Perhaps they would
    count as half a percentage each. However, we want a metric that starts at
    zero for no crashes ever, and maxes out at 100% for the worst possible case
    considered. Let us begin.

    The series 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... = 1. This is fairly well known.

    Let N be the number of computers out there running MS-Windows of the version
    which is to be considered.

    Let M be the number of computers which crash at least once daily, on
    average. These are each assigned a weight of 1.

    Then, for the rest of the computers, each is assigned a weight according to
    the formula:

    Let p_c = (# of crashes) / (# of days observed) for computer c. This must
    be less than one; otherwise we throw the machine in with the M, because on
    average it is crashing at least once per day.

    Then, the given c is assigned a weight, W_c, of 1 / (1 - log_2 p_c).

    This will be less than one, and scaled so that every halving of the crash
    rate p_c increases the denominator by one. For a computer that crashed
    once a day, W_c would be one.

    The final metric given to the public would then be:

    pain_MS = (M + Sum {over all c not included in M count} W_c) / N.

    You will find that 0% <= pain_MS <= 100%.
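The recipe above can be sketched in a few lines of Python (the names are illustrative, not from the post; the weight is written as 1 / (1 - log_2 p_c), the form under which each halving of p_c adds one to the denominator and W_c = 1 at p_c = 1):

```python
import math

def crash_weight(p: float) -> float:
    """Weight for a machine observed to crash p times per day.

    p >= 1 counts fully (these belong in the M bucket, weight 1);
    p == 0 contributes nothing; otherwise each halving of p adds
    one to the denominator: p = 1/2 -> 1/2, p = 1/4 -> 1/3, ...
    """
    if p >= 1.0:
        return 1.0
    if p <= 0.0:
        return 0.0
    return 1.0 / (1.0 - math.log2(p))

def pain_ms(rates) -> float:
    """pain_MS = (sum of all weights) / N, a fraction in [0, 1]."""
    rates = list(rates)
    return sum(crash_weight(p) for p in rates) / len(rates)

# Illustrative fleet of N = 10: two daily crashers, one crashing
# every other day, one every fourth day, six that never crash.
fleet = [1.0, 1.5, 0.5, 0.25] + [0.0] * 6
print(f"pain_MS = {pain_ms(fleet):.1%}")  # -> pain_MS = 28.3%
```

Note that the 5%-style count would report only the two daily crashers (20%) or, if the every-other-day machine were counted as half, 25%; the weighted metric adds the partial pain of the occasional crashers on top.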

    If you wish to represent computers that crash several times per day, I'm
    sure we could design a metric for that as well. The one I have included here
    will be higher in value than 5%, with just cause: it is likely truer to what
    is going on in the system. We wish to rate the pain caused by the crashes
    rather than make some abstract statement based on the arbitrary unit of a
    day. We should be even more pessimistic and somehow include the machines
    that are crashing multiple times daily, but perhaps someone could suggest
    how to rate these, because it is not clearly evident how to do it. I am
    currently thinking of a proof system, as with alcohol, which goes up to
    200 rather than a 100% max. That would probably make it possible, while
    retaining the zone up to 100 for "bad" and everything above it for "really
    bad."

    // u l i e n
    As you brace for what's to come
    We embrace the millenium
    You suffer from overload
    We evolve into info overlords

  2. Re: about that 5%

    talk.bizarre removed from newsgroups. I may be weird, but
    not *that* weird. Yet. :-)
    comp.os.linux.advocacy added to newsgroups; followups set thereto.

    Kent Paul Dolan wrote, on Thu, 7 Aug 2003 09:40:29 +0000 (UTC):
    > "// u l i e n" wrote:
    >> If you wish to represent computers that crash several times per day,
    >> I'm sure we could design a metric for that as well.

    > Better get on it then, the 5% figure is for computers whose MS-Windows
    > Operating Systems, WinXP or newer, crash _three or more_ times a day!
    > Obviously, any crashes at all, absent power failure ones, are a total
    > condemnation of M$-Windows quality and usability.
    > I wish I knew more about Poisson distributions; that one figure would
    > probably suffice to estimate the "at least one crash per day", which
    > by guess would be around 37% (rough cube root of 5%) to yield a 5%
    > crash three or more times a day result.
    > xanthian.

    A Poisson distribution is a standard mathematical/statistical tool, that
    much I know. Google should help (I'd have to :-) ).
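    For what it's worth, the guess can be checked numerically rather than
    eyeballed. A sketch (stdlib only), assuming every machine's daily crash
    count follows a Poisson distribution with one shared rate lam: bisect for
    the lam at which P(X >= 3) = 5%, then read off P(X >= 1). Under that
    assumption the "at least one crash per day" figure comes out nearer 56%
    than the cube-root guess of 37%.

```python
import math

def tail_ge(k: int, lam: float) -> float:
    """P(X >= k) for X ~ Poisson(lam), via the complementary CDF."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

# Bisect for the rate lam with P(X >= 3) = 5%; tail_ge is
# increasing in lam, and the bracket [0, 5] is an arbitrary
# but comfortably wide starting interval.
lo, hi = 0.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if tail_ge(3, mid) < 0.05:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

p_at_least_one = tail_ge(1, lam)  # = 1 - exp(-lam)
print(f"lam ~ {lam:.3f}, P(>=1 crash/day) ~ {p_at_least_one:.0%}")
```

    This single-rate model is of course the crudest possible one; a real fleet
    would mix heavily and lightly loaded machines with very different rates.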

    Another problem may be that 'on average, 5% crash per day' is not all
    that great a statistic; what, precisely, is it telling us?

    [1] That an "average computer" has a 5% probability of crashing during
    a given day? (More precisely, that the number of crashes divided by the
    number of computer-operating-hours is 1 in 480, the number of hours in
    20 days?)

    [2] That, if 100 computers are running Windows at midnight and are left
    to run doing "average" things, then by the succeeding midnight 5 of
    them, on average, will have rebooted, hung, or crashed at least once,
    and possibly more than once?

    [3] Something else?

    There's also the issue of what the machines are doing.
    Are they heavily loaded (simulations, raytrace rendering,
    serious gameplaying), lightly loaded (Solitaire, MS Word
    as the writer suffers mental block, web browsing), or
    sitting there playing the modern equivalent of "find the
    path through the maze and leak" on their screensavers? :-)
    (That bug, at least, was fixed some time ago. But I wonder
    what new bug has shown up in its place.)

    I can't say Linux is the ultimate in reliability but certainly
    Windows isn't either. :-)

    As for all crashes being a condemnation if they're not related
    to power failure, I'll simply point out some obvious
    other possibilities:

    [1] Insufficient power. This is arguably why many consumer
    computers crash in the first place; if the power supply
    can't keep its voltage up under the current load things are
    going to break. Most likely this is the issue behind broken hardware
    upgrades (e.g., placing a new drive in a machine).

    [2] Hardware duds. This is an obvious one, and it happens occasionally.

    [3] Badly-written drivers. This one's a 50-50, in some ways; who is
    ultimately responsible? It may be Microsoft, for publishing ambiguous
    specifications. It may be the hardware, for not behaving quite as the
    driver writer expects, or the hardware designer for making it so. It
    may be the driver writer. It may be someone else's driver entirely, or
    two cards interfering as they fight over a resource (probably an INT
    or DMA channel).

    And of course:

    [4] Badly-written system DLLs, and a kernel that (somehow)
    allows the system to become unstable if an application
    crashes. (How can we tell? We can't read the kernel
    source code or the system DLL source either.) In a good
    OS a crashing application should not render the entire
    system unstable, even if a system DLL is the culprit.
    Windows 2k and XP aspire to that and for the most part
    succeed, as far as I know. However, Linux, Unix and
    mainframe systems have had that from the beginning.

    ObJava: Not sure what the issues are regarding Java crashes.
    There are a number of things that could happen: thread deaths
    are probably the most obvious. Since Java depends on a JVM
    the reliability of Java is tied to that JVM, and the OS it
    sits on. Java should not crash, but I've seen a few too many emergency
    dumps (hs-error.log, IIRC) to be entirely satisfied. (On Linux and
    Solaris, yet.)

    It's still legal to go .sigless.
