Re: The Summer Olympic W3C Validator Games - Linux



Thread: Re: The Summer Olympic W3C Validator Games

  1. Re: The Summer Olympic W3C Validator Games

    In article ,
    Richard Rasker wrote:
    > The rules of the game are simple: a particular IT-outfit's home or most
    > prominent site (e.g. www.google.com) is visited, and the actual resulting
    > Web URL is fed into W3C's validation service, without tweaking or tuning.
    > Then it's a simple matter of count-the-errors.


    That's not up to Olympic standards. Consider this error:

    <a href="http://example.com/page?foo=1&bar=2">something</a>

    The error there is that the & should be encoded as &amp; but it is not.
    A page with this error is likely to have it on every occurrence of a
    link to a URL that has GET arguments, and the validator flags every one.
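    [An editorial aside: the encoding fix can be demonstrated with Python's
    standard-library `html.escape`; the URL below is made up for illustration.]

```python
from html import escape

# A URL with GET arguments; a raw "&" is invalid inside an HTML attribute.
raw_url = "http://example.com/search?q=linux&lang=en"  # hypothetical URL

# html.escape rewrites "&" as "&amp;", which is what the validator expects.
href = escape(raw_url)
print(f'<a href="{href}">something</a>')
# -> <a href="http://example.com/search?q=linux&amp;lang=en">something</a>
```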

    This means that two pages, whose authors made this error, are likely to
    get different scores from it--the longer page will get more errors.

    Also, for some mistakes, the validator generates more than one error
    message, so two pages with the same number of mistakes on them can
    generate quite different error counts.

    A proper Olympic event should either separate the pages into different
    classes for competition by length (like boxing has weight classes), or
    should normalize based on page length, or just count the first
    occurrence of each error.

    It should also probably take into account the severity of the error.
    There are some errors that are so common that all browsers cope with
    them fine. One of those should count for fewer points than an error that
    will make the page display wrong in some major browsers.
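    [An editorial aside: the scoring rules proposed above -- count each
    distinct error once, weight by severity, normalize by page length --
    could be sketched as follows. The severity labels and point values
    are invented for illustration.]

```python
def olympic_score(errors, page_length_bytes, weights=None):
    """Score a page's validator output.

    `errors` is a list of (message, severity) pairs as reported by the
    validator; `weights` maps severity labels to points (made-up scheme).
    Each distinct message is counted once, then the weighted total is
    normalized to points per kilobyte of source.
    """
    weights = weights or {"cosmetic": 1, "breaks-rendering": 10}
    distinct = {}
    for message, severity in errors:
        distinct.setdefault(message, severity)  # first occurrence wins
    raw = sum(weights.get(sev, 1) for sev in distinct.values())
    return raw / max(page_length_bytes / 1024, 1)

# Two duplicate cosmetic errors count once; the severe one dominates.
errs = [("unencoded &", "cosmetic"),
        ("unencoded &", "cosmetic"),
        ("bad tag", "breaks-rendering")]
print(olympic_score(errs, 2048))  # -> 5.5
```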

    --
    --Tim Smith

  2. Re: The Summer Olympic W3C Validator Games

    Tim Smith wrote:

    > In article ,
    > Richard Rasker wrote:
    >> The rules of the game are simple: a particular IT-outfit's home or most
    >> prominent site (e.g. www.google.com) is visited, and the actual resulting
    >> Web URL is fed into W3C's validation service, without tweaking or tuning.
    >> Then it's a simple matter of count-the-errors.

    >
    > That's not up to Olympic standards. Consider this error:
    >
    > <a href="http://example.com/page?foo=1&bar=2">something</a>
    >
    > The error there is that the & should be encoded as &amp; but it is not.
    > A page with this error is likely to have it on every occurrence of a
    > link to a URL that has GET arguments, and the validator flags every one.
    >
    > This means that two pages, whose authors made this error, are likely to
    > get different scores from it--the longer page will get more errors.
    >
    > Also, for some mistakes, the validator generates more than one error
    > message, so two pages with the same number of mistakes on them can
    > generate quite different error counts.
    >
    > A proper Olympic event should either separate the pages into different
    > classes for competition by length (like boxing has weight classes), or
    > should normalize based on page length, or just count the first
    > occurrence of each error.
    >
    > It should also probably take into account the severity of the error.
    > There are some errors that are so common that all browsers cope with
    > them fine. One of those should count for fewer points than an error that
    > will make the page display wrong in some major browsers.


    These are all quite valid points -- but a counting system which takes into
    account things such as source code size, the severity of errors, and errors
    triggered by other errors involves a huge amount of work. But I know that
    my little "survey" is only slightly more significant than DooFuS' frequent
    games of Google roulette, "proving" how badly Linux sucks -- but I'm still
    convinced that pages with hundreds of validation errors really suck, even
    if 90% of the errors found consist of duplicates and second-stage errors.

    If IBM manages to deliver 30k of clean code, then why do Microsoft and the
    BSA (with a comparable amount of code on a comparable Web page) do such a
    horrible job? I think that Microsoft's Web page code in particular reflects
    their long-standing philosophy: "Never mind how crappily the code is
    written -- as long as it looks good on-screen, it's OK". This sort of
    sloppiness, negligence and incompetence leads to no end of security and
    maintenance trouble; just remember previous versions of Internet Explorer,
    with critical security holes every two or three days, for years on end,
    resulting in hundreds of millions of compromised Windows boxes worldwide.

    Richard Rasker
    --
    http://www.linetec.nl

  3. Re: The Summer Olympic W3C Validator Games

    In article ,
    Richard Rasker wrote:
    > > A proper Olympic event should either separate the pages into different
    > > classes for competition by length (like boxing has weight classes), or
    > > should normalize based on page length, or just count the first
    > > occurrence of each error.
    > >
    > > It should also probably take into account the severity of the error.
    > > There are some errors that are so common that all browsers cope with
    > > them fine. One of those should count for fewer points than an error that
    > > will make the page display wrong in some major browsers.

    >
    > These are all quite valid points -- but a counting system which takes into
    > account things such as source code size, the severity of errors, and errors
    > triggered by other errors involves a huge amount of work. But I know that


    Well, then, it's a good thing the Olympics are only every four years! :-)


    --
    --Tim Smith

  4. Re: The Summer Olympic W3C Validator Games

    On Sat, 05 Jul 2008 14:24:13 -0700, Tim Smith wrote:

    > In article ,
    > Richard Rasker wrote:
    >>> A proper Olympic event should either separate the pages into different
    >>> classes for competition by length (like boxing has weight classes), or
    >>> should normalize based on page length, or just count the first
    >>> occurrence of each error.
    >>>
    >>> It should also probably take into account the severity of the error.
    >>> There are some errors that are so common that all browsers cope with
    >>> them fine. One of those should count for fewer points than an error that
    >>> will make the page display wrong in some major browsers.

    >>
    >> These are all quite valid points -- but a counting system which takes into
    >> account things such as source code size, the severity of errors, and errors
    >> triggered by other errors involves a huge amount of work. But I know that

    >
    > Well, then, it's a good thing the Olympics are only every four years! :-)


    Sadly, the Olympics have become less popular in the USA in recent years.
    I suspect it's because "professional athletes" are now allowed, although
    the truth is that back in the Cold War years Russia was already doing that.

    I used to love watching the Olympics when I was a kid back in the '60s and
    early '70s.

    These days I rarely watch sports, because the athletes are overpaid
    crybabies who are into all kinds of crap and are very poor role models for
    the kids.

    I hate golf, but Tiger Woods is one exception to the rule.
    Tiger Woods is a class act.
    So are some of the Nascar drivers.

    I predict the Olympics is going to be a huge bust here in the USA.
    Sadly.


    --
    Moshe Goldfarb
    Collector of soaps from around the globe.
    Please visit The Hall of Linux Idiots:
    http://linuxidiots.blogspot.com/
