Thread: DICOM Conformance Tests

  1. DICOM Conformance Tests

    Hi there,

    A while ago I posted about some of the well known issues in DICOM I
    have found:

    http://groups.google.com/group/comp....557b61e9c5ea68

    I think it would be very useful to the DICOM community to turn this
    into a full DICOM conformance test data set. For example, in the XML
    world they have:

    [XML W3C Conformance Test Suite]
    * http://www.w3.org/XML/Test/xmlconf-20031210.html

    This clearly indicates which part of the standard is being tested
    and, more importantly, what the correct behavior of a validating
    parser and of a non-validating parser is.

    My current thought is to get started from the current gdcmData(*)
    test suite. One of the main issues will be the size of the data. I
    think that for testing most of Part 5 a small SOP class will be
    required (I am thinking of Detached Patient Management).

    I will need two categories: DICOM files valid for a non-validating
    parser (Part 5 only), and DICOM files valid for a validating parser
    (Parts 3 & 5). This gives the following layout:

    - valid: contains valid files that parsers are required to accept
    - invalid: contains invalid files that (validating) parsers are
    required to reject
    - not-part3: contains files that can be parsed but are not correct
    with respect to Part 3.

    It is relatively easy to produce a corrected version of an invalid file
    in the 'invalid' category. It is slightly harder to produce a valid
    version of a 'not-part3' file, since there may be multiple ways to
    satisfy a condition (the problem of cascading errors).

    For example, for the 'gdcm'-vendor test suite we would have:

    ..
    |-- gdcm_invalid.pdf
    |-- gdcm_not-part3.pdf
    |-- gdcm_readme.txt
    |-- gdcm_valid.pdf
    |-- invalid
    |   |-- P01
    |   |   |-- gdcm01i01.dcm
    |   |   |-- corrected
    |   |   |   |-- gdcm01i01.dcm
    ....
    |-- not-part3
    |   |-- P02
    |   |   |-- gdcm02n01.dcm
    ....
    |-- valid
    |   |-- P03
    |   |   |-- gdcm03i01.dcm
    ....
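
    As a rough sketch of how a harness could consume this layout (purely
    illustrative: parse_part5 and validate_part3 are hypothetical stand-ins
    for whatever toolkit is under test):

    # Hypothetical conformance-harness sketch: walk the proposed layout and
    # check that the toolkit under test accepts/rejects each category as expected.
    import os

    def run_suite(root, parse_part5, validate_part3):
        """parse_part5/validate_part3 are callables returning True on success."""
        expectations = {
            "valid":     lambda f: parse_part5(f) and validate_part3(f),
            "invalid":   lambda f: not parse_part5(f),
            "not-part3": lambda f: parse_part5(f) and not validate_part3(f),
        }
        failures = []
        for category, expected in expectations.items():
            for dirpath, _, files in os.walk(os.path.join(root, category)):
                if os.path.basename(dirpath) == "corrected":
                    continue   # corrected versions are reference material, not test input
                for name in files:
                    path = os.path.join(dirpath, name)
                    if name.endswith(".dcm") and not expected(path):
                        failures.append((category, path))
        return failures

    # e.g. failures = run_suite("gdcm-suite", my_parser, my_validator)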

    If someone has already started something, I'd really appreciate
    feedback and suggestions on how to get started. There is an infinite
    number of possible misinterpretations of the standard, but I'd like to
    keep the repository clean and simple.

    Thanks for your time,
    -Mathieu
    (*) http://sourceforge.net/project/showf...kage_id=284781

  2. Re: DICOM Conformance Tests

    Mathieu Malaterre wrote:
    > A while ago I posted about some of the well known issues in DICOM I
    > have found: [...]
    > I think it would be very useful to the DICOM community to turn this
    > into a full DICOM conformance test data set.


    Actually I am not completely certain what exactly you want to
    achieve with the collection of test objects you are discussing
    in your original post:

    - Ability of a DICOM File-set reader to successfully read (parse)
    all correct (fully conformant) SOP instances of a certain SOP class
    (all SOP classes)?
    - Ability of a DICOM Storage SCP to successfully receive and
    store ... as above?
    - Ability of a DICOM FSR or Storage SCP to read certain kinds of
    faulty objects
    - Ability of a DICOM FSR or Storage SCP to **process** certain kinds of
    correct or faulty objects (display, compress, perform measurements, perform
    3D reconstruction...)

    You only seem to discuss matters from a parser perspective here - that
    alone is of little value. The interpretation of data objects is always
    done for a specific purpose, and the interpretation is successful only
    if that purpose can be achieved. A parser that successfully reads a
    certain faulty file but cannot display the image is not of great use
    if part of a DICOM viewer. Also, how would you define a "validating parser"?
    I don't think that this XML concept directly translates to any DICOM application.
    Of course there are DICOM checkers such as David Clunie's dciodvfy or OFFIS's
    DCMCHECK that perform something very similar to a "validating parser", but
    only for the purpose of validation.
    Furthermore, a collection of all kinds of errors that one has ever observed
    in DICOM objects "in the wild" would be enormously big, and I am not sure
    if it is reasonable to expect a DICOM reader to be able to successfully
    process every type of garbage that bad implementations have ever spit out.
    That would remove any (of the apparently rather little) incentive from
    developers to produce correct objects.

    That said, a public "knowledge base" of known problems with existing DICOM
    systems that have an installed base (for example, the well-known lossless JPEG bugs,
    the VOI LUT issues with a certain CR vendor, etc.) would be very helpful, if
    the problem could be clearly described and sample objects given. I am not sure
    how much vendors would like their names to appear in such a list, though :-)
    Note, however, that in 10 years I have only rarely seen DICOM files that would
    not produce at least one error when passed through DCMCHECK, so there is some
    risk that a collection of all "faulty DICOM files" comes very close to a collection
    of all DICOM file (types) ever produced.

    Best regards,
    Marco Eichelberg


  3. Re: DICOM Conformance Tests

    Hi Marco,

    On Jul 23, 10:03 am, Marco Eichelberg
    wrote:
    > Mathieu Malaterre wrote:
    > > A while ago I posted about some of the well known issues in DICOM I
    > > have found: [...]
    > > I think it would be very useful to the DICOM community to turn this
    > > into a full DICOM conformance test data set.

    >
    > Actually I am not completely certain what exactly you want to
    > achieve with the collection of test objects you are discussing
    > in your original post:
    >
    > - Ability of a DICOM File-set reader to successfully read (parse)
    > all correct (fully conformant) SOP instances of a certain SOP class
    > (all SOP classes)?


    I'll focus on this one, as I believe a file needs to be read before
    sending it. So this is where most errors can (and should) be caught.

    > You only seem to discuss matters from a parser perspective here - that
    > alone is of little value.


    Why is that?

    > The interpretation of data objects is always
    > done for a specific purpose, and the interpretation is successful only
    > if that purpose can be achieved. A parser that successfully reads a
    > certain faulty file but cannot display the image is not of great use
    > if part of a DICOM viewer.


    I am just tired of broken DICOM files where it is not clear whether I
    am required to support them or not. A perfect example, IMHO, is the
    following file:

    http://cvs.creatis.insa-lyon.fr/view..._00e0_Item.dcm

    Data Element (00e1,42,ELSCINT1) is an SQ element. Since most
    toolkits do not know this, parsing the explicit-length SQ as
    a binary blob goes very smoothly in all of them. I'd like
    this file (or something similar) to be archived somewhere with a red
    flag: 'your toolkit should not parse this file'.
    The same goes, for example, for a Data Element with VR=DS and a 32-bit
    length: it can be stored in Implicit VR but not in Explicit VR.
    If you don't pay attention, these two examples can pass your regression
    tests, but they should not.
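
    For reference, a small sketch of that encoding asymmetry (illustrative
    only: the tag used is made up, and this is not a complete writer, just
    the Part 5 length-field sizes):

    # Sketch: a DS value can carry a 32-bit length in Implicit VR, but in
    # Explicit VR "short" VRs like DS only get a 16-bit length field.
    import struct

    def encode_implicit_le(group, element, value: bytes) -> bytes:
        # tag (2+2 bytes) + 32-bit length + value
        return struct.pack("<HHI", group, element, len(value)) + value

    def encode_explicit_le_ds(group, element, value: bytes) -> bytes:
        if len(value) > 0xFFFE:  # 16-bit length field, even lengths only
            raise ValueError("DS value too long for Explicit VR")
        # tag + VR (2 bytes) + 16-bit length + value
        return (struct.pack("<HH", group, element) + b"DS"
                + struct.pack("<H", len(value)) + value)

    big_ds = b"1.0\\" * 30000                        # ~120 kB of decimal strings
    encode_implicit_le(0x7fe1, 0x0010, big_ds)       # fine: 32-bit length
    # encode_explicit_le_ds(0x7fe1, 0x0010, big_ds)  # raises: cannot be expressed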

    > Also, how would you define a "validating parser"?


    Simple things such as an 18-byte VR=AE data element. Most toolkits will
    'parse' it, but should not.
    The same goes for the following file:

    http://cvs.creatis.insa-lyon.fr/view...ACR_NEMA_1.acr

    Everything seems to go smoothly? Well, anyone who has a
    private dictionary which says that (0009,31,SIEMENS MED) is VR=UL should
    just reject this file. No: there are only two bytes, so it cannot
    possibly be VR=UL.
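
    The kind of checks I have in mind really are that simple; a sketch
    (the length rules come from the VR definitions in PS 3.5):

    # Sketch of trivial VR-level checks a "validating parser" could apply.
    def check_vr(vr: str, value: bytes) -> list:
        errors = []
        if vr == "AE" and len(value) > 16:
            errors.append("AE value exceeds 16 bytes")
        if vr == "UL" and len(value) % 4 != 0:
            errors.append("UL length is not a multiple of 4 bytes")
        if vr == "DS" and any(len(c) > 16 for c in value.split(b"\\")):
            errors.append("DS component exceeds 16 bytes")
        return errors

    print(check_vr("AE", b"SOME_VERY_LONG_AET"))  # 18 bytes -> flagged
    print(check_vr("UL", b"\x01\x00"))            # 2 bytes  -> flagged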

    > I don't think that this XML concept directly translates to any DICOM application.
    > Of course there are DICOM checkers such as David Clunies dciodvfy of OFFIS's
    > DCMCHECK that perform something very similar to a "validating parser", but
    > only for the purpose of validation.


    So indeed you must have some kind of regression system that says: this
    file is correct, this file should raise an exception in DCMCHECK,
    right?

    > Furthermore, a collection of all kinds of errors that one has ever observed
    > in DICOM objects "in the wild" would be enormously big,


    tell me about it

    > and I am not sure
    > if it is reasonable to expect from a DICOM reader to be ably to successfully
    > process every type of garbage that bad implementations have ever spit out.
    > That would remove any (of the apparently rather little) incentive from
    > developers to produce correct objects.


    That's not my goal here.

    > That said, a public "knowledge base" of known problems with existing DICOM
    > systems that have an installed base (for example, the well-known lossless JPEG bugs,
    > the VOI LUT issues with a certain CR vendor, etc.) would be very helpful, if
    > the problem could be clearly described and sample objects given. I am not sure
    > how much vendors would like their names appear in such a list, though :-)
    > Note, however, that in 10 years I have only rarely seen DICOM files that would
    > not produce at least one error when passed through DCMCHECK, so there is some
    > risk that a collection of all "faulty DICOM files" comes very close to a collection
    > of all DICOM file (types) ever produced.


    Maybe I should narrow it down to a subset of severe errors, or those
    that most people have struggled with:

    http://gdcm.sourceforge.net/wiki/ind...Supported#TODO

    I guess 99% of DICOM files are 'parsable', so I should not spend too much
    time on those binary blobs that look like DICOM files but are just a
    few bits away from true DICOM files.

    Thanks for your comments,

    -Mathieu

  4. Re: DICOM Conformance Tests

    Hi Mathieu,

    > I'll focus on this one, as I believe a file needs to be read before
    > sending it. So this is where most errors can (and should) be caught.


    I would think that the generator of a DICOM object (like the imaging
    modality) would in most cases directly create objects in memory.
    Whether or not any standard DICOM file handling is involved before
    an object is sent - typically over a DICOM Storage service - is
    an implementation issue. The generator, however, is in most cases
    the only system responsible for faulty objects. Everyone further down
    the "food chain" only has the alternatives of failing to process
    the faulty object or ignoring the error and, as a consequence, possibly
    forwarding a faulty object (or part thereof).

    The idea of what you call a "validating parser" was actually implemented
    in a diagnostic workstation of one of the big vendors many years ago.
    That system would check the conformance of an image when read, and would
    refuse to display, process or forward it when errors were found. At
    the same time another big vendor had all systems produce UIDs with
    leading zeroes in a non-zero component, so the diagnostic workstation
    was unable to display any image from that vendor (and most other DICOM
    images as well, since most were faulty in one way or another). That
    was certainly not an acceptable implementation strategy from the users'
    perspective and was quickly changed.

    What I am trying to say is: I believe that the success of DICOM is in
    large part due to an implementation strategy that tries to be as lenient
    as possible when reading objects and (hopefully) as strict as possible
    when writing objects. At which point the leniency ends is of course
    implementation dependent and can be subject of discussion.

    > I am just tired of broken DICOM files where it is not clear whether I
    > am required to support them or not.


    Well, everyone is tired of broken DICOM files :-)
    Formally you are not required to support any broken DICOM file. If you
    decide to do so nevertheless, this is purely to make life easier for the users
    of your tools. So there is no simple answer.

    > Data Element (00e1,42,ELSCINT1) is an SQ element. Since most
    > toolkits do not know this, parsing the explicit-length SQ as
    > a binary blob goes very smoothly in all of them. I'd like
    > this file (or something similar) to be archived somewhere with a red
    > flag: 'your toolkit should not parse this file'.


    I disagree. Unless your application actually needs to interpret the
    content of the private sequence, it would be a perfectly valid approach
    to just ignore it, and thus succeed in processing the image.
    A checker tool should of course "flag" the error, but a simple viewer,
    for example, should just work.

    > The same goes, for example, for a Data Element with VR=DS and a 32-bit
    > length: it can be stored in Implicit VR but not in Explicit VR.


    And that's not even an error - that's a weakness of DICOM's design.
    Somebody tried to save a few bits when creating the rules for Explicit VR
    and caused a nightmare for everybody else years later. But it's not a fault.
    There are objects that can be expressed in Implicit VR but not in Explicit
    VR and there are objects that can be expressed in compressed transfer
    syntax but not uncompressed (think of a 16-bit grayscale image
    with 65535 x 65535 pixels). Unfortunately transfer syntaxes in DICOM are
    not exchangeable and independent of OSI layer 7 as they should be according
    to the OSI reference model.
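
    To make the pixel-size example concrete (just a back-of-the-envelope
    check):

    # Uncompressed Pixel Data uses a single 32-bit length field, so the value
    # cannot exceed 0xFFFFFFFE bytes. A 65535 x 65535 16-bit image does not fit,
    # while an encapsulated (compressed) encoding splits it into fragments.
    rows = cols = 65535
    bytes_needed = rows * cols * 2       # 16 bits per pixel
    print(bytes_needed)                  # 8589672450 bytes, about 8.6 GB
    print(bytes_needed > 0xFFFFFFFE)     # True: not expressible uncompressed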

    >> Also, how would you define a "validating parser"?

    > Simple things such as an 18-byte VR=AE data element. Most toolkits will
    > 'parse' it, but should not.


    Again: A checker should flag this as an error, most applications should
    ignore it, unless they actually need to do something with the data element
    (such as copying it into an A-ASSOCIATE-RQ field where an 18-byte AE should
    never occur of course).

    > So indeed you must have some kind of regression system that says: this
    > file is correct, this file should raise an exception in DCMCHECK,
    > right?


    Actually DCMTK does not have a regression system at all (shame on us!)
    But for DCMCHECK itself we do have a collection of objects where each
    object should raise certain error messages - most of those are manually
    created, though.

    > Maybe I should narrow it down to a subset of severe errors, or those
    > that most people have struggled with:


    Yes. Criteria could be
    - faulty object has been created by a product "in the wild" (not by a prototype)
    - faulty object has created a problem "in the wild" (e.g. viewer refused display)
    - source of faulty object (device, version number) and the fault as such are well known
    - a reasonable workaround exists

    Criterion one sorts out objects that will never be produced/seen by "real users".
    Criterion two reduces the set to errors that have actually hurt someone.
    Criterion three makes sure that only errors with well-known source are addressed
    and that they are addressable at all.

    Probably one could think of other criteria as well.

    Regards,
    Marco

  5. Re: DICOM Conformance Tests

    Marco,

    On Jul 23, 1:07 pm, Marco Eichelberg
    wrote:
    > Hi Mathieu,
    >
    > > I'll focus on this one, as I believe a file needs to be read before
    > > sending it. So this is where most errors can (and should) be caught.

    >
    > I would think that the generator of a DICOM object (like the imaging
    > modality) would in most cases directly create objects in memory.
    > Whether or not any standard DICOM file handling is involved before
    > an object is sent - typically over a DICOM Storage service - is
    > an implementation issue. The generator, however, is in most cases
    > the only system responsible for faulty objects. Everyone further down
    > the "food chain" only has the alternatives of failing to process
    > the faulty object or ignoring the error and, as a consequence, possibly
    > forwarding a faulty object (or part thereof).


    I agree. But the worst bugs I had to deal with are 3rd party
    applications (poor anonymizers, incorrect CP 246 interpretation,
    incomplete big endian to little endian conversion) adding bugs on top of
    existing bugs. It gets tricky to distinguish the original bug from the
    supplementary bugs added by post-processors "down the food chain".

    > The idea of what you call a "validating parser" was actually implemented
    > in a diagnostic workstation of one of the big vendors many years ago.
    > That system would check the conformance of an image when read, and would
    > refuse to display, process or forward it when errors were found. At
    > the same time another big vendor had all systems produce UIDs with
    > leading zeroes in a non-zero component, so the diagnostic workstation
    > was unable to display any image from that vendor (and most other DICOM
    > images as well, since most were faulty in one way or another). That
    > was certainly not an acceptable implementation strategy from the users'
    > perspective and was quickly changed.


    I didn't know that. But this '0' thing has come up a few times in this
    newsgroup, so now I understand where it all comes from.

    > What I am trying to say is: I believe that the success of DICOM is in
    > large part due to an implementation strategy that tries to be as lenient
    > as possible when reading objects and (hopefully) as strict as possible
    > when writing objects. At which point the leniency ends is of course
    > implementation dependent and can be subject of discussion.


    Hum... OK, I guess I never knew the medical field before DICOM, where
    every vendor had its proprietary format. So DICOM came along and said:
    you know what, you are all right; this is the way to do it, you just
    need to add a couple of bits here and there.

    > > I am just tired of broken DICOM files where it is not clear whether I
    > > am required to support them or not.

    >
    > Well, everyone is tired of broken DICOM files :-)
    > Formally you are not required to support any broken DICOM file. If you
    > decide to do so nevertheless, this is purely to make life easier to the users
    > of your tools. So there is no simple answer.


    My first task was simply to define what 'broken' meant. You tend to be
    application oriented, while I tend to be library oriented. Your goal,
    for a CT Image Storage instance, is to display it, while - from a library
    point of view - I need to give you access to the whole DICOM DataSet and
    leave it to the user to decide what he/she wants to do with it.

    > > Data Element (00e1,42,ELSCINT1) is an SQ element. Since most
    > > toolkits do not know this, parsing the explicit-length SQ as
    > > a binary blob goes very smoothly in all of them. I'd like
    > > this file (or something similar) to be archived somewhere with a red
    > > flag: 'your toolkit should not parse this file'.

    >
    > I disagree. Unless your application actually needs to interpret the
    > content of the private sequence, it would be a perfectly valid approach
    > to just ignore it, and thus succeed in processing the image.
    > A checker tool should of course "flag" the error, but a simple viewer,
    > for example, should just work.


    No! Unless you actually know what's in the SQ, how do you know you do
    not need it for visualization? I have even recently seen a CP about
    anonymization suggesting keeping some private tags (a Philips PET attribute
    if I remember correctly). Indeed, this example is relatively safe to
    ignore, but I am sure a lot of toolkits will break if a "well known"
    public SQ were sent in big endian inside a little endian dataset (even with
    an explicit data length).
    Again - I guess - you are seeing it from the app point of view, where you
    did not implement anything for this SQ, thus meaning it is safe to
    ignore...

    > > The same goes, for example, for a Data Element with VR=DS and a 32-bit
    > > length: it can be stored in Implicit VR but not in Explicit VR.

    >
    > And that's not even an error - that's a weakness of DICOM's design.
    > Somebody tried to save a few bits when creating the rules for Explicit VR
    > and caused a nightmare for everybody else years later. But it's not a fault.
    > There are objects that can be expressed in Implicit VR but not in Explicit
    > VR and there are objects that can be expressed in compressed transfer
    > syntax but not uncompressed (think of a 16-bit grayscale image
    > with 65535 x 65535 pixels). Unfortunately transfer syntaxes in DICOM are
    > not exchangeable and independent of OSI layer 7 as they should be according
    > to the OSI reference model.


    Hum... I thought you were always required to be able to send a dataset
    in the default transfer syntax. Basically you are saying that the
    negotiation should refuse the transaction when it knows the conversion
    is impossible? Is this documented anywhere?

    > >> Also, how would you define a "validating parser"?

    > > Simple things such as an 18-byte VR=AE data element. Most toolkits will
    > > 'parse' it, but should not.

    >
    > Again: A checker should flag this as an error, most applications should
    > ignore it, unless they actually need to do something with the data element
    > (such as copying it into an A-ASSOCIATE-RQ field where an 18-byte AE should
    > never occur of course).


    ok, this one is relatively simple to handle.

    > > Maybe I should narrow it down to a subset of severe errors, or those
    > > that most people have struggled with:

    >
    > Yes. Criteria could be
    > - faulty object has been created by a product "in the wild" (not by a prototype)
    > - faulty object has created a problem "in the wild" (e.g. viewer refused display)
    > - source of faulty object (device, version number) and the fault as such are well known
    > - a reasonable workaround exists
    >
    > Criterion one sorts out objects that will never be produced/seen by "real users".
    > Criterion two reduces the set to errors that have actually hurt someone.
    > Criterion three makes sure that only errors with well-known source are addressed
    > and that they are addressable at all.


    I'll get started with that then.

    It still makes me mad when people pay big bucks for equipment with
    the "DICOM" stamp on it... while in fact it is just producing garbage.

    Thanks,
    -Mathieu

  6. Re: DICOM Conformance Tests

    Hi Mathieu,

    > I agree. But the worst bugs I had to deal with are 3rd party
    > applications (poor anonymizers, incorrect CP 246 interpretation,
    > incomplete big endian to little endian conversion) adding bugs on top of
    > existing bugs. It gets tricky to distinguish the original bug from the
    > supplementary bugs added by post-processors "down the food chain".


    Good point, and indeed more difficult to handle than bugs from the original equipment.

    > My first task was simply to define what 'broken' meant. You tend to be
    > application oriented, while I tend to be library oriented. Your goal,
    > for a CT Image Storage instance, is to display it, while - from a library
    > point of view - I need to give you access to the whole DICOM DataSet and
    > leave it to the user to decide what he/she wants to do with it.


    Exactly. In more formal terms, one could talk about "conformance" vs.
    "interoperability". The starting point for my PhD thesis a couple of years
    ago was the observation that almost no DICOM object, no DICOM implementation,
    no DICOM interface I had ever seen was fully conformant to the DICOM standard
    (i.e. >95% of the images we had in our collection by that time failed DCMCHECK)
    but still DICOM worked remarkably well in clinical practice - at least for
    80% of the users. In scientific literature you often see the claim that
    conformance is a precondition for interoperability, and that the whole point
    of testing conformance is to increase the probability of interoperability of
    systems.
    However, the combination of "defensive" programming techniques and an extremely
    complex standard where hardly any implementation really supports all correct
    objects (think of segmented palette color or compressed images with multiple
    fragments per frame) leads to a situation where on one hand interoperability
    often works even if systems are not fully conformant and on the other hand
    conformance is not a guarantee for interoperability. So at least in the DICOM
    world you need to look at the "interoperability problem" directly, which is,
    for example, what IHE is doing.
    That is not an excuse for bad implementations of course, and I do see a value
    in careful validation and possibly even certification of DICOM conformance
    (which does not exist today), but one needs to be aware of the limitations
    of this. Interoperability, however, is always defined from an application
    perspective - it is the ability of two systems to exchange information and
    successfully use that information for a (user defined) purpose.

    > No! Unless you actually know what's in the SQ, how do you know you do
    > not need it for visualization?


    Because all that is needed for visualizing a 2D DICOM image is defined
    in the Image Pixel Module. Vendors are not allowed to change the meaning
    of the standard attributes, so as long as the image pixel module is present
    and complete, you have all information needed to display an image, no matter
    what the rest of the dataset contains. Private tags must not change the meaning
    of the pixel data (or any other standard attribute) - they can only provide
    additional information that an application may choose to use or to ignore.
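
    As an illustration of that point, here is a sketch of what a minimal
    2D viewer actually needs (pydicom is used purely as an example toolkit,
    not one discussed in this thread, and "image.dcm" is a placeholder):

    # Everything a minimal 2D viewer needs lives in the Image Pixel Module;
    # unknown private elements can simply be ignored without affecting display.
    import pydicom

    ds = pydicom.dcmread("image.dcm")
    print(ds.Rows, ds.Columns, ds.BitsAllocated,
          ds.SamplesPerPixel, ds.PhotometricInterpretation)
    pixels = ds.pixel_array   # decoded pixel matrix (requires NumPy)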

    > I have even recently seen a CP about
    > anonymization suggesting keeping some private tags (a Philips PET attribute
    > if I remember correctly). Indeed, this example is relatively safe to
    > ignore, but I am sure a lot of toolkits will break if a "well known"
    > public SQ were sent in big endian inside a little endian dataset (even with
    > an explicit data length).


    The problem with private tags and anonymization is, of course, that unless
    you know that a certain private tag does NOT contain patient identifying
    information, you have to assume that it does, and thus remove it.

    Your other example is indeed a severe bug, and most toolkits (including
    ours) will break. However, that is not a reason to blame those tools
    that successfully read such a faulty image for doing something wrong.

    > Hum... I thought you were always required to be able to send a dataset
    > in the default transfer syntax. Basically you are saying that the
    > negotiation should refuse the transaction when it knows the conversion
    > is impossible? Is this documented anywhere?


    I'm not saying that - this is your interpretation - and it's not documented
    anywhere. It is just a fact that some objects cannot be converted to the default
    transfer syntax, although the creation of such objects by real-world systems
    probably still lies a few years ahead.
    The only exception that is already in place is that default transfer syntax
    can be refused if the original image encoding available to the sender is
    lossy compressed (e.g. baseline JPEG, MPEG or the like).

    > It still makes me mad when people pay big bucks for equipment with
    > the "DICOM" stamp on it... while in fact it is just producing garbage.


    Ack. A couple of years ago a German radiologist even coined the term
    "DICOM-Schmerzensgeld" (which roughly translates to "compensation for
    DICOM related pain and suffering") and suggested that vendors should
    be required to compensate users for their DICOM-related pains :-)
    Indeed all of the Q/A mechanisms in place for medical devices (FDA
    in the US, Medical Device Directive in Europe) seem to hardly cover
    interface issues, which is a blind spot because for the end user the
    overall distributed system (consisting of multiple medical devices)
    will only work if the interfaces work as well.

    Best regards,
    Marco



  7. Re: DICOM Conformance Tests

    Marco,

    On Jul 24, 9:31 am, Marco Eichelberg
    wrote:
    > > No! Unless you actually know what's in the SQ, how do you know you do
    > > not need it for visualization?

    >
    > Because all that is needed for visualizing a 2D DICOM image is defined
    > in the Image Pixel Module. Vendors are not allowed to change the meaning


    The example I was thinking of is described in Supp 142 - Clinical
    Trials De-identification (page 33). Philips PET images have a 'hidden'
    slope/intercept. This may be a rare case, but it definitely impacts
    professional interpretation, or at least quantitative results, when not
    taken into account. I am also thinking of the SIEMENS medcom objects
    (apparently some kind of vectorial objects): when displayed, they'll
    introduce a bias for anyone interpreting the pixel data, IMHO.

    > > I have even recently seen a CP about
    > > anonymization suggesting keeping some private tags (a Philips PET attribute
    > > if I remember correctly). Indeed, this example is relatively safe to
    > > ignore, but I am sure a lot of toolkits will break if a "well known"
    > > public SQ were sent in big endian inside a little endian dataset (even with
    > > an explicit data length).

    >
    > The problem with private tags and anonymization is, of course, that unless
    > you know that a certain private tag does NOT contain patient identifying
    > information, you have to assume that it does, and thus remove it.
    >
    > Your other example is indeed a severe bug, and most toolkits (including
    > ours) will break. However, that is not a reason to blame those tools
    > that successfully read such a faulty image for doing something wrong.


    I am not trying to blame anyone here. What I am trying to do is define the
    behavior. All I am saying is that unless you have 100% of the
    dataset read/parsed properly, you cannot determine which part of the
    dataset you can discard.

    > > It still makes me mad when people pay big bucks for equipment with
    > > the "DICOM" stamp on it... while in fact it is just producing garbage.

    >
    > Ack. A couple of years ago a German radiologist even coined the term
    > "DICOM-Schmerzensgeld" (which roughly translates to "compensation for
    > DICOM related pain and suffering") and suggested that vendors should


    lol!

    > be required to compensate users for their DICOM-related pains :-)
    > Indeed all of the Q/A mechanisms in place for medical devices (FDA
    > in the US, Medical Device Directive in Europe) seem to hardly cover
    > interface issues, which is a blind spot because for the end user the
    > overall distributed system (consisting of multiple medical devices)
    > will only work if the interfaces work as well.


    Thanks,
    -Mathieu

  8. Re: DICOM Conformance Tests

    Just to broaden the context a bit, the single DICOM noncompliant
    implementation that has caused us the most problems in the field is
    the eFilm workstation. It will accept RLE-encoded data, but cannot
    negotiate with another AE to send the data implicit little-endian. We
    have had a number of customers who have had that awful eFilm in their
    network, and it has been a lobster trap -- data is sent to the
    workstation and other applications can't get the data back from it.

    This is a violation of the standard. eFilm is required to not accept
    any format that it can't turn into implicit little-endian if so
    required -- it must be able to negotiate for implicit little endian.
    Now, they document their nonconformance in their conformance [sic]
    statement. But they just aren't allowed to do it. The net result is
    that every application that is on a network including this foul
    application is broken. The only way we have found to ensure proper
    functioning of the network in the presence of eFilm is to disallow RLE
    for all applications.

    Notice that there is nothing broken in the data stream, so our biggest
    headache is not reflected in your desired database at all. But it
    would be nice to have a central location for such outrages. Who knows,
    it might actually help motivate vendors to fix their more egregious
    violations.

    --Tom Clune

  9. Re: DICOM Conformance Tests

    There is a DICOM "Hall of Shame" at the following location which might
    interest you: http://gdcm.sourceforge.net/wiki/ind...GDCM:Supported

  10. Re: DICOM Conformance Tests

    On Jul 25, 12:47 pm, kelly.dupasqu...@gmail.com wrote:
    > There is a DICOM "Hall of Shame" at the following location which might
    > interest you: http://gdcm.sourceforge.net/wiki/ind...GDCM:Supported


    My apologies, I see the link has already been referenced here. It is
    very interesting and informative and I suggest you check it out again!

  11. Re: DICOM Conformance Tests

    On Jul 25, 2:04 pm, kelly.dupasqu...@gmail.com wrote:
    > On Jul 25, 12:47 pm, kelly.dupasqu...@gmail.com wrote:
    >
    > > There is a DICOM "Hall of Shame" at the following location which might
    > > interest you: http://gdcm.sourceforge.net/wiki/ind...GDCM:Supported

    >
    > My apologies, I see the link has already been referenced here. It is
    > very interesting and informative and I suggest you check it out again!


    I guess I didn't properly communicate my point. There is nothing wrong
    with any file or stream created by eFilm AFAIK. That is exactly the
    point I wanted to make -- it is broken in a way that undermines DICOM
    communication without involving any parsing errors as such. The
    referenced page specifically limits itself to parsing problems, and my
    point was intended to convey the fact that, at least with us, the most
    disruptive problems do not involve parsing problems at all.

    --Tom Clune
