Pixel transformation pipeline. - DICOM


  1. Pixel transformation pipeline.

    In PS-3.4-2007, page 214, Figure N.2-1 shows the color transformation
    pipeline from stored value all the way up to display. For CT images,
    with grayscale stored pixel values, at what point in that pipeline are
    the values in HU? It fairly clearly says it's right after the Modality
    LUT Transformation, but I just want to make sure... sorry if it's a
    silly question!

    Secondly, let's say I were to write an application to view DICOM
    images. Should the VOI LUT Transformation -always- be applied? Or is
    the Window and VOI LUT information generally optional and adjustable
    by the user?

    So just to clear this up:

    - Modality LUT takes stored values and converts them to, say, HU.
    - VOI LUT takes HU and converts them to some sort of device-dependent
    pixel intensities, and is used for viewing the images when you want
    to, say, enhance a specific range of input values.
    - Presentation LUT takes these pixel values and converts them to ... P-
    values ...

    The line between VOI LUT and Presentation LUT seems kind of fuzzy to
    me, and I'm having a really hard time getting my head around the
    Presentation LUT. If the output of the Presentation LUT is device-
    independent color values, what is the purpose of the Presentation LUT?
    How does it differ from the VOI LUT? It seems that that could all be
    taken care of in the VOI LUT Transformation?

    Oh, another thing. PS 3.14-2007, page 9, Figure 6-1, shows a
    "Polarity" step that does not exist in the figure from PS 3.4. What is
    that, and which of the two figures is correct...?

    Final question: what is the true purpose of the Grayscale Standard
    Display Function? When should it actually be used? Its purpose seems
    to be to transform device-independent values into device-dependent
    values, but isn't that already taken care of by existing systems such
    as hardware drivers, or, say, applications that communicate with
    printers and take advantage of Pantone color matching, for example? Is
    the "GSDF" more of a recommendation...? It seems to go beyond the
    scope of the standard, and I'm a little unclear as to how I should
    handle it in an application.

    Again, thanks for all your help guys, I'm glad this mailing list
    exists and is so active.
    Jason


  2. Re: Pixel transformation pipeline.

    Jason wrote:
    > In PS-3.4-2007, page 214, Figure N.2-1 shows the color transformation
    > pipeline from stored value all the way up to display. For CT images,
    > with grayscale stored pixel values, at what point in that pipeline are
    > the values in HU? It fairly clearly says it's right after the Modality
    > LUT Transformation, but I just want to make sure... sorry if it's a
    > silly question!


    After the Modality LUT Transformation, i.e. the application of either
    rescale slope and intercept or the Modality LUT as such, whichever is
    present.
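
    For CT this is typically just the linear rescale. As a minimal sketch
    (Python/NumPy, illustration only, not DCMTK code; the attribute values
    are passed in as plain parameters):

        import numpy as np

        def apply_modality_lut(stored, rescale_slope, rescale_intercept):
            # Linear Modality LUT: output = RescaleSlope * SV + RescaleIntercept.
            # For CT the result is in Hounsfield Units.
            return stored.astype(np.float64) * rescale_slope + rescale_intercept

        # e.g. a common CT encoding: RescaleSlope = 1, RescaleIntercept = -1024
        sv = np.array([0, 1024, 2048], dtype=np.uint16)
        hu = apply_modality_lut(sv, 1.0, -1024.0)   # -> [-1024., 0., 1024.]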

    > Secondly, let's say I were to write an application to view DICOM
    > images. Should the VOI LUT Transformation -always- be applied? Or is
    > the Window and VOI LUT information generally optional and adjustable
    > by the user?


    Take the VOI LUT as a non-linear alternative to Window Center and Width.
    It will most probably provide a good default display, but not more than that.
    Like with Window Center and Width, the user might want to change the
    display settings nevertheless. Note that an image can also contain
    multiple VOI LUTs (or VOI LUTs plus window center/width pairs), indicating
    that multiple viewing settings are available.
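
    Hypothetically, in code terms: Window Center (0028,1050) and Window
    Width (0028,1051) may be multi-valued, so a viewer can offer the pairs
    as presets and apply the first one as the initial setting:

        # hypothetical preset handling in a viewer
        presets = list(zip([40.0, 400.0], [400.0, 1800.0]))  # e.g. soft tissue, bone
        center, width = presets[0]   # reasonable default; the user may switch or adjust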

    > - VOI LUT takes HU and converts them to some sort of device-dependent
    > pixel intensities, and is used for viewing the images when you want
    > to, say, enhance a specific range of input values.


    The output of the VOI LUT is not device dependent. Like Window Center and
    Width, the VOI LUT transformation basically selects a subset of the
    available contrast range for display.
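
    For the linear case (Window Center/Width), the equation from PS 3.3 can
    be sketched like this (Python/NumPy, illustration only; mapping to an
    8-bit output range is an arbitrary choice here):

        import numpy as np

        def apply_window(x, center, width, y_min=0.0, y_max=255.0):
            # Linear VOI transformation: values below the window clip to y_min,
            # values above clip to y_max, values inside are scaled linearly.
            x = np.asarray(x, dtype=np.float64)
            y = ((x - (center - 0.5)) / (width - 1.0) + 0.5) * (y_max - y_min) + y_min
            return np.clip(y, y_min, y_max)

        # typical soft-tissue window applied to HU input: center 40, width 400
        print(apply_window(np.array([-1000, 40, 500]), 40, 400))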

    > - Presentation LUT takes these pixel values and converts them to ... P-
    > values ...


    Exactly. Note that in most cases the Presentation LUT is an IDENTITY
    transformation, i.e. the output of the VOI LUT is already in P-values.
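
    In code terms (again an illustration, not DCMTK): when no Presentation
    LUT is present, or Presentation LUT Shape is IDENTITY, the step is a
    pass-through; otherwise it is a table lookup:

        import numpy as np

        def apply_presentation_lut(voi_output, lut_data=None, first_mapped=0):
            # No Presentation LUT (or shape IDENTITY): the VOI output already
            # consists of P-values.
            if lut_data is None:
                return voi_output
            # Otherwise look the values up in the Presentation LUT data.
            idx = np.clip(np.round(voi_output).astype(int) - first_mapped,
                          0, len(lut_data) - 1)
            return np.asarray(lut_data)[idx]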

    > The line between VOI LUT and Presentation LUT seems kind of fuzzy to
    > me, and I'm having a really hard time getting my head around the
    > Presentation LUT. If the output of the Presentation LUT is device-
    > independent color values, what is the purpose of the Presentation LUT?
    > How does it differ from the VOI LUT? It seems that that could all be
    > taken care of in the VOI LUT Transformation?


    The Presentation LUT, when non-trivial (i.e. a real LUT), can be
    understood as a kind of gamma correction: a non-linear transformation
    that improves image perception but is independent of the VOI LUT
    settings, i.e. it always applies the same change to dark, medium and
    bright pixels.

    It would indeed always be possible to combine a Presentation LUT and a VOI LUT
    into one common LUT (actually DCMTK for performance reasons internally computes
    a single LUT that applies VOI LUT, Presentation LUT, and GSDF calibration in one
    step).
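
    As a rough sketch of that idea (my own illustration, not the actual
    DCMTK code), such a combined table could be built once per display
    setting and then applied to every pixel with a single lookup:

        import numpy as np

        def build_combined_lut(n_entries, offset, center, width,
                               presentation_lut=None, calibration=None):
            # Table indexed by (Modality LUT output + offset), e.g. HU values
            # shifted into the range 0 .. n_entries-1.
            x = np.arange(n_entries, dtype=np.float64) - offset
            # 1. VOI: window center/width -> normalized values in 0..1
            v = np.clip((x - (center - 0.5)) / (width - 1.0) + 0.5, 0.0, 1.0)
            # 2. Presentation LUT: map to P-values (identity if none present)
            p = presentation_lut(v) if presentation_lut else v
            # 3. Calibration: map P-values to device driving levels (GSDF);
            #    a trivial 8-bit quantization stands in for it here.
            return calibration(p) if calibration else np.round(p * 255).astype(np.uint8)

        # usage: lut = build_combined_lut(4096, 1024, 40, 400)
        #        displayed = lut[hu_image.astype(int) + 1024]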

    However, the beauty of Presentation LUTs as a separate "object" is that
    they can be re-used for hardcopy generation and allow for display
    consistency between hardcopy and softcopy images. Many vendors have
    specific proprietary "display curves" that their modalities apply to
    image data before the data is printed on hardcopy. When these display
    curves are converted into a Presentation LUT that is, so to speak,
    relative to a GSDF-calibrated output medium, the same curve can be used
    for monitors and for printers, and the result is consistent.
    The DICOMscope viewer from our group, for example, comes with one sample
    Presentation LUT that was kindly provided to us by Philips Medical Systems and
    emulates the appearance of images directly printed from a Philips Modality
    to a printer supporting the proprietary Philips display curve.
    Using this LUT as part of a GSPS makes image appearance on screen comparable
    to that on film, and also makes the display curve available on
    every printer that supports the Presentation LUT SOP Class.

    > Oh, another thing. PS 3.14-2007, page 9, Figure 6-1, shows a
    > "Polarity" step that does not exist in the figure from PS 3.4. What is
    > that, and which of the two figures is correct...?


    The DICOM Print Management Service Class describes a separate Polarity
    transformation that does not exist in the softcopy pipeline.
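
    As a sketch of that step (Polarity is an attribute of the print Image
    Box; REVERSE simply flips the grayscale so that the minimum input
    becomes the maximum output):

        def apply_polarity(p_values, polarity="NORMAL", p_max=255):
            # NORMAL leaves the P-values untouched, REVERSE inverts them.
            return p_max - p_values if polarity == "REVERSE" else p_values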

    > Final question is, what is the true purpose of the Grayscale Standard
    > Display Function?


    Calibrate displays, make the output of different monitors and printers
    consistent, i.e. directly comparable within the physical limitations of
    the display devices.

    > isn't that already taken care of by existing systems such
    > as hardware drivers, or, say, applications that communicate with
    > printers and take advantage of Pantone color matching, for example? Is
    > the "GSDF" more of a recommendation...?


    GSDF is for monochrome displays only and is indeed implemented in certain
    vendors' graphics controllers and display drivers, and also by most DICOM
    printers. Under German law, for example, diagnostic monitors are required
    to be calibrated using either the DICOM GSDF or the CIELAB display
    function.
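
    If you do want to handle it in application software, the usual approach
    is: measure the display's luminance at each driving level, then build a
    correction table so that equal P-value steps correspond to equal steps
    in JND index along the GSDF. A rough sketch (Python/NumPy; the GSDF
    coefficients below are the ones tabulated in PS 3.14, quoted here from
    memory, so verify them against the standard):

        import numpy as np

        # GSDF: luminance as a function of the JND index j (PS 3.14)
        A, B, C, D, E = -1.3011877, -2.5840191e-2, 8.0242636e-2, -1.0320229e-1, 1.3646699e-1
        F, G, H, K, M = 2.8745620e-2, -2.5468404e-2, -3.1978977e-3, 1.2992634e-4, 1.3635334e-3

        def gsdf_luminance(j):
            # Luminance in cd/m^2 for JND index j (1 <= j <= 1023).
            lj = np.log(j)
            num = A + C*lj + E*lj**2 + G*lj**3 + M*lj**4
            den = 1 + B*lj + D*lj**2 + F*lj**3 + H*lj**4 + K*lj**5
            return 10.0 ** (num / den)

        def calibration_lut(measured_luminance, n_pvalues=256):
            # measured_luminance: luminance per driving level of the display,
            # in ascending order.  Returns, for every P-value, the driving
            # level whose luminance best matches a perceptually linear ramp.
            j = np.arange(1, 1024)
            lum_of_j = gsdf_luminance(j)
            j_min = np.interp(measured_luminance[0], lum_of_j, j)
            j_max = np.interp(measured_luminance[-1], lum_of_j, j)
            target_lum = np.interp(np.linspace(j_min, j_max, n_pvalues), j, lum_of_j)
            levels = np.arange(len(measured_luminance))
            return np.round(np.interp(target_lum, measured_luminance, levels)).astype(int)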

    Regards,
    Marco Eichelberg
    OFFIS

  3. Re: Pixel transformation pipeline.

    Thanks a -lot- for your detailed reply; that answers pretty much
    everything I was wondering about. Just a couple of minor things I'm
    still a little shaky on:

    > Using this LUT as part of a GSPS makes image appearance on screen comparable
    > to that on film, and also makes the display curve available on
    > every printer that supports the Presentation LUT SOP Class.


    So, if I have this right, you could take, say (contrived example), a
    Presentation LUT that represents the display curve of my cheap old
    Sanyo television, apply it to the image data, and then no matter where
    you viewed it, it would look the way it does on the Sanyo TV? So the
    purpose of the Presentation LUT is to let you view images as they
    would appear on a different piece of hardware, so that you could look
    at an image on your monitor and have it look just like it would on a
    light box or something. And if you have, say, some piece of equipment
    that generates DICOM images and has a monitor on it, it might write
    out a Presentation LUT representing the display curve of that monitor
    into the files it generates, so that when you view them anywhere else,
    they look just like they did on the monitor of the original equipment?

    > GSDF is for monochrome displays only and is indeed implemented in certain
    > vendors' graphics controllers and display drivers, and also by most DICOM
    > printers. Under German law, for example, diagnostic monitors are required
    > to be calibrated using either the DICOM GSDF or the CIELAB display
    > function.


    I see, so the GSDF is more of a guideline for presentation hardware
    vendors? Do application developers need to be concerned with it at
    all, then? You mentioned that DCMTK applies the GSDF transformation to
    the data... is that required? Or do you do that only as an option, to
    account for display devices that -aren't- GSDF-calibrated?

    > OFFIS


    And, thank OFFIS for writing such a great toolkit, by the way!

    -Jason

