
Thread: Rendering of Slice Thickness and Inter-slice gaps.

  1. Rendering of Slice Thickness and Inter-slice gaps.

    In the context of MPR...

    1. I've noticed that many vendors/applications (e.g. OsiriX, Siemens'
    SynGo) use some kind of smoothing filter or interpolation algorithm
    (trilinear?) to display a 'good looking' final result, without any
    jaggies.

    However, is this the correct decision?
    Isn't it better to simply replicate the slice plane t/p times (where t
    is the Slice Thickness and p the Pixel Spacing)? (A small sketch of
    this appears after point 3 below.)

    Though this would give rise to the so-called ugly jaggies (see
    http://simonsharry.blogspot.com/), it would at least preserve the
    original data and display it 100% faithfully, leading to a correct
    diagnosis (e.g. the original Hounsfield Units in the case of CT would
    remain intact). We are in the medical imaging domain... and not the
    'generic' computer graphics and image processing domain, where the
    user would expect the final rendered 'scene' to be visually appealing
    at all costs!

    2. A related question. Although most modality operators would, I
    speculate, capture slightly overlapping slices to avoid introducing
    inter-slice gaps (e.g. 8 mm slices with 3 mm overlap), let's say the
    operator mistakenly introduces (in)significant gaps between the
    slices. In such a situation, should the 3D diagnostic features (such
    as MPR, Volume Rendering, ...) go ahead and display these gaps for
    diagnostic accuracy, or is the expectation in the medical community,
    once again, to see visually appealing, fudged-up results (using
    trilinear interpolation or other sophisticated algorithms)?

    3. Is there any free / open-source / easy-to-use DICOM file editor
    that you would recommend... that would allow editing of the various 3D
    orientation and position related fields for creating artificial test
    data?
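
    For concreteness, here is a minimal sketch of the replication idea
    from point 1 (my own illustration in Python/NumPy, not taken from any
    particular viewer): each acquired slice is simply repeated round(t/p)
    times along the slice axis, so every displayed voxel keeps a stored
    value (e.g. an original HU in CT).

        import numpy as np

        def replicate_slices(slices, slice_thickness_mm, pixel_spacing_mm):
            """Build a volume for MPR by repeating each slice t/p times
            along the slice axis, so no displayed value is interpolated."""
            reps = max(1, int(round(slice_thickness_mm / pixel_spacing_mm)))
            # slices has shape (num_slices, rows, cols)
            return np.repeat(slices, reps, axis=0)

        # Example: 8 mm slices, 0.5 mm in-plane pixel spacing -> each slice
        # is repeated 16 times, giving the 'blocky' (jaggy) MPR appearance.
        volume = replicate_slices(np.zeros((10, 512, 512), np.int16), 8.0, 0.5)
        print(volume.shape)   # (160, 512, 512)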

    Please advise. Many, many thanks.

    Harry


  2. Re: Rendering of Slice Thickness and Inter-slice gaps. [Osirix] default interpolation

    On Feb 18, 3:39 am, "Harry" wrote:
    > I've noticed that many vendors/applications (e.g. OsiriX, Siemens'
    > SynGo) use some kind of smoothing filter or interpolation algorithm
    > (trilinear?) to display a 'good looking' final result, without any
    > jaggies.
    >
    > However, is this the correct decision?
    > Isn't it better to simply replicate the slice plane t/p times (where t
    > is the Slice Thickness and p the Pixel Spacing)?


    I have not looked at the OsiriX source code, but my guess is that they
    are using vtkImageReslice. You can choose the type of interpolation
    you want on this class, namely:
    - Nearest neighbor
    - Linear interpolation
    - Cubic interpolation

    I don't think OsiriX is using cubic interpolation; that would be way
    too slow. Anyway, I am pretty sure that somewhere in the menu you can
    choose not to do any interpolation at all. Someone from OsiriX, please
    correct me if I'm wrong.
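
    If it really is vtkImageReslice (again, just a guess about OsiriX),
    choosing the interpolation mode looks roughly like this with VTK's
    Python bindings; the DICOM directory and reslice axes below are
    made-up placeholders:

        import vtk

        # Load a CT series (directory path is a placeholder).
        reader = vtk.vtkDICOMImageReader()
        reader.SetDirectoryName("/path/to/ct/series")
        reader.Update()

        # Extract a single orthogonal/oblique MPR plane from the volume.
        reslice = vtk.vtkImageReslice()
        reslice.SetInputConnection(reader.GetOutputPort())
        reslice.SetOutputDimensionality(2)
        reslice.SetResliceAxesDirectionCosines(1, 0, 0,   # output x axis
                                               0, 0, -1,  # output y axis
                                               0, 1, 0)   # slice normal
        reslice.SetResliceAxesOrigin(0.0, 0.0, 0.0)       # placeholder origin

        # Pick exactly one of the interpolation modes vtkImageReslice offers:
        reslice.SetInterpolationModeToNearestNeighbor()  # blocky, stored values only
        # reslice.SetInterpolationModeToLinear()         # trilinear smoothing
        # reslice.SetInterpolationModeToCubic()          # tricubic, slower
        reslice.Update()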

    -Mathieu


  3. Re: Rendering of Slice Thickness and Inter-slice gaps.

    On Feb 18, 12:39 am, "Harry" wrote:
    > it would at least preserve the
    > original data and display it 100% faithfully, leading to a correct
    > diagnosis

    Tri-linear, tri-cubic, etc. interpolations _do_ preserve the original
    pixels intact; the only "gaps" are those filled in with interpolated
    values. The "jaggies" technique you point to is just the result of
    nearest-neighbor interpolation, the most primitive of the
    interpolation techniques; it makes images look blocky, which does not
    help in making the right diagnosis.
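
    A tiny numerical illustration of that claim (my own sketch, 1-D along
    the slice axis only): linear interpolation onto a finer grid returns
    the stored values unchanged wherever an output position coincides with
    an original slice position.

        import numpy as np

        # Stored HU values of four slices spaced every 8 mm along the axis.
        slice_positions = np.array([0.0, 8.0, 16.0, 24.0])
        stored_hu = np.array([40.0, 55.0, -10.0, 300.0])

        # Resample onto a 1 mm grid with linear interpolation.
        fine_positions = np.arange(0.0, 25.0, 1.0)
        resampled = np.interp(fine_positions, slice_positions, stored_hu)

        # At the original slice positions the values are exactly the stored
        # ones; only the positions in between receive interpolated values.
        for pos, hu in zip(slice_positions, stored_hu):
            assert resampled[int(pos)] == hu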


  4. Re: Rendering of Slice Thickness and Inter-slice gaps.

    On Feb 19, 5:17 am, "stefanba...@yahoo.com" wrote:
    > Tri-linear, tri-cubic, etc. interpolations _do_ preserve the original
    > pixels intact; the only "gaps" are those filled in with interpolated
    > values. The "jaggies" technique you point to is just the result of
    > nearest-neighbor interpolation, the most primitive of the
    > interpolation techniques; it makes images look blocky, which does not
    > help in making the right diagnosis.


    I'm not sure I fully understand. What I meant to say was this.
    1. In any of the above interpolation schemes, I believe the original
    pixel data would amount to just a one-pixel-wide line (viewed edge-on
    in the reformatted plane), and the *bulk* of the data (corresponding
    to the thickness of the slice) would be interpolated. Further, in the
    final result the user wouldn't know where this line of original data
    lies... so the feature where a user hovers the mouse over a CT image
    to read HU values would return non-original, interpolated values over
    the majority of the image region. I'm not a medical expert, just a
    software guy, so I don't know whether this could lead to a wrong
    diagnosis; for example, whether these non-original, interpolated
    values could trigger an unnecessary emergency (brain) surgery.

    2. Now, for real inter-slice "gaps", it may (or, by the above
    argument, may not) make sense to interpolate, because no data actually
    exists in those places.

    I'm assuming thick (8 mm+) slices here. Obviously, if the total
    interpolated region is only a small fraction of the total image area
    (because of ultra-thin slices and/or no inter-slice gaps), replication/
    nearest-neighbor wouldn't produce any perceivable jaggies anyway, with
    the additional merit that the original values are preserved throughout.
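
    To put rough numbers on that (a back-of-the-envelope sketch of my
    own): when the MPR is resampled along the slice axis at the in-plane
    pixel spacing with interpolation, only about one displayed row per
    slice coincides with an original slice position, so the fraction of
    rows showing stored values is roughly p / (t + gap). Replication, by
    contrast, keeps stored values everywhere but looks blocky.

        def original_fraction(slice_thickness_mm, pixel_spacing_mm, gap_mm=0.0):
            """Rough fraction of MPR rows along the slice axis that coincide
            with original slice positions when resampling with interpolation
            at the in-plane pixel spacing."""
            rows_per_slice = (slice_thickness_mm + gap_mm) / pixel_spacing_mm
            return 1.0 / rows_per_slice

        # Thick slices: 8 mm thickness, 0.5 mm pixel spacing -> ~6% of rows
        # show stored values; the rest are interpolated.
        print(original_fraction(8.0, 0.5))   # 0.0625
        # Thin slices: 0.6 mm thickness, 0.5 mm spacing -> ~83% of rows,
        # so the choice of interpolation matters much less.
        print(original_fraction(0.6, 0.5))   # ~0.833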

    It seems the application should give the user the choice:
    interpolation or no interpolation, and the interpolation type. The
    user can initially go by visual appeal and double-check against the
    original data in the end.

    Any comments, anybody?


  5. Re: Rendering of Slice Thickness and Inter-slice gaps.

    > However, is this the correct decision?
    > Isn't it better to simply replicate the slice plane t/p times (where t
    > is the Slice Thickness and p the Pixel Spacing)?


    I think that 'better' really depends on what you want to do. I wrote
    a 3D (as in MPR, mostly) visualization application called NeuroLens
    for functional MRI. In our app I felt it was important to provide both
    options: the 'blocky' depiction, which accurately shows pixel and
    slice boundaries, and the smooth, interpolated version, which
    generally allows you to make a better judgement of what the underlying
    structure is.

    The blocky view is important because sometimes you really do need to
    know what the actual partial volume effects are. We do a lot of image
    fusion between very low-resolution functional MRI datasets (3 x 3 x 4
    mm voxels are not uncommon, and sometimes even bigger) and
    higher-resolution structural scans. Often the mapping of the low-res
    activation data onto a high-res scan looks nice (for this we do a
    linear interpolation by default), but I'm constantly having to remind
    users that a single slice from the low-res scan can cover a lot of
    tissue.
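
    For a sense of scale (my numbers, assuming a typical ~1 mm isotropic
    structural scan, which is not stated above):

        # Volume of one functional voxel vs. one structural voxel (mm^3).
        functional_voxel = 3.0 * 3.0 * 4.0   # e.g. 3 x 3 x 4 mm fMRI
        structural_voxel = 1.0 * 1.0 * 1.0   # e.g. 1 mm isotropic anatomical

        # One functional voxel covers roughly this many structural voxels,
        # which is why a smoothly interpolated overlay can be misleading.
        print(functional_voxel / structural_voxel)   # 36.0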

    The interpolated view always gives a subjectively better view of the
    actual structure, though. While you might think that the blocky pixels
    are more 'true', you could also argue the opposite: that displaying
    little squares (pixels) of varying intensity introduces information
    that is artificial. If you think of what you are doing (in MR, anyway)
    in terms of sampling, you are really multiplying the 'true' intensity
    distribution (after convolving with the 3D PSF) by a 2D (or 3D) array
    of deltas. What this represents is a bandlimited image (which
    definitely does not consist of a lot of square pixels), and the
    'correct' way (as in not introducing spatial frequencies that were not
    supported by the original sampling) to display it on a finer sampling
    grid is sinc interpolation, which preserves the original pixel values
    (each sinc kernel is 1 at its own sample position and has zero
    crossings at every other sample position) and gives smoothly
    interpolated values in between. In practice, linear and cubic
    interpolation look pretty close to sinc interpolation in most cases,
    and in this sense I think you could argue that the interpolated values
    give a truer depiction of the blurry object that is supported by your
    sampling.
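
    A small sketch of that argument (mine, 1-D and using a finite sum, so
    it only approximates true Whittaker-Shannon interpolation near the
    edges): sinc interpolation reproduces the stored samples exactly at
    the sample locations and fills in a smooth, bandlimited curve in
    between.

        import numpy as np

        def sinc_interpolate(samples, t_out, dt=1.0):
            """Whittaker-Shannon interpolation of uniformly spaced samples
            (finite sum, so values near the edges are approximate)."""
            n = np.arange(len(samples))
            # Each output point is a sum of sincs centred on the samples.
            return np.array([np.sum(samples * np.sinc((t - n * dt) / dt))
                             for t in t_out])

        samples = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0])
        fine_t = np.linspace(0.0, 7.0, 71)          # 0.1-sample steps
        smooth = sinc_interpolate(samples, fine_t)

        # At the original sample positions the interpolated curve equals the
        # samples, because sinc(t - n) is 1 at t = n and 0 at other integers.
        print(np.allclose(smooth[::10], samples))   # True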

    Hope this makes some sense,

    Rick



