Mammography Images and Workflow Issues, was Mammography CAD SR, for Processing v.s. for Presentation
Following up on the earlier thread on this subject, I wanted
to make a couple of points, along with some others based on user
feedback during a recent site visit, that I thought were relevant
to this thread and of general interest.
The site in question was the Elizabeth Wende Breast Clinic (EWBC)
in Rochester, NY, where they do nothing but breast imaging and
have digital detectors from all the vendors, as well as CAD systems.
Images of the FOR PRESENTATION SOP Classes are intended for display,
and hence have a fully defined grayscale pipeline, and all display
systems are expected to a) be calibrated, or at least interpret the
output in P-Values, and b) be able to apply any window values or
table of data in a VOI LUT to the pixel values. Whilst it is true
that DICOM does not explicitly say what a display system "shall do"
in the context of an image (as opposed to presentation state) storage
SCP, that is the intent.
Specifically, display SCPs ignore any VOI LUT present at their peril -
users will be dissatisfied if the VOI LUT is not applied and the
images look much different from how they look on the acquisition
system.
In addition, there is now, as of CP 467, the possibility that
the window values may be intended to be interpreted in a manner
other than linear. Specifically, that CP defines a SIGMOID
shape. Though the window values can be
used to create a displayable image with a linear ramp, if
a sigmoid shape is intended, some users will be dissatisfied
if the display system does not apply it that way (especially
important for those making 3rd party primary mammo reading stations
to be aware of this). See:
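For concreteness, the SIGMOID function that CP 467 adds maps a stored
value through a sigmoid centered on Window Center and scaled by Window
Width; my understanding of the defined formula is
output = range / (1 + exp(-4 * (x - center) / width)). A minimal Python
sketch (the function name and the normalized output range are mine):

```python
import math

def sigmoid_voi(stored_value, window_center, window_width,
                out_min=0.0, out_max=1.0):
    # SIGMOID VOI LUT Function: a smooth S-shaped ramp rather than the
    # default LINEAR one; the value at Window Center maps to mid-gray.
    out_range = out_max - out_min
    return out_min + out_range / (
        1.0 + math.exp(-4.0 * (stored_value - window_center) / window_width))
```

A display that blindly applies these window values as a linear ramp will
produce noticeably different contrast from what the acquisition station
intended, which is exactly the user dissatisfaction described above.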
Now, as far as displaying FOR PROCESSING SOP Classes, whilst
it is no doubt possible to display such images, it is
specifically intended that they NOT BE DISPLAYABLE.
That is why in the Mammo (and other DX) IODs, the VOI LUT Module is
specified as "Required if Presentation Intent Type (0008,0068) is FOR
PRESENTATION. Shall not be present otherwise." This condition on the
Module interacts with the conditions defined for the specific
attributes included in the DX Image Module, but there is no "may be
present otherwise" in there either, hence the attributes should not
be sent except for FOR PRESENTATION.
In other words IT IS ILLEGAL for Window Center and Width to be present
in a FOR PROCESSING SOP Class !
In general, and certainly in the mammo case, vendors' FOR PROCESSING SOP
Instances are not readable without significant alteration of the
contrast (on a point function basis or not), and a linear pair of
window values won't be sufficient.
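This is easy to check mechanically. A sketch of such a validation,
assuming the instance has been parsed into a simple
(group, element) -> value mapping (the function name and the
representation are mine, not any particular toolkit's):

```python
def check_voi_condition(dataset):
    """Flag VOI LUT attributes that must not be present when
    Presentation Intent Type (0008,0068) is FOR PROCESSING."""
    intent = dataset.get((0x0008, 0x0068))
    if intent != "FOR PROCESSING":
        return []
    voi_tags = {
        (0x0028, 0x1050): "Window Center",
        (0x0028, 0x1051): "Window Width",
        (0x0028, 0x3010): "VOI LUT Sequence",
    }
    # Any of these present in a FOR PROCESSING instance is a violation.
    return [name for tag, name in voi_tags.items() if tag in dataset]
```

For example, a FOR PROCESSING instance carrying (0028,1050) would be
reported as having an illegal "Window Center".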
Also, it is important to make the distinction between FOR PRESENTATION
and DERIVED versus ORIGINAL data.
As I thought we had spelled it out in words of one syllable and a
picture in PS 3.3 C.8.11.1.1.1 Presentation Intent Type, FOR
PRESENTATION should normally be ORIGINAL, and only if further
processing than is necessary to make the images viewable is performed,
should they be DERIVED (e.g. some edge enhancement applied to the FOR
PRESENTATION images to make a new image).
I mention this because I notice that several vendors are setting Image
Type to DERIVED, and this is almost certainly not what is intended.
It also appears that some vendors are not allowing their Hanging
Protocols to be driven by the value of Presentation Intent Type, so I
suspect that keying off an Image Type of DERIVED may be a hokey
workaround here to distinguish FOR PRESENTATION from FOR PROCESSING.
This is probably not the right solution.
Also, with regard to the very important CAD scenario that was
mentioned, CAD performed on FOR PROCESSING but marks displayed
on FOR PRESENTATION, and the risk of spatial transformation (even
as simple as a flip or matrix size change, without any sort of
deformation) between the two, invalidating the location of the CAD
marks, I proposed a CP (that WG 15 has cleaned up) to address this
subject. See CP 564 at:
On the subject of flips, I gather that as another source of user
dissatisfaction on display workstations, whether they be from third
parties or from the mammo acquisition system vendor, the hanging
protocols are in many cases not able to take advantage of the mandatory
patient orientation information to arrange the images (flipped if
necessary) the way the radiologist wants.
Specifically, it is often assumed that the encoding of the pixel data
is in the same row/column order as is wanted for display. Sometimes,
this is not the case, either because the vendor does it their way for
a reason, or the user wants something different from the most common
convention.
Regardless, as long as the acquisition vendor is correctly encoding
Image Laterality (0020,0062) and Patient Orientation (0020,0020), as
they all are to my knowledge, then regardless of how the pixels are
encoded the hanging protocols should be able to flip the images as
necessary to the desired orientation. That is precisely why we made
these attributes mandatory in the Mammo IOD.
In other words, if you have a right MLO and a left MLO image, and the
encoded orientation for both is A\F (rows oriented anteriorly and
columns orientated to the feet for both), and the radiologist wants
them displayed "back to back", then one has to be flipped, e.g. the
right MLO might need to be displayed P\F.
In my opinion, it is the display workstation's job to describe the
desired hanging orientations for each pattern of displayed sets of CC
and MLO views, and for it to match those against the encoded
orientations and lateralities, and perform the flipping as necessary.
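As a sketch of how that matching might work: a horizontal (left-right)
flip reverses the row direction of Patient Orientation letter by letter
(A and P swap, L and R swap, H and F swap) and leaves the column
direction alone. The function names here are mine:

```python
# Opposite anatomical direction for each Patient Orientation letter.
FLIP = {"A": "P", "P": "A", "L": "R", "R": "L", "H": "F", "F": "H"}

def flip_row_direction(orientation):
    """Return Patient Orientation (0020,0020) after a horizontal
    (left-right) flip: the row direction reverses letter by letter,
    the column direction is unchanged.  Encoded as row\\column."""
    row, col = orientation.split("\\")
    flipped_row = "".join(FLIP[letter] for letter in row)
    return flipped_row + "\\" + col

def needs_horizontal_flip(encoded, desired):
    """True if the encoded image must be flipped left-right to hang
    in the desired orientation (same column direction assumed)."""
    return flip_row_direction(encoded) == desired
```

So an encoded right MLO of A\F needs a flip to hang back to back as
P\F, exactly as in the example above, and the same rule handles the
A\FR and A\FL variants mentioned below.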
One could imagine doing this as a combinatorial expansion of the
possible values for the Image Laterality and row and column Patient
Orientation "triplet", but it might be better to have a hanging
protocol that "understood" the concept of orientation. Also remember
that some vendors will describe MLOs, which are oblique after all,
as A\FR or A\FL (rows oriented anteriorly and columns orientated to
the feet and right for a left MLO, and to the feet and left for a
right MLO), rather than simply A\F, which more accurately describes
a lateral, not an oblique.
Anyway, it is highly undesirable to be instead keying off manufacturers'
names and making assumptions about how the vendors happen to encode
their pixel data (today); this unreliable approach seems to be the
state of the art, if this matter is dealt with at all.
Also on the subject of hanging protocols for mammo, another feature
I am told is desirable is for the images to abut each other when shown
"back to back", rather than to be centered in their own window as might
normally be the case, if the pixel data array happens to be a different
shape from the frame in which it is displayed. This sort of automatic
left or right shift or justification is predictable in the hanging
protocol engine if it knows the orientation and laterality and user's
preference, as it can and probably should.
Similarly, when comparing prior studies, perhaps performed on different
size sensors or different vendors sensors, or even with scanned film,
users like the images to be displayed the same size, not necessarily
"true size" (in which the physical distance on the screen matches the
physical distance on the sensor), but rather "same size", such that the
distance in one image from one exam is the same as in the other exam.
This can be achieved by choosing appropriate magnification factors
based on Imager Pixel Spacing (mandatory, always present and hopefully
accurate).
The next trick is to then set the overall magnification factor to be
that which fits the "largest" breast to the full area of the display
window available, so that the user doesn't have to manually zoom
everything every time a new set of images is displayed, since many
breasts are smaller than the sensor and the full encoded pixel data
array. "Fit to screen real estate" as one user described it to me.
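A sketch of how both behaviors combine, assuming all that is available
is Imager Pixel Spacing and the matrix size of each image (the function
name is mine, and a real implementation would measure the breast rather
than the full pixel array):

```python
def display_zooms(pixel_spacings, matrix_sizes, viewport):
    """Compute one zoom factor per image so that (a) all images are
    shown at the same physical scale ("same size") and (b) the largest
    physical extent just fits the viewport ("fit to screen real estate").

    pixel_spacings: Imager Pixel Spacing (mm/pixel) per image
    matrix_sizes:   (rows, columns) per image
    viewport:       (height_px, width_px) of one display frame
    """
    # Physical extent of each image in mm.
    extents = [(rows * s, cols * s)
               for s, (rows, cols) in zip(pixel_spacings, matrix_sizes)]
    max_h = max(h for h, _ in extents)
    max_w = max(w for _, w in extents)
    # One mm-to-screen-pixel factor that fits the largest extent.
    mm_to_px = min(viewport[0] / max_h, viewport[1] / max_w)
    # Same physical scale for every image: zoom = screen px per image px.
    return [s * mm_to_px for s in pixel_spacings]
```

Because a single mm-to-pixel factor is shared, a given distance measures
the same on screen in every image, current or prior, regardless of
sensor size or vendor.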
On another subject, and that is suppression of pixels beyond the skin
edge, I gather that some vendors detect the skin edge and make
everything outside that really black. Whether the algorithms actually
do a good job of that or not, and whether they sometimes cut off
something of interest is not the issue I wanted to raise, but rather
how the air is suppressed. The users I talked to expected the "air"
to remain dark during windowing, and were disappointed when it was not.
There was discussion of having the "air" replaced with a value that was
dark (like zero) and then excluding it from windowing by encoding it
and treating it as a Pixel Padding Value; cool idea perhaps, as long
as that pixel value never occurs inside the breast (e.g. in air in a
biopsy tract or cavity perhaps), in which case it would also be
suppressed.
Better perhaps to use the Bitmap Shutter Module, that is specifically
designed for this sort of thing. It is not part of the current DX and
Mammo IODs, but could be added. It is a feature of Presentation State,
and hence any display workstation that supports Presentation State
should support Bitmap Shutter in general, and hence would be able to
use it if it encountered one in a Mammo instance and knew to do so. The
size of the area shuttered out would increase the size of the file
though, as it is encoded as a bitmap.
Another option, better still perhaps since it is more compact, might be
to encode an ordinary Display Shutter but as Vertices of the Polygonal
Shutter. This Module is already part of the Mammo IOD.
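A sketch of the attributes involved, using the published tags for the
polygonal form of the Display Shutter Module (the helper function and
the mapping representation are mine):

```python
def polygonal_shutter(vertices, presentation_value=0):
    """Build Display Shutter Module attributes for a polygonal shutter
    around the breast outline.  `vertices` is a list of (row, column)
    pairs; DICOM encodes them as a flat list of row\\column values."""
    flat = []
    for row, col in vertices:
        flat.extend([row, col])
    return {
        (0x0018, 0x1600): "POLYGONAL",        # Shutter Shape
        (0x0018, 0x1620): flat,               # Vertices of the Polygonal Shutter
        (0x0018, 0x1622): presentation_value, # Shutter Presentation Value
    }
```

A polygon following the detected skin line is far more compact than a
full-resolution bitmap, at the cost of approximating the curve.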
Both of these shutter based solutions are more complex than the Pixel
Padding Value approach, which is great if and only if the padding value
is never possible in the "real" pixel data. Since this is a confusing
area, there is (yet another) CP on Pixel Padding Value working its way
through the system. See CP 517 at:
In practice at the moment, whether or not the air appears to change as
the user windows the image, is predicated on the separation between the
air peak (or replaced air pixel value) and the breast tissue peak on
the histogram, and ultimately will change if the window is wide or low
enough of course, or is not a linear shape (e.g. sigmoid).
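A sketch of how a display might combine the standard linear window (per
the formula in PS 3.3 C.11.2.1.2) with a Pixel Padding Value (0028,0120)
exclusion, so the suppressed background stays black no matter how the
user windows (the function name is mine):

```python
def window_with_padding(stored_values, center, width, padding_value,
                        out_min=0, out_max=255):
    """Apply the DICOM linear window, but pin any pixel equal to
    Pixel Padding Value to the darkest output so the suppressed
    "air" stays black however the window is adjusted."""
    lo = center - 0.5 - (width - 1) / 2.0
    hi = center - 0.5 + (width - 1) / 2.0
    out = []
    for v in stored_values:
        if v == padding_value:
            out.append(out_min)  # excluded from windowing entirely
        elif v <= lo:
            out.append(out_min)
        elif v > hi:
            out.append(out_max)
        else:
            out.append(((v - (center - 0.5)) / (width - 1) + 0.5)
                       * (out_max - out_min) + out_min)
    return out
```

Note the exclusion happens before the ramp, so even a window that would
otherwise map the padding value to full white leaves it black.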
There are many other interesting topics related to digital mammo and
CAD interoperability, and if you are interested and available there is
a special forum at SCAR this year on the Saturday specifically to
address this, jointly hosted by SCAR and the Elizabeth Wende Breast
Clinic. See Page 16 of the program at:
This is a relatively late addition to the program, but should be a
great session. (OK, I am biased, since I am chairing the vendors'
panel, but I want enough of you folks to be there to make for an
interesting and productive discussion).
It is hoped that out of the workshop will come some topics appropriate
for an IHE or IHE-like digital mammography effort, to address such
issues as CAD workflow, ability to perform reprocessing, what should
be encoded on portable media for patients and referring doctors and
other sites (e.g. for proc + for presn + CAD SR + presentation state
with CAD marks perhaps), etc.
Re: Mammography Images and Workflow Issues, was Mammography CAD SR, for Processing v.s. for Presentation
You said a mouthful with that posting. I'd like to comment on only a
portion of your recommendations. Two points both raise a related issue:
automatic zooming to fit "largest" breast to the overall screen display
area and excluding the "air" outside the breast from the windowing
function. Both these functions require automatic recognition of the
boundary of the breast from the outside "black space" and could in fact
be implemented in a manner where that space was cropped from or
replaced with a single uniform value. The reason the air shows up and
changes brightness when the image is windowed is that space contains
sensor noise. In addition to the advantages you cite for windowing/auto
zooming, removal of that data also greatly improves the compressibility
of the image: JPEG predictors and RLE encoders are really good when
the image contains long sequences of data with the same value (as
would be the case with fixed value substitution).
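That claim is easy to demonstrate with any deflate-style coder; a quick
sketch (names are mine, with the stdlib zlib standing in for the codecs
mentioned above):

```python
import random
import zlib

def compressed_size(pixels):
    """Deflate-compress 16-bit little-endian pixels, return byte count."""
    raw = b"".join(v.to_bytes(2, "little") for v in pixels)
    return len(zlib.compress(raw))

random.seed(0)
# A background region containing detector noise versus the same region
# replaced with one fixed value: constant runs compress far better.
noisy = [random.randint(0, 40) for _ in range(10000)]
flat = [0] * 10000
```

On data like this the fixed-value background compresses to a small
fraction of the size of the noisy one.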
However, from a regulatory point of view, how does one validate that a
breast margin identification algorithm isn't throwing away clinical
data when it does the crop and/or pixel value substitution? The case of
air-filled cavities inside the breast provides a great counter-example
for why you can't simply detect air based on pixel intensity and
substitute a fixed value.
Any suggestions on how vendors looking to provide these capabilities
can get them through a regulatory approval process, so these
advantageous features start showing up in the market?
Re: Mammography Images and Workflow Issues, was Mammography CAD SR, for Processing v.s. for Presentation
Your points as to the value of removing useless information prior to
compression are well taken.
At least two of the acquisition vendors already do this, and detect
skin edges and suppress the background, judging by what I have seen.
Why exactly they do this, how they do it, how they get it approved,
and how often it works well, I cannot say - perhaps you can ask them
about skin edge detection and background suppression at the forum.
However, I will say that in the few cases I have looked at, I thought
the results were not satisfactory (some of the skin actually cut off),
and users commented about this. I am not sufficiently expert to say,
but I think there may be a problem (and an opportunity) here already.
As far as scaling the image to fit, I think that breast area detection
to figure out the scaling is of a lot less concern, since it is only for
convenience to bring up the initial display. Obviously this is harder
with the MLO than the CC with respect to the chest wall and the axilla,
but it is likely not an insurmountable problem, though I wouldn't
want to imply that this was easy.
Finally, the CAD systems also have to figure out what to include and
what not to include. Indeed there was an article about it in this
month's Radiology that you might find interesting:
Needless to say they have regulatory concerns too :)
I mention CAD, since the DICOM Mammo CAD SR allows one to encode the
breast outline, and this information could be used to drive the display
system's behavior when setting up the default scaling on the screen.
See for example the illustration in DICOM PS 3.16 TID 4008 Mammography
CAD Breast Geometry Template, for outlining the breast and/or pectoral
muscle. Indeed, I should have mentioned this when discussing options
yesterday for mechanisms to encode what part of the image to suppress.
Having said that, I do not see this information encoded in any of
the Mammo CAD SR instances that I have looked at so far.
Another excellent topic for the forum perhaps.