I'm trying to implement MPR (multiplanar reconstruction) for CT and MR.
I'm aware that the acquired image slices can be in different
orientations within the same series.
I'm aware of the slice thickness and spacing issues discussed at length
in this group.

I still have the following questions.

1. Are you aware of any implementations that do not convert the series
to voxel space during the series loading / preprocessing phase?

If yes...
I could consider this approach too. The benefit I see is that the
original information is retained with minimal recourse to
interpolation. (By minimal, I mean that at least in orthogonal MPR
I can display the original voxel values; in oblique MPR I'll obviously
have no choice but to interpolate.)

If no...
I'll start thinking of a 'good' interpolation algorithm. Any
suggestions for this?
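
For the oblique case, this is roughly the trilinear scheme I have in
mind. It's only a sketch, assuming the series has already been stacked
into a NumPy array indexed [z, y, x] and the sample point lies inside
the volume; the names volume and sample_trilinear are mine, not taken
from any particular toolkit.

import numpy as np

def sample_trilinear(volume, z, y, x):
    """Sample volume at a fractional (z, y, x) position (assumed in-bounds)."""
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    z1 = min(z0 + 1, volume.shape[0] - 1)
    y1 = min(y0 + 1, volume.shape[1] - 1)
    x1 = min(x0 + 1, volume.shape[2] - 1)
    dz, dy, dx = z - z0, y - y0, x - x0

    # Interpolate along x, then y, then z.
    c00 = volume[z0, y0, x0] * (1 - dx) + volume[z0, y0, x1] * dx
    c01 = volume[z0, y1, x0] * (1 - dx) + volume[z0, y1, x1] * dx
    c10 = volume[z1, y0, x0] * (1 - dx) + volume[z1, y0, x1] * dx
    c11 = volume[z1, y1, x0] * (1 - dx) + volume[z1, y1, x1] * dx
    c0 = c00 * (1 - dy) + c01 * dy
    c1 = c10 * (1 - dy) + c11 * dy
    return c0 * (1 - dz) + c1 * dz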

2. In the interpolation algorithm, is it legal (or radiologically
correct) to, let's say, *average* the neighboring pixels to produce
the interpolated value? Or should we always *choose* the value of the
closest neighbor? In other words, is it legal to say:
bone HU + muscle HU = something in between that may be
medically undefined?
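
To make that concrete, here is the kind of case I mean (the HU values
are only rough textbook figures for cortical bone and muscle):

# A sample point falling halfway between a bone voxel and a muscle voxel.
# Linear interpolation yields a value corresponding to no real tissue,
# while nearest-neighbor keeps one of the original values.
bone_hu, muscle_hu = 1000.0, 50.0
t = 0.5  # fractional position between the two voxels

linear_value = (1 - t) * bone_hu + t * muscle_hu    # 525.0, "in between"
nearest_value = bone_hu if t < 0.5 else muscle_hu   # one of the originals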

3. I would greatly appreciate it if someone could list the major steps
involved, and the issues faced, in implementing MPR. I can work these
out on my own, but I'm trying to avoid learning everything the hard way.
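
As an example of the kind of step I mean, here is a sketch of how I
currently plan to build the voxel-index to patient-coordinate mapping
from the per-slice tags, since I suspect this is where issues like
slice sorting, non-uniform spacing and gantry tilt show up. It assumes
pydicom-style attribute access; the function name and structure are
mine.

import numpy as np

def build_affine(datasets):
    # Direction cosines from ImageOrientationPatient: first three values
    # are the row direction, last three the column direction.
    row_cos = np.array(datasets[0].ImageOrientationPatient[:3], dtype=float)
    col_cos = np.array(datasets[0].ImageOrientationPatient[3:], dtype=float)
    normal = np.cross(row_cos, col_cos)

    # Sort slices by position along the slice normal, not by InstanceNumber.
    datasets = sorted(
        datasets,
        key=lambda ds: float(np.dot(normal,
                                    np.array(ds.ImagePositionPatient, dtype=float))),
    )

    ipp_first = np.array(datasets[0].ImagePositionPatient, dtype=float)
    ipp_last = np.array(datasets[-1].ImagePositionPatient, dtype=float)
    row_spacing, col_spacing = map(float, datasets[0].PixelSpacing)

    # Effective step between slices; missing/duplicate slices and non-uniform
    # spacing should be caught by checking the per-pair differences as well.
    slice_step = (ipp_last - ipp_first) / max(len(datasets) - 1, 1)

    # Maps (column index, row index, slice index, 1) to patient coordinates.
    affine = np.eye(4)
    affine[:3, 0] = row_cos * col_spacing   # next column: step along the row direction
    affine[:3, 1] = col_cos * row_spacing   # next row: step along the column direction
    affine[:3, 2] = slice_step
    affine[:3, 3] = ipp_first
    return affine, datasets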

4. I'm *very* worried about the testing / QA part. How can I
guarantee that my final implementation is not subtly buggy? I'm
primarily a software developer, not a radiologist, and can only
identify major anatomical structures like the lungs, liver, etc. What
if my implementation introduces slight false content or misses
actual content? In other words, what if I miss a real tumor and
introduce a false one?
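
One idea I've had, and would welcome comments on, is to test the
resampling against synthetic volumes whose content is known
analytically rather than against patient data. A field that is linear
along each axis is reproduced exactly by trilinear interpolation, so
any deviation points to a bug in the coordinate math or resampling.
Below, scipy's map_coordinates with order=1 stands in for whatever
resampler I end up writing.

import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic "phantom": the value at (z, y, x) is exactly x + 10*y + 100*z.
z, y, x = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
volume = (x + 10 * y + 100 * z).astype(float)

# Sample at random fractional (z, y, x) positions inside the volume and
# compare against the analytically known values.
rng = np.random.default_rng(0)
points = rng.uniform(0, 31, size=(3, 1000))
expected = points[2] + 10 * points[1] + 100 * points[0]
got = map_coordinates(volume, points, order=1)

assert np.allclose(got, expected), "resampler disagrees with analytic phantom"
print("interpolation self-test passed")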

Regards,
Harry.