I would think that the estimated error value from, say, ntptime
describes an error range; i.e., that the true time for a given
timestamp lies within...

    timestamp +/- estimate/2    (i.e., +/- ((float)estimate)/2.0)

(Where 'estimate' is the error estimate from, say, ntptime.)

...but I can't find this explicitly confirmed anywhere. Is there a
footnote in the FAQ or some other doc I've missed that covers this?
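
To make the question concrete, here's a rough C sketch of the
interpretation I'm assuming, using ntp_gettime(3) (which I believe is
the same call ntptime itself makes). The division by two on the last
line is exactly the part I can't confirm:

    /* Sketch of the interpretation being asked about.
     * ntp_gettime(3) fills in the same esterror/maxerror fields
     * (in microseconds) that ntptime prints. */
    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
        struct ntptimeval ntv;

        /* Returns the clock state on success (TIME_ERROR if the
         * clock is unsynchronized), -1 on failure. */
        if (ntp_gettime(&ntv) == -1) {
            perror("ntp_gettime");
            return 1;
        }

        printf("time    : %ld.%06ld\n",
               (long)ntv.time.tv_sec, (long)ntv.time.tv_usec);
        printf("esterror: %ld us\n", ntv.esterror);

        /* My assumption: the true time lies within
         * time +/- esterror/2 -- this is what I want confirmed. */
        printf("assumed range: +/- %.1f us\n",
               (float)ntv.esterror / 2.0);

        return 0;
    }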