Industrial scanners can accommodate objects with a wide range of sizes, shapes, and materials. The objectives of scanning are equally varied, from making precise measurements to observing gross features. Successful scanning depends on all of these factors.
The spatial resolution in a CT image is determined principally by the size and number of detector elements, the size of the X-ray focal spot, and the source-object-detector distances. In the UTCT ACTIS scanner, the source-to-detector distance and the sizes of the detector elements are fixed. In this situation, maximum in-plane resolution is achieved by minimizing the source-to-object distance to give maximum magnification. By using offset geometries, in which the axis of rotation for the specimen is not in the center of the X-ray fan beam, higher magnification is achieved, though at a slight cost in image quality because fewer X-rays penetrate each volume element in the sample than is the case in a centered geometry.
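The geometry described above can be made concrete with a small sketch. In a fan-beam system the geometric magnification is M = (source-to-detector distance) / (source-to-object distance), and the object-plane footprint of one detector element is roughly the detector pitch divided by M. The distances and pitch below are illustrative assumptions, not actual UTCT ACTIS specifications:

```python
# Geometric magnification in a fan-beam CT system (illustrative numbers only).

def magnification(source_to_detector_mm, source_to_object_mm):
    """M = SDD / SOD: moving the object closer to the source increases M."""
    return source_to_detector_mm / source_to_object_mm

def effective_pixel_size_mm(detector_pitch_mm, m):
    """Approximate object-plane size sampled by one detector element."""
    return detector_pitch_mm / m

sdd = 1000.0   # fixed source-to-detector distance, mm (assumed)
pitch = 0.4    # detector element pitch, mm (assumed)

for sod in (800.0, 400.0, 200.0):
    m = magnification(sdd, sod)
    print(f"SOD {sod:5.0f} mm -> M = {m:.1f}, "
          f"effective pixel = {effective_pixel_size_mm(pitch, m) * 1000:.0f} um")
```

Halving the source-to-object distance doubles the magnification, which is why in-plane resolution improves as the specimen is moved toward the source.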
As a rule of thumb, a CT image should have about as many pixels in each dimension as there are detector channels providing data for a view. For example, a 1024-channel linear detector array justifies a 1024×1024 pixel reconstructed image; if an offset scanning mode is used, up to a 2048×2048 pixel image may be justified.
Slice thickness, which governs the resolution in the third dimension, is determined by varying the thickness of linear apertures (slits) in front of the detectors. (Systems that use image intensifiers as detectors achieve the same effect by selecting a larger or smaller number of video lines surrounding the midplane of the fan beam.)
Because both X-ray generation and the scattering events that produce attenuation within the object are stochastic processes, the X-ray signal is inherently noisy; the detector and its amplification electronics contribute additional noise. Thus, variations in the X-ray signals arising from these effects can obscure the variations arising from the sample itself. This noise in the intensity measurements limits the scanner’s ability to differentiate between nearby volume elements with closely similar attenuation, thereby degrading the resolution of the image. Increasing the X-ray flux and/or the counting time for each intensity measurement will bolster the signal-to-noise ratio and improve the resolution.
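Because the underlying processes are stochastic, photon counting follows Poisson statistics: a measurement of N counts has standard deviation √N, so its signal-to-noise ratio is N/√N = √N. This is why increasing flux or counting time helps, but with diminishing returns, as this sketch shows:

```python
import math

def counting_snr(counts):
    """Poisson statistics: sigma = sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return math.sqrt(counts)

base = 10_000  # counts in one intensity measurement (illustrative)
print(counting_snr(base))       # SNR for the baseline measurement
print(counting_snr(4 * base))   # quadrupling flux x time only doubles the SNR
```

Quadrupling the dose (flux times counting time) only doubles the signal-to-noise ratio, so noise reduction by brute force quickly becomes expensive.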
Because decreasing slice thickness correspondingly decreases the X-ray flux on each detector element, attempts to gain improved resolution by using thinner slices are eventually thwarted by the need to maintain sufficient X-ray flux to generate satisfactory counting statistics. Increasing the intensity of the incident beam can help, but insofar as this also tends to increase the focal spot size, additional blurring can result. Increasing the duration of each intensity measurement can compensate without this compromise, but can prove prohibitively costly or simply impractical if the required times are excessively long.
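The tradeoff above can be quantified: if detected counts scale linearly with slice thickness times counting time (a simplifying assumption for this sketch), then holding counting statistics constant requires the measurement time to grow inversely with slice thickness:

```python
def required_time_s(base_time_s, base_thickness_mm, new_thickness_mm):
    """Counting time needed to keep detected counts (hence SNR) constant,
    assuming counts scale linearly with slice thickness x time."""
    return base_time_s * base_thickness_mm / new_thickness_mm

# A quarter-thickness slice needs four times the counting time per measurement.
print(required_time_s(1.0, 1.0, 0.25))
```

For a scan comprising hundreds of views, a factor of four per measurement can turn minutes of acquisition into hours, which is the practical limit the text describes.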
Conventional medical CT instruments provide resolution on the order of 1-2 mm for meter-scale to decimeter-scale objects. “High-resolution” instruments, including the high-energy subsystem of the UTCT ACTIS instrument and common industrial CT systems, provide resolution on the order of 100-200 micrometers for decimeter-scale to centimeter-scale objects. “Ultra-high-resolution” instruments, like the microfocal subsystem of the UTCT ACTIS instrument, provide resolution on the order of a few tens of microns for centimeter-scale to millimeter-scale objects. Microtomography is performed using dedicated beamlines at synchrotron facilities; with such techniques, micron-scale resolution is possible within objects at millimeter to submillimeter scale (cf. Flannery et al., 1987; Kinney et al., 1993).
The ability to differentiate materials depends on their respective linear attenuation coefficients. In practical terms, successful imaging will depend on innate material properties of density and atomic composition, and on the machine parameters of the X-ray spectrum utilized and the signal-to-noise ratio. Materials with very divergent densities and/or atomic constituents are easy to differentiate. In favorable circumstances, modern CT instruments are capable of discriminating between values of µ that differ by as little as 0.1%, but only if the regions being tested are relatively large, spanning many voxels, and if there is sufficient X-ray flux to keep image noise low. As a result, spatial and density/attenuation resolution are linked: if materials are very different in their attenuation properties, very fine details or very small particles can be imaged, but if they are similar only larger-scale details and/or particles can be reliably distinguished.
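The link between region size and attenuation discrimination follows from averaging: the standard error of the mean CT value over n voxels falls as 1/√n, so resolving a small fractional difference in µ requires a region large enough that the averaged noise drops below that contrast. The noise level, contrast, and separation criterion below are assumed for illustration:

```python
import math

def voxels_needed(per_voxel_noise_frac, contrast_frac, k_sigma=3.0):
    """Number of voxels over which to average so that the standard error of
    the region mean is k_sigma times smaller than the attenuation contrast:
    per_voxel_noise / sqrt(n) <= contrast / k_sigma."""
    return (k_sigma * per_voxel_noise_frac / contrast_frac) ** 2

# e.g. 1% per-voxel noise, 0.1% difference in mu, 3-sigma separation (assumed)
n = voxels_needed(0.01, 0.001)
print(f"need roughly {n:.0f} voxels in each region")
```

With these assumed numbers, on the order of a thousand voxels per region are needed, which matches the text's point that 0.1% discrimination is only achievable over relatively large regions.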
Apart from the obvious constraint imposed by the size of the instrument’s sample holder (50 cm diameter on the UTCT ACTIS high-energy subsystem, and ~10 cm on the ultra-high-resolution subsystem), the maximum size of objects that can be examined by CT is determined by the need to acquire a sufficiently strong signal from the beam after it has been attenuated by passage through the object. If the object is too thick, it will absorb too much energy, resulting in low X-ray flux and poor image quality. A 420 kV X-ray tube generates beams capable of imaging geologic materials (objects with average densities close to those of common silicate minerals) with maximum dimensions up to perhaps 30-40 cm. Larger or denser objects can be imaged with special CT instruments that use very high-energy sources (e.g., linear accelerators or radioactive 60Co) capable of far greater penetration.
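The penetration limit follows from the Beer-Lambert law: for a homogeneous object, the transmitted fraction of the beam is I/I₀ = exp(-µx). The effective attenuation coefficient below is an assumed, order-of-magnitude value for silicate rock at high beam energy, chosen only to show how quickly transmission collapses with thickness:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert law for a homogeneous object: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

mu = 0.2  # assumed effective linear attenuation coefficient, 1/cm (illustrative)
for x_cm in (10, 30, 50):
    print(f"{x_cm:2d} cm path: {transmitted_fraction(mu, x_cm):.2e} of beam transmitted")
```

Because transmission falls exponentially, each additional decimeter of silicate costs roughly another order of magnitude of signal, which is why path lengths much beyond a few tens of centimeters demand higher-energy sources.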