Hyperspectral imaging
Hyperspectral imaging, like other spectral imaging, collects and processes information from across the electromagnetic spectrum. The goal of hyperspectral imaging is to obtain the spectrum for each pixel in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes.
There are two general branches of spectral imagers: push broom scanners and the related whisk broom scanners, which read images over time, and snapshot hyperspectral imaging, which uses a staring array to generate an image in an instant.
Whereas the human eye sees color of visible light in mostly three bands (red, green, and blue), spectral imaging divides the spectrum into many more bands. This technique of dividing images into bands can be extended beyond the visible. In hyperspectral imaging, the recorded spectra have fine wavelength resolution and cover a wide range of wavelengths. Hyperspectral imaging measures contiguous spectral bands, as opposed to multispectral imaging, which measures spaced spectral bands.
Engineers build hyperspectral sensors and processing systems for applications in astronomy, agriculture, biomedical imaging, geosciences, physics, and surveillance. Hyperspectral sensors look at objects using a vast portion of the electromagnetic spectrum. Certain objects leave unique 'fingerprints' in the electromagnetic spectrum. Known as spectral signatures, these 'fingerprints' enable identification of the materials that make up a scanned object. For example, a spectral signature for oil helps geologists find new oil fields.
Hyperspectral imaging sensors have advanced rapidly, aiding in the diagnosis of in vivo brain tumors. Linescan cameras effectively distinguish between pathological and healthy tissue, whereas snapshot cameras offer a potential alternative that reduces acquisition time.
Aim: Our research compares linescan and snapshot hyperspectral cameras for in vivo brain tissue and chromophore identification.
Approach: We compared a linescan pushbroom camera and a snapshot camera using images from 10 patients with various pathologies. Objective comparisons were made using unnormalized and normalized data for healthy and pathological tissues. We utilized the interquartile range (IQR) for the spectral angle mapping (SAM), the goodness-of-fit coefficient (GFC), and the root mean square error (RMSE) within the 659.95 to 951.42 nm range. In addition, we assessed the ability of both cameras to capture tissue chromophores by analyzing absorbance from reflectance information.
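The three similarity metrics named above (SAM, GFC, RMSE) and the absorbance-from-reflectance conversion have standard closed-form definitions. The following is a minimal Python sketch, assuming both cameras' reflectance spectra are resampled onto a common wavelength grid; the grid, placeholder spectra, and function names are illustrative, not the study's code.

```python
# Minimal sketch of the similarity metrics named above (SAM, GFC, RMSE)
# and the absorbance conversion, assuming two reflectance spectra sampled
# on the same wavelength grid. Names and example data are illustrative.
import numpy as np

def spectral_angle(s1: np.ndarray, s2: np.ndarray) -> float:
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def goodness_of_fit(s1: np.ndarray, s2: np.ndarray) -> float:
    """GFC: normalized inner product magnitude; 1.0 means identical shape."""
    return float(abs(np.dot(s1, s2)) / (np.linalg.norm(s1) * np.linalg.norm(s2)))

def rmse(s1: np.ndarray, s2: np.ndarray) -> float:
    """Root mean square error between two spectra."""
    return float(np.sqrt(np.mean((s1 - s2) ** 2)))

def absorbance_from_reflectance(reflectance: np.ndarray) -> np.ndarray:
    """Apparent absorbance A = -log10(R), used to look for chromophore peaks."""
    return -np.log10(np.clip(reflectance, 1e-6, None))

# Example: restrict both camera spectra to a shared wavelength window
# (e.g., the 659.95-951.42 nm range mentioned above) before comparing.
wavelengths = np.linspace(400, 1000, 601)            # hypothetical grid
linescan = np.random.rand(601) * 0.5 + 0.25          # placeholder spectra
snapshot = linescan + np.random.normal(0, 0.02, 601)
mask = (wavelengths >= 659.95) & (wavelengths <= 951.42)
print(spectral_angle(linescan[mask], snapshot[mask]),
      goodness_of_fit(linescan[mask], snapshot[mask]),
      rmse(linescan[mask], snapshot[mask]))
```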
Results: The SAM metric indicates reduced dispersion and high similarity between cameras for pathological samples, with a 9.68% IQR for normalized data compared with 2.38% for unnormalized data. This pattern is consistent across GFC and RMSE metrics, regardless of tissue type. Moreover, both cameras could identify absorption peaks of certain chromophores. For instance, using the absorbance measurements of the linescan camera, we obtained SAM values below 0.235 for four peaks, regardless of the tissue and type of data under inspection. These peaks are one for cytochrome b in its oxidized form at λ = 422 nm, two for HbO2 at λ = 542 nm and λ = 576 nm, and one for water at λ = 976 nm.
Conclusion: The spectral signatures of the cameras show more similarity with unnormalized data, likely due to snapshot sensor noise, resulting in noisier signatures post-normalization. Comparisons in this study suggest that snapshot cameras might be viable alternatives to linescan cameras for real-time brain tissue identification 1).
Fabelo et al. exploit the characteristics of hyperspectral imaging (HSI) to develop a demonstrator capable of delineating tumor tissue from normal brain tissue during neurosurgical operations. Improved delineation of tumor boundaries is expected to improve the results of surgery. The developed demonstrator is composed of two hyperspectral cameras covering a spectral range of 400-1700 nm. Furthermore, a hardware accelerator connected to a control unit is used to speed up the hyperspectral brain cancer detection algorithm to achieve processing during the time of surgery. A labeled dataset comprised of more than 300,000 spectral signatures is used as the training dataset for the supervised stage of the classification algorithm. In this preliminary study, thematic maps obtained from a validation database of seven hyperspectral images of in vivo brain tissue captured and processed during neurosurgical operations demonstrate that the system is able to discriminate between normal and tumor tissue in the brain. The results can be provided during the surgical procedure (~1 min), making it a practical system for neurosurgeons to use in the near future to improve excision and potentially improve patient outcomes 2).
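The demonstrator's actual processing chain is not reproduced here. Purely as an illustration of the supervised, pixel-wise stage described above, the following sketch trains a generic classifier on labeled spectral signatures and applies it to a hyperspectral cube to produce a thematic map; the classifier choice, array shapes, and names are assumptions.

```python
# Illustrative sketch only: a generic supervised, pixel-wise classifier
# trained on labeled spectral signatures and applied to a hyperspectral
# cube to produce a thematic (class) map. Names and data are hypothetical.
import numpy as np
from sklearn.svm import SVC

def train_pixel_classifier(signatures: np.ndarray, labels: np.ndarray) -> SVC:
    """signatures: (n_samples, n_bands) labeled spectra; labels: class per spectrum."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(signatures, labels)
    return clf

def thematic_map(clf: SVC, cube: np.ndarray) -> np.ndarray:
    """cube: (rows, cols, n_bands) hyperspectral image -> (rows, cols) class map."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)
    return clf.predict(flat).reshape(rows, cols)

# Example with synthetic data (2 classes, 128 spectral bands)
rng = np.random.default_rng(0)
train_x = rng.random((200, 128))
train_y = rng.integers(0, 2, 200)
clf = train_pixel_classifier(train_x, train_y)
cube = rng.random((64, 64, 128))
label_map = thematic_map(clf, cube)
```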
Fluorescence-guided surgery (FGS) using aminolevulinic acid (ALA)-induced protoporphyrin IX (PpIX) provides intraoperative visual contrast between normal and malignant tissue during resection of high-grade gliomas. However, maps of the PpIX biodistribution within the surgical field based either on visual perception or on the raw fluorescence emissions can be masked by background signals or distorted by variations in tissue optical properties.
A study evaluates the impact of algorithmic processing of hyperspectral imaging acquisitions on the sensitivity and contrast of PpIX maps. Measurements in tissue-simulating phantoms showed that (I) spectral fitting enhanced PpIX sensitivity compared with visible or integrated fluorescence, (II) confidence-filtering automatically determined the lower limit of detection based on the strength of the PpIX spectral signature in the collected emission spectrum (0.014-0.041 μg/ml in phantoms), and (III) optical-property corrected PpIX estimates were more highly correlated with independent probe measurements (r = 0.98) than estimates from spectral fitting alone (r = 0.91) or integrated fluorescence (r = 0.82). Application to in vivo case examples from clinical neurosurgeries revealed changes to the localization and contrast of PpIX maps, making concentrations accessible that were not visually apparent. Adoption of these methods has the potential to maintain sensitive and accurate visualization of PpIX contrast over the course of surgery 3).
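The spectral-fitting step can be illustrated with a simple non-negative least-squares decomposition of the measured emission spectrum into known basis spectra (a PpIX component plus a background/autofluorescence component). The basis spectra below are synthetic Gaussians and the function names are hypothetical; this is a sketch of the general idea, not the study's algorithm.

```python
# Hedged sketch of spectral fitting: model a measured emission spectrum as a
# non-negative combination of known basis spectra and take the fitted PpIX
# weight as its relative contribution. Basis spectra and names are assumptions.
import numpy as np
from scipy.optimize import nnls

def fit_ppix_contribution(measured: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """
    measured: (n_wavelengths,) emission spectrum at one pixel
    basis:    (n_wavelengths, n_components) columns = PpIX, background, ...
    Returns non-negative weights, one per basis component.
    """
    weights, _residual = nnls(basis, measured)
    return weights

# Example with synthetic basis spectra (column 0 = PpIX-like peak near 635 nm,
# column 1 = broad background component)
wavelengths = np.linspace(600, 750, 151)
ppix_basis = np.exp(-0.5 * ((wavelengths - 635) / 10) ** 2)
background = np.exp(-0.5 * ((wavelengths - 680) / 60) ** 2)
basis = np.column_stack([ppix_basis, background])
measured = 0.3 * ppix_basis + 0.8 * background + np.random.normal(0, 0.01, wavelengths.size)
weights = fit_ppix_contribution(measured, basis)
print("Fitted PpIX weight:", weights[0])
```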