Photogrammetry
Photogrammetry is a technique for creating 3D models or maps of objects and environments by analyzing photographs of them. The process involves using specialized software to analyze the visual data captured in the photographs and to derive accurate measurements and spatial information from it.
Photogrammetry is used in a variety of fields, including archaeology, architecture, engineering, geology, and surveying. It is commonly applied to the creation of digital maps and terrain models, as well as to the documentation and preservation of cultural heritage sites and artifacts.
The process typically involves taking multiple photographs of an object or environment from different angles and distances. Software then analyzes the overlapping images and reconstructs a 3D model or map, which can be further refined and edited to produce a highly accurate representation of the original object or environment.
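To illustrate the underlying principle, here is a minimal Python sketch that recovers a sparse 3D point cloud from just two overlapping photographs with OpenCV; production packages extend this to many views with bundle adjustment and dense matching. The image file names and the intrinsic matrix K are placeholder assumptions, not values from any study cited here.

```python
import cv2
import numpy as np

# Assumed camera intrinsics (focal length and principal point, in pixels).
K = np.array([[2800.0,    0.0, 2000.0],
              [   0.0, 2800.0, 1500.0],
              [   0.0,    0.0,    1.0]])

# Two overlapping photographs of the same object (placeholder file names).
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Estimate the relative camera pose from the essential matrix, then triangulate.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t])                         # second camera, relative pose
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T                # sparse point cloud (up to scale)
print(points3d.shape)
```

Note that two-view reconstruction is only determined up to an overall scale; real photogrammetry pipelines resolve scale with ground control points, calibrated rigs, or known object dimensions.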
One advantage of photogrammetry is that it can be done with relatively inexpensive equipment, such as consumer-grade cameras or drones. This makes it accessible to a wider range of individuals and organizations than traditional surveying and mapping techniques.
Overall, photogrammetry is a powerful tool for creating accurate 3D models and maps, and it is increasingly being used in a variety of fields to document, analyze, and preserve the physical world.
Obtaining 3D craniofacial morphometric data is essential in a variety of medical and educational disciplines. In this study, we explore smartphone-based photogrammetry with photos and video recordings as an effective tool to create accurate and accessible metrics from 3D head models. The research involves the acquisition of craniofacial 3D models of both volunteers and head mannequins using a Samsung Galaxy S22 smartphone. Agisoft Metashape v 1.7 and PhotoMeDAS v 1.7 were used for the photogrammetric processing, and the Academia 50 white-light scanner provided the reference data (ground truth). A comparison of the resulting 3D meshes yielded 0.22 ± 1.29 mm for photogrammetry with camera photos, 0.47 ± 1.43 mm for videogrammetry with video frames, and 0.39 ± 1.02 mm for PhotoMeDAS. Similarly, anatomical points were measured and linear measurements extracted, yielding 0.75 mm for photogrammetry, 1 mm for videogrammetry, and 1.25 mm for PhotoMeDAS, despite large differences in data acquisition and processing time among the four approaches. This study suggests the possibility of integrating photogrammetry, either with photos or with video frames, and the use of PhotoMeDAS to obtain overall craniofacial 3D models, with significant applications in the medical fields of neurosurgery and maxillofacial surgery 1).
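As a sketch of how such a mesh-to-mesh comparison can be computed (not the study's actual workflow), the following Python snippet uses the trimesh library: points sampled on the photogrammetric mesh are measured as signed distances to the reference scan, and the mean ± standard deviation is reported in the same form as the figures above. The file names are placeholders, and the two meshes are assumed to be already registered in a common coordinate frame (e.g., by ICP).

```python
import trimesh

# Placeholder file names; both meshes are assumed to be in millimetres and
# already aligned in a common coordinate frame (e.g., via ICP registration).
reference = trimesh.load("academia50_scan.ply")    # ground-truth scanner mesh
test = trimesh.load("photogrammetry_model.ply")    # mesh under evaluation

# Sample points uniformly over the surface of the mesh under evaluation.
points, _ = trimesh.sample.sample_surface(test, 50000)

# Signed distance from each sampled point to the reference surface
# (positive inside the reference mesh, negative outside).
d = trimesh.proximity.signed_distance(reference, points)

print(f"mean ± std: {d.mean():.2f} ± {d.std():.2f} mm")
```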
Objective: To perform a proof-of-concept study of a novel photogrammetry 3D reconstruction technique, converting high-definition (monoscopic) microsurgical images into a navigable, interactive, immersive anatomy simulation.
Methods: Images were acquired from cadaveric dissections and from an open-access comprehensive online microsurgical anatomic image database. A trained neural network capable of depth estimation from a single image was used to create depth maps (pixelated images containing distance information that could be used for spatial reprojection and 3D rendering). The virtual reality (VR) experience was assessed using a VR headset, and augmented reality was assessed using a quick response (QR) code-based application and a tablet camera.
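The reprojection step mentioned above can be illustrated with a short, self-contained sketch: given a depth map and pinhole camera intrinsics, every pixel is backprojected to a 3D point. The intrinsics and the synthetic depth map below are assumptions for illustration; in the study, depth maps were predicted by a trained neural network.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Backproject an HxW depth map into an (H*W, 3) point cloud
    using the pinhole camera model: x = (u - cx) * z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Synthetic stand-in for a network-predicted depth map (placeholder values).
depth = np.full((480, 640), 0.5)
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```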
Results: Significant correlation was found between processed image depth estimations and neuronavigation-defined coordinates at different levels of magnification. Immersive anatomic models were created from dissection images captured in the authors' laboratory and from images retrieved from the Rhoton Collection. Interactive visualization and magnification allowed multiple perspectives for an enhanced experience in VR. The QR code offered a convenient method for importing anatomic models into the real world for rehearsal and for comparing other anatomic preparations side by side.
Conclusion: This proof-of-concept study validated the use of machine learning to render 3D reconstructions from 2-dimensional microsurgical images through depth estimation. This spatial information can be used to develop convenient, realistic, and immersive anatomy image models 2).
de Sá Braga Oliveira et al. described a technical guideline for acquiring realistic 3D anatomic models with photogrammetry and improving the teaching and learning process in neuroanatomy. Seven specimens of different sizes, cadaveric tissues, and textures were used to demonstrate step-by-step instructions for specimen preparation, photogrammetry setup, post-processing, and display of the 3D model. The photogrammetry setup consists of three cameras arranged vertically, facing the specimen to be scanned. Producing high-quality 3D models and optimal images requires complex and challenging adjustments: positioning of the specimen within the scanner, as well as adjustments of the turntable, custom specimen holders, cameras, lighting, and computer hardware and software. MeshLab® software was used to edit the 3D model before exporting it to the MedReality® (Thyng, Chicago, IL) and SketchFab® (Epic, Cary, NC) platforms. Both allow manipulation of the models at various angles and magnifications and are easily accessed free of charge on mobile, immersive, and personal computer devices. Photogrammetry scans offer a 360° view of the 3D models, ubiquitously accessible on any device independent of the operating system, and should be considered a tool to optimize and democratize the teaching of neuroanatomy 3).
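For readers who want to script the MeshLab editing step, the sketch below shows a comparable cleanup and decimation pass using pymeshlab (MeshLab's Python bindings) before export to a viewing platform. The file names are placeholders, and the filter names assume recent (2022+) pymeshlab releases; they may differ in other versions.

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("specimen_raw.obj")   # placeholder input from the scanner

# Basic cleanup before publishing the model.
ms.meshing_remove_duplicate_vertices()
ms.meshing_remove_unreferenced_vertices()

# Decimate to a web-friendly face count while preserving overall shape.
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=200000,
                                            preservenormal=True)

ms.save_current_mesh("specimen_web.obj")  # ready for upload to a 3D viewer
```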
Fiber dissection was performed on a specimen, and a 3D model was created with a new photogrammetry method. After photogrammetry, the 3D model was edited using 3D editing programs and viewed in AR. The 3D model was also viewed in VR using a head-mounted display device.
The 3D model was viewed at high resolution on internet-based sites and AR/VR platforms, where the fibers could be panned, rotated, moved freely on different planes, and viewed from different angles.
This study demonstrated that fiber dissections can be transformed and viewed digitally on AR/VR platforms. These models can be considered a powerful teaching tool for improving the surgical spatial recognition of interrelated neuroanatomic structures. Neurosurgeons worldwide can easily access these models on digital platforms 4).