3D Imaging
Wiki Team Leads: Brinker Ferguson, Zarah Walsh-Korb, Yi Yang
Wiki Editors: Amy McCrory, Roxanne Radpour, Charles Walbridge
Wiki Contributors: Moshe Caine, Kurt Heumiller, Dale Konkright
Introduction
What is 3D Imaging?
3D imaging is the process of capturing the shape, texture, and structure of real-world objects or environments in three dimensions. Unlike traditional 2D photography, which represents objects as flat images, 3D imaging creates models that simulate the actual geometry of an object, allowing for accurate measurement, visualization, and manipulation in a digital space. These models can be used for a wide range of applications, from virtual reality and video games to architecture, engineering, and cultural heritage conservation. In conservation, 3D imaging is particularly valuable for documenting, analyzing, and preserving historic artifacts, structures, and works of art, providing highly detailed records that can inform restoration and research efforts.
Types of 3D imaging
Note: 3D imaging is a rapidly evolving field. Advances in technology, such as higher-resolution sensors, more sophisticated software, and improved data processing methods, continue to enhance the precision, accessibility, and range of applications for 3D imaging. Conservation professionals must stay up to date with these developments to take full advantage of the latest tools and techniques.
For conservation purposes, 3D imaging can be divided into several techniques. Each method offers unique strengths and capabilities, though there is significant overlap in their use and application. The main techniques used in conservation include:
Technique | Overview | Advantages | Limitations |
Photogrammetry | Converts a series of overlapping photos into a 3D model through software analysis. | Accessible and versatile, requiring only a camera and software, suitable for various object sizes. | Dependent on image quality; reflective/transparent surfaces are challenging. |
Laser Scanning | Projects a laser beam onto the object, measuring reflected light to create a 3D model. | Highly accurate and fast, ideal for large-scale objects and complex surfaces. | Struggles with reflective/transparent surfaces; equipment is expensive. |
Optical Profiling | Uses light (white light or interferometry) to measure surface topography at a fine scale. | Extremely precise for capturing small-scale features, such as textures on artifacts. | Limited to small objects and surface details; requires controlled lighting. |
Structured Light Scanning | Projects a known pattern onto an object and measures its deformation to calculate 3D shape. | High accuracy and resolution, faster than laser scanning for detailed surfaces. | Difficulty with shiny or transparent surfaces; may require controlled lighting. |
Each of these 3D imaging techniques offers distinct benefits, and conservation professionals often combine them depending on the specific needs of the object or site. For example, laser scanning might be used for large architectural features, while photogrammetry or structured light scanning might be employed for more detailed surfaces or delicate artifacts.
Techniques
Photogrammetry
Photogrammetry is the "science of measuring in photos" and is most commonly used in remote sensing, aerial photography, archaeology, architecture, and other fields where measurements must be determined from photographs.
It is based on the principle that while a single photograph can only yield two-dimensional coordinates (height and width), two overlapping images of the same scene, taken from slightly different positions, allow the third dimension (depth) to be calculated. This is much the same way the human visual system generates depth perception from the images projected by our two eyes.
We are able to see objects in three dimensions, and to judge volume, distance, and relative size, because of our stereoscopic vision: the brain receives two slightly different images resulting from the different positions of the left and right eyes, each with its own central perspective.
This principle of stereoscopic viewing is the underlying principle of photogrammetry. If two photos are taken of the same object from slightly different positions, one may calculate the three-dimensional coordinates of any point represented in both photos. The two camera positions view the object along so-called "lines of sight". These lines of sight are mathematically intersected to produce the three-dimensional coordinates of the points of interest. This same principle of triangulation is also the way our two eyes work together to gauge distance.
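As a minimal numerical sketch of this intersection (a hypothetical two-camera setup with assumed positions and ray directions, not taken from any particular software package), the 3D coordinates of a point can be recovered as the least-squares intersection of the two lines of sight using Python and NumPy:

import numpy as np

def triangulate(c1, d1, c2, d2):
    # Least-squares intersection of two lines of sight.
    # c1, c2: camera centers; d1, d2: unit ray directions toward the point.
    # Each ray contributes the constraint (I - d d^T)(X - c) = 0.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in ((c1, d1), (c2, d2)):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Two cameras 1 m apart, both sighting a point about 5 m away
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = np.array([0.0, 0.0, 1.0])
d2 = np.array([-0.2, 0.0, 1.0])
d2 /= np.linalg.norm(d2)
print(triangulate(c1, d1, c2, d2))  # approximately [0, 0, 5]

Photogrammetry software solves this same problem simultaneously for thousands of points, and for the camera positions themselves.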
Photogrammetry & Structure from Motion (SfM)
Nowadays, the two terms are somewhat interchangeable and are often used to convey the same thing: the construction of a 3D scene or object through the use of multiple photographic images.
Nevertheless, the terms are not identical and stem from slightly different approaches.
Photogrammetry literally means "measurement via light". The essential aspect of this technique is indeed measurement by triangulation, in which the three coordinates of a given point are calculated from stereo pairs. Multiple images thus provide many hundreds or thousands of points which make up the "point cloud", from which a digital "mesh" is formed in the shape of the original object. In parallel, the texture data from the photographs is processed by the software to form the UV or texture map.
In contrast to this, SfM is more forgiving. As the name implies, SfM emphasizes the process of moving around the object or scene, whether by a mobile photographer or by UAV. Unlike classical photogrammetry, SfM does not require prior knowledge of the camera positions. The SfM software automatically identifies matching features in multiple images; these may be distinctive lines, points, textures, or other clearly defined features. By tracking these features across the images taken from different positions, the software calculates the positions and orientations of the cameras as well as the XYZ coordinates of the features.
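The feature-identification and matching step at the heart of SfM can be illustrated in a few lines of Python with the open-source OpenCV library (the file names are placeholders; real pipelines match features across the whole image set, not just one pair):

import cv2

# Detect and match distinctive features between two overlapping photographs;
# this is the first step an SfM pipeline performs before recovering camera poses.
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)          # ORB: a fast, freely usable detector
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences")

From such correspondences, the software estimates the relative camera geometry and triangulates the matched points into a sparse point cloud.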
History of Photogrammetry
Photogrammetry is not new. While we tend to align its birth with that of photography in the first half of the 19th century, we may in fact trace its mathematics back to none other than Leonardo da Vinci, who in 1480 wrote the following:
“Perspective is nothing else than the seeing of an object behind a sheet of glass, smooth and quite transparent, on the surface of which all the things may be marked that are behind this glass. All things transmit their images to the eye by pyramidal lines, and these pyramids are cut by the said glass. The nearer to the eye these are intersected, the smaller the image of their cause will appear” [Doyle, 1964]
So, in fact, it could be claimed that photogrammetric theory is based on the principles of perspective and projective geometry. Albrecht Dürer's "Perspective Machine" (1525), an instrument that could be used to create a true perspective drawing, was indeed based upon those laws of perspective. Nevertheless, it was the birth of photography which heralded the first practical uses of photogrammetry. The title "Father of Photogrammetry" must go to Aimé Laussedat (April 19, 1819 - March 18, 1907), who in 1849 became the first person to use terrestrial photographs for topographic map compilation. In 1858, he even (unsuccessfully) attempted aerial photography with kites. In 1862, his use of photography for mapping was officially accepted by the Science Academy in Madrid.
Analytical Photogrammetry
The rapid development of the computer after the Second World War saw the beginnings of analytical photogrammetry and the algebraic formulas that advanced digital aerial triangulation. The advent of the digital photograph and of advanced software for image data processing then saw this evolution bloom into a fast and practical field for interactive photographic imaging. Today, many photogrammetry-based software solutions exist on the market. Some are based on multiple images and use triangulation principles much like 3D scanners, while others use the stereo-image principle.
It is important to remember, however, that stereo imaging is not 3D photographic modeling. The camera, like the human eye, cannot calculate what it cannot see. It may guess, extrapolate, average, calculate, and estimate based on the information available, but true depth calculation can be performed only if the area in question was captured by the imaging system. In two overlapping images there will be large areas of information absent due to occlusion. Nevertheless, even if full and analytically accurate information is not present, interactive stereo images can greatly enhance our perception of the object. This is most apparent in reliefs, three-dimensional surfaces, and rough textures.
Equipment
One of the greatest advantages of photogrammetry and SfM lies in their very modest hardware requirements.
Camera
For basic location-based work, all that is needed is a consumer camera. Obviously, the better the camera, the better the potential results; today, however, even good smartphone cameras can yield remarkably good results.
Sometimes a tripod may be useful, while in other cases the freedom of movement may be an advantage.
UAV
For aerial photogrammetry, a UAV (drone) is often essential. Many such models exist on the market today and their photographic quality is ever improving.
Lighting
For studio-based photogrammetry or SfM, good controllable lighting is essential. Today LED light banks are becoming very popular due to their powerful output, color control and lack of heat.
Revolving base
In many cases, a revolving base plate can prove a great help, as it allows for convenient and accurate rotation of the object relative to the camera. Such plates can range from a simple 'lazy Susan' which can be purchased at any home store, to highly sophisticated computer-controlled systems.
Computer
Here the equation is simple: the faster the processor, the more RAM, the better the graphics processor, and the more storage, the better.
Miscellaneous
Other important items may include:
A color checker for color balance.
Markers to be placed around the object or area.
Reflectors and/or diffusers for controlling the light.
Polarizing filters, both for the camera and for the light source (cross-polarization).
Ample storage
Photogrammetry Software
The imaging stage in photogrammetry, while essential, is by no means the only important stage. Transforming the multiple images into a digital model demands the use of extremely sophisticated software. Here again there exists a range of photogrammetry software solutions, ranging from free and open-source packages to expensive commercial ones.
Among the free options it is worth mentioning the following:
- AliceVision – Meshroom
- 3DFlow - Zephyr Free (limited to 50 images per model)
- MicMac (command-line based software developed by the French National Geographic Institute and the French National School of Geographic Sciences.)
- COLMAP (SfM and Multi-View Stereo (MVS) pipeline with a graphical and command-line interface)
- VisualSFM (open source)
- OpenMVG (which stands for "Open Multiple View Geometry")
Nearly all paid photogrammetry software offers a free trial or a reduced price for educational use. All are excellent, and each has its advantages and disadvantages. While this is by no means a complete list, the most popular among these include:
- Agisoft Metashape (Standard and Pro)
- Autodesk Recap Photo (Part of Recap Pro)
- RealityCapture (Capturing Reality)
- 3DFlow Zephyr (in addition to the free version above, they offer a light version and a full version)
- Bentley ContextCapture
- Pix4D Mapper (specifically for drone mapping)
- Trimble Inpho (dedicated to a geospatial use)
- Photometrix iWitnessPRO (close-range and aerial photogrammetry; best known for accident reconstruction and crime scene forensics)
Pre and Post-Processing Software
Prior to loading into the photogrammetry software, the images may well need to undergo further processing. This may include conversion from RAW formats, correction of highlights and shadows, noise removal, color de-fringing, etc. All such processing will need to be performed on the entire set of images, ranging from tens to thousands. Therefore, it is essential to perform this task as a batch process.
A popular software for this purpose (though by no means the only one) is Adobe Lightroom.
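Where a scripted pipeline is preferred over a GUI tool, the batch conversion can also be done with a short script; a minimal sketch using the open-source Python libraries rawpy and imageio (folder names and the RAW extension here are assumptions for illustration):

import pathlib
import rawpy               # LibRaw bindings for reading camera RAW files
import imageio.v3 as iio

src = pathlib.Path("raw_captures")
dst = pathlib.Path("processed")
dst.mkdir(exist_ok=True)

for raw_file in sorted(src.glob("*.NEF")):   # or *.CR3, *.ARW, ...
    with rawpy.imread(str(raw_file)) as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,    # identical white balance across the set
            no_auto_bright=True,   # identical exposure handling for every frame
            output_bps=16,         # 16 bits per channel for later adjustment
        )
    iio.imwrite(dst / (raw_file.stem + ".tif"), rgb)

The key point is that every image receives identical settings, so the set remains photometrically consistent.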
It is important to consider that most photogrammetry models require some extent of post processing to ready them for final use. These include:
- Decimation: reducing the number of polygons to a size convenient for print or display (see the sketch after this list).
- Smoothing: Most photogrammetry software has difficulty to some extent when dealing with the reproduction of flat and smooth surfaces. Post processing can aid in smoothing these areas.
- Watertight preparation: Essential for 3D printing.
- UV texture map retouching: This may be as simple as the removal of unwanted markers, and as complex as adding or correcting defects in the photographic texture maps.
- Preview and inspection: Once the final model is ready it is essential to preview and inspect it carefully. Defects may be discovered and need to be fixed. Measurements may need to be made, and other types of analysis performed.
- Presentation: Finally, the model will need to be presented, whether in physical printed form or (more often) as a digital display. These may be offline or online, adapted for mobile or fixed computer display, etc.
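As a sketch of the decimation step mentioned in the list above, using the open-source Open3D library (the file names and the 100,000-triangle target are arbitrary illustrative choices):

import open3d as o3d

# Reduce a photogrammetry mesh to a target triangle budget.
mesh = o3d.io.read_triangle_mesh("model_hires.ply")
print(f"before: {len(mesh.triangles)} triangles")

simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
simplified.compute_vertex_normals()   # recompute normals for clean shading
o3d.io.write_triangle_mesh("model_100k.ply", simplified)
print(f"after: {len(simplified.triangles)} triangles")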
Photogrammetry Data
The essential ingredient in the data capture process is good, clear, and sharp images. Or, to put it another way: GIGO (garbage in, garbage out). The major detrimental factors to successful capture include:
Lack of focus - As the photogrammetric process is based upon identification of multiple small and clearly defined points of interest across images, blurred images prevent the processing software from identifying such points.
Blurred images can occur either through lack of accurate focus, lack of depth of field, camera movement, or even physical obstruction such as dirt or fingerprints on the lens.
Object movement - even if our photography is clean and sharp, movement of the object can result in the same problems of blurring.
Solution - For these reasons it is essential to apply professional standards of photography, including quality of camera and lens, use of a tripod when possible, and adequate lighting to allow for short shutter speeds and sufficient depth of field.
Shiny, reflective or transparent subjects - These are notoriously difficult for photogrammetric imaging. The reason is that they do not actually have any defined points of interest for the software to relate to. Worse still, the points which the camera thinks it sees are actually reflections, like those in a mirror, and these shift with the camera from image to image, confusing the software.
Solution - In many cases, the use of a polarizing filter on the camera can cut the reflected light and reduce the confusion.
In more extreme cases, “cross polarization” involving polarizing both the light source and the camera lens, can dramatically improve the results. This method will apply obviously only to cases whereby we can control the light source, usually in a studio environment.
Thin lines, such as hair, wires, poles, etc. - Photogrammetry relies on placing as many points of interest as possible on the subject. Very thin elements by their nature make this extremely difficult. The lack of points of interest (poi) results in there being insufficient data to build up the point cloud in those areas.
Solution - Obviously, anything that can increase the poi in thin areas will improve results. This includes moving in as close as possible to them, using the sharpest lenses, large image sensors with a large pixel count and reducing camera noise by lowering the camera ISO.
Occlusion - photogrammetry, especially SfM, relies on comparing points of interest across several images. As the camera (or rotating subject) moves, parts of the subject closer to the camera can occlude those behind. What the camera doesn’t see it cannot reproduce. Therefore, the greater the occlusion in the image set, the greater will be the error and the less accurate the final model.
Solution - Take as many pictures as possible, from as many angles as possible. Remember: there is no such thing as too many images. Some may be superfluous, but more is always better than fewer.
Flat, smooth areas - Walls, ceilings, and other flat, textureless surfaces are a constant challenge, for the simple fact that they lack points of interest on which the point cloud construction process can focus.
Solution - In many cases and where possible, adding markers or stickers, and even "messing up the surface", can help immensely. In most cases, these information points will give the software something to latch on to. If the sole purpose is to reproduce just the surface shape, these will not present a problem. If, however, the photographic texture map is also necessary, careful post-processing in image-processing software can remove the offending items and restore the clean surface.
Repeating & symmetric patterns - SfM is based on the principle of identifying the same points of interest in sequential images. Fences, patterned walls, floors, and other surfaces with a repeating pattern can confuse the software into misidentifying different elements as being the same.
Solution - If and where possible, breaking the symmetry can help. Either by adding random objects on the wall or floor, or by the use of markers as above.
Flashing or moving lights - can confuse the software. Television screens, LCDs, car headlights, and other light sources which change position, shape, color, or intensity between the photographs, may corrupt the triangulation and point identification process.
Solution - Avoidance where and as much as possible.
Lens distortion - Very wide-angle and long focal length lenses both distort the image, either by exaggerating distance and perspective or by condensing and reducing it. While modern photogrammetry software can deal very well with slight distortions, exaggerated ones can lead to problems with stitching and poi identification.
Solution - Recommended focal lengths lie within 28 - 70 mm for DSLR cameras with a full-frame (35 mm) sensor. In addition, it is extremely important to shoot with cameras that record the lens metadata, as the software will read this and its algorithms will act on this information.
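A quick way to confirm that an image set carries this metadata is to inspect the EXIF tags; a sketch with the Pillow library (Pillow 9.4 or later; the file name is a placeholder):

from PIL import Image, ExifTags

img = Image.open("view_a.jpg")
exif = img.getexif()
exif_ifd = exif.get_ifd(ExifTags.IFD.Exif)   # lens and exposure tags live in the Exif IFD

print("Camera:", exif.get(ExifTags.Base.Model))
print("Lens:", exif_ifd.get(ExifTags.Base.LensModel))
print("Focal length:", exif_ifd.get(ExifTags.Base.FocalLength))
print("Aperture:", exif_ifd.get(ExifTags.Base.FNumber))
print("ISO:", exif_ifd.get(ExifTags.Base.ISOSpeedRatings))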
ISO and compression noise - As stated repeatedly, the SfM and photogrammetry process relies on clean and "true" information. The JPEG format is a lossy compression method: highly compressed JPEG images remove pixel information and replace it with compression blocks through a process of quantization. These blocks change from image to image and once again create false information.
Likewise, raising the ISO of the camera raises the sensitivity of the sensor to low light. While this can assist greatly in capturing otherwise undetectable areas in the image, this creates false pixel information in the form of noise.
Solution - Where possible it is always advisable to shoot in RAW format, which is devoid of compression and other camera-induced distortion. Likewise, it is recommended to keep the ISO as low as possible, ideally no higher than 200. If necessary, compensate with a longer shutter speed and tripod shooting.
Exposure - Under- or over-exposure in the photograph results in areas devoid of information. Lack of information results in a lack of poi and in poor point clouds or models with holes. Tricky and difficult shooting situations include high-contrast lighting, direct harsh sunlight or projectors, and subject matter with inherently high internal contrast.
Solution - Prevention is the best form of medicine.
Shoot in softest (shadowless) possible light.
Overcast cloudy days are ideal.
Light-tent for studio shooting.
Shoot in RAW format, then boost shadows and reduce highlights in post processing (such as Adobe Lightroom).
Baked Light - In nature, depth and texture identification are based on light and shadow. This is one of the most basic aspects of photography and its correct use is essential to the creation of a good 2D image. In 3D imaging however, the opposite is the case. It is essential to avoid misrepresenting the image by introducing light and shadow which are not part of the subject matter itself. If the 3D image is to be printed, the light and shade will be created as, when and where the object is placed. In 3D screen based representation the subject will be lit with software based lights. The existence of light and shadow on the photographic texture map is called “baked light” and confuses the external lighting.
Solution - As with the previous section (exposure), the solution lies in correct, shadowless lighting. However, this is not always possible, especially in outdoor situations where there is less control. In these cases, shooting in RAW and post-processing to boost shadows and reduce highlights are essential.
Furthermore, some software solutions exist today for a process named "delighting". Such software manually or semi-automatically identifies the highlight and shadow areas in the model and reduces them as far as possible.
Laser Scanning
Laser scanning is an automatic, active acquisition technique that uses laser light to record the 3D topographical and structural details of a target by generating a point cloud from the reflections of the laser beam directed at the object’s surface (Grussenmeyer et al. 2016). The scanner identifies the relationship between itself and the object by emitting and capturing the reflected laser signal, which is then processed to create a point cloud. The point cloud is a collection of points representing an object's surface in a Cartesian (x, y, z) coordinate system. This raw survey data includes information on reflection intensity. Color can be added to the points using imagery from the on-board camera or external photography during processing. More advanced, primarily airborne, instruments can also capture details about the range of reflections from a laser pulse, known as full waveform scanning (Historic England 2018).
Originally developed for space and defense applications, where it aided unmanned vehicles through computer vision, 3D laser scanning has since been adapted for use in fields such as archaeology and cultural heritage conservation. In these fields, it is widely used to document sites, buildings, statues, inscriptions, paintings, and other objects that exhibit intricate details or surfaces in relief. The advantage of 3D laser scanning is that it is non-invasive and non-destructive, making it ideal for documenting fragile or significant cultural heritage items. This technology not only captures 3D attributes with exceptional accuracy but also facilitates remote measurement and monitoring over time, allowing comparisons to detect changes.
The process of 3D laser scanning begins by directing the laser beam at the object’s surface. The reflected signal is captured, and the system calculates the distance between the scanner and the object based on the type of scanning method used. The scanner collects millions of points, creating a point cloud that represents the object’s geometry. This point cloud can then be processed to create a mesh and, if required, a textured 3D model.
Laser scanners typically use a conical field of view to capture surface measurements across large areas or objects. Internal and external tracking systems in the scanner continuously record its position relative to the object, ensuring consistent and accurate data collection. The data can be revisited over time to compare point clouds, making it an invaluable tool for monitoring wear, damage, or conservation efforts.
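As an illustration of such repeat-survey comparison, a short sketch with the open-source Open3D library (file names are placeholders, and the two scans are assumed to be already registered in a common coordinate system and scaled in meters):

import numpy as np
import open3d as o3d

before = o3d.io.read_point_cloud("scan_2020.ply")
after = o3d.io.read_point_cloud("scan_2024.ply")

# Nearest-neighbor distance from each new point to the old survey
dists = np.asarray(after.compute_point_cloud_distance(before))
print(f"mean deviation: {dists.mean() * 1000:.2f} mm, max: {dists.max() * 1000:.2f} mm")

Regions with large deviations can then be flagged for closer inspection.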
Scanning Methods
There are three primary methods used in 3D laser scanning:
- Triangulation: Commonly used in cultural heritage applications, this method projects a laser line or dot onto the target's surface and measures the angle of reflection relative to the scanner's known position. The triangulation method is highly accurate at close range and is typically used for detailed scanning of smaller objects or surfaces.
- Time-of-Flight (ToF): This technique measures the time it takes for the laser to travel to the object and back. It is more suitable for large-scale or long-range scanning, such as for buildings and archaeological sites, but is slightly less accurate than triangulation (see the range equations after this list).
- Phase-Shift: This method modulates the laser beam and calculates the phase difference between the emitted and reflected light. It is faster than time-of-flight scanning and offers high accuracy, making it useful for medium-range applications.
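For reference, the range relations behind the last two methods are simple (standard relations, worked here for illustration rather than taken from the cited sources). Time-of-flight recovers distance as d = c·Δt/2, where c ≈ 3×10⁸ m/s is the speed of light and Δt the measured round-trip time; a timing uncertainty of just 1 ns therefore corresponds to roughly 15 cm of range error, which is why ToF favors long range over fine detail. Phase-shift scanning modulates the beam at frequency f and measures the phase difference Δφ between emitted and returned light, giving d = c·Δφ/(4π·f); phase can be measured very precisely, but the measurement is only unambiguous within a range of c/(2f).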
Equipment
3D laser scanning requires specialized hardware that typically includes:
- Laser Scanner: The central piece of equipment, which emits and captures the laser signal.
- Examples include FARO Focus, Leica ScanStation, and Trimble TX series.
- Tripod or Mount: To stabilize the scanner and ensure precise, consistent measurements.
- Reflective Targets (optional): Used for calibrating and aligning multiple scans of large objects.
Most laser scanners have built-in GPS, inclinometer sensors, and digital compasses to track their position relative to the object being scanned.
Software
To process and analyze the data collected by a 3D laser scanner, specialized software is required to convert the point cloud into a usable model:
- Point Cloud Processing Software: Converts the raw point cloud data into a mesh or surface model.
- Examples include Autodesk Recap, Faro Scene, and Leica Cyclone.
- 3D Modeling Software: Used for further refining the mesh, adding textures, or analyzing measurements.
- Examples include MeshLab, Autodesk Maya, and Blender.
Some software options include features for aligning multiple scans, smoothing surfaces, and decimating polygons to reduce model size for easier processing.
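A minimal sketch of the point-cloud-to-mesh conversion described above, using the open-source Open3D library (file names and parameter values are illustrative assumptions, and real scan data usually needs registration and cleaning first):

import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.002)   # thin out redundant points (2 mm grid)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Poisson surface reconstruction turns the oriented points into a watertight mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)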
Considerations
Applications in Conservation
In cultural heritage conservation, 3D laser scanning is used to capture detailed records of:
- Archaeological sites: Large sites can be scanned to create accurate 3D maps for study and preservation.
- Historic buildings: The geometry of buildings, including architectural details, can be precisely recorded.
- Statues and monuments: Highly detailed scans of sculptures or monuments preserve their form for future reference and analysis.
- Inscriptions and reliefs: 2.5D surfaces such as carved inscriptions or bas-reliefs can be documented with high precision, revealing details that may not be easily visible to the naked eye.
- Paintings: While less common, 3D scanning can be used to document the surface texture and paint layers of flat or relief artworks.
Advantages
- High Accuracy: 3D laser scanning captures intricate details with a high level of precision.
- Non-Destructive: It is a completely non-invasive method, ensuring that no damage is done to the object during documentation.
- Repeatability: Scanning can be repeated to track changes in an object or structure over time.
- Versatile: Suitable for a wide range of surfaces and objects, from large buildings to small statues.
- Measurement: Facilitates detailed measurements, which can be used for restoration, reconstruction, or analysis.
Challenges
- Reflective or transparent surfaces can present difficulties for laser scanners, as they do not provide consistent or accurate reflections for the scanner to interpret.
- Large, complex objects may require multiple scans from different angles to fully capture all areas, increasing the time and effort required.
Optical Profiling
Optical profiling (profilometry) is an optical technique that enables precise, quantitative, and non-contact extraction of topographical data from a surface. This surface topography data can be used to study surface roughness and structure [1]. There are various optical methods to achieve optical profiling.
History of Optical Profiling
The digital microscope was first invented in 1986 in Tokyo, Japan. A stepper motor was later introduced to the digital microscope to enable scanning of the focal plane [6] and thus the creation of 3D measurements.
Beginning in the early 2000s, various groups around the world have demonstrated that data collected through an Optical Coherence Tomography (OCT) scanner can lead to new non-invasive art conservation methods and viewing experiences. The data collected by OCT include a painting's 3D surface topography at micrometer resolution, layer structure data beneath the painting's surface, and volumetric data of the painting that can be used for layer analysis [5][8][9]. One application of OCT scanning is to study the thickness and structure of the varnish layers of a painting, which can be used for real-time monitoring of the ablation, melting and evaporation, or exfoliation of the varnish layer [10]. Due to its cross-sectional imaging ability, other applications can be facilitated: revealing hidden alterations such as retouching and overpainting [8]; characterization of varnish [11]; punchwork and underdrawings in panel paintings [12]; and brush strokes, surface craquelure, paint losses, and restorations [13]. Conservators have also used OCT to collect surface and subsurface information on objects such as jade [14], wood [15], Egyptian faience [16], plastic sculptures, Limoges enamels [17], and tomb murals [9].
Starting in the 2000s, THz-TD systems have been used for the global mapping of the stratigraphy of an old-master painting [18], inspection of subsurface structures buried in historical plasters [19], enhancing structural features of lacquered screens, such as repaired areas, to shed light on the applied techniques [20], and monitoring of the conservation process of a mural [21][22].
Optical Profiling Hardware
a. 3D Digital Microscopy
A typical optical microscope focuses on a single plane at any given time. A 3D digital microscope utilizes a focus-variation technique to scan an object through multiple focal planes, and 3D images are obtained by compiling the in-focus data. This enables viewing the entire magnified object in focus. Both 3D data and color data are captured simultaneously, so the surface profile of the object can be recovered and measured [2].
b. Optical coherence tomography (OCT)
An OCT system is based on the concept of a Michelson-type interferometer. An infrared light source (840 nm to 1310 nm) with a wideband spectrum passes through a beam splitter, which splits the beam into a sample arm and a reference arm. In a time-domain OCT, the depth information is obtained by mechanically scanning the reference arm. In a spectral-domain OCT, the detector captures the spectral information of the interfered signal and uses a Fast Fourier Transform (FFT) algorithm to resolve the depth information. A typical OCT system can achieve a spatial resolution of ~1 µm [3].
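A toy numerical sketch of the spectral-domain principle (synthetic numbers chosen only for illustration): a single reflector modulates the recorded spectrum with a cosine whose frequency encodes its depth, and an FFT over wavenumber recovers that depth:

import numpy as np

n = 2048
k = np.linspace(7.0e6, 8.0e6, n)             # wavenumber sweep in 1/m (~840 nm band)
z = 150e-6                                   # a reflector 150 micrometers deep
spectrum = 1.0 + 0.5 * np.cos(2.0 * k * z)   # reference/sample interference pattern

a_scan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dk = k[1] - k[0]
depths = np.fft.rfftfreq(n, d=dk) * np.pi    # convert FFT bins to physical depth
print(f"reflector found at {depths[np.argmax(a_scan)] * 1e6:.0f} um")   # ~150 um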
c. Terahertz time-domain imaging (THz-TD)
An ultrashort pulsed laser is first sent to a beam splitter, which splits it into a pump beam and a probe beam. The pump beam is used to generate broadband THz radiation. This radiation is guided through the sample and thus carries the structural information of the sample. The probe beam undergoes an adjustment in path length using an optical delay line, enabling gated detection of the THz signal from the sample. Both the amplitude and phase information of the frequency components are measured to generate the structural information of the sample [4].
Optical Profiling Software
Most data taken from 3D digital microscopes, OCT, and THz-TD systems are in 3D point cloud format. Therefore, software that converts point cloud data into a mesh or 3D model is recommended.
Case Studies
3D Microscopy
See the 'Girl with a Pearl Earring' painting in 10-gigapixel detail
Optical coherence tomography (OCT)
Terahertz time-domain imaging (THz-TD)
Structured Light Scanning
Structured Light Scanning (SLS) is a method of close-range topographical surface modeling of 3D objects using a 2D camera and a projected light pattern. The 3D object distorts the projected image of the pattern, and the camera measures the relative distance of each point in the resulting image, rendering a 3D model of the object.
History of Structured Light Scanning
Structured light imaging techniques were developed in the 1970s, when researchers visualized the contour lines of 3D objects by illuminating them through special masks [1]. Improvements in camera resolution and computational power enabled the proliferation of SLS. Over the years, researchers have used SLS to image the 3D surface of the human body [2][3] and for other industrial applications [4].
Structured Light Scanning Hardware
An SLS system typically includes a projector and a camera. The projector projects "structured light", that is, specially designed 2D spatially varying intensity light patterns, onto the object. Depending on the type of system, the spatial pattern can be black-and-white, greyscale, or color, arranged as stripes or gridlines. A camera captures a 2D image of the scene containing the object and the projected light pattern. The 3D surface shape of the object is calculated from the distortion of the projected structured light.
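As one concrete example of how shape is decoded from pattern distortion, the classic three-step phase-shifting scheme recovers the fringe phase at every pixel from three captures of a sinusoidal pattern shifted by 120°. A synthetic sketch in Python (not tied to any particular commercial system):

import numpy as np

def fringe_phase(i1, i2, i3):
    # Patterns shifted by -120, 0, +120 degrees give the wrapped phase
    # phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3).
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic demonstration on one scanline
x = np.linspace(0.0, 4.0 * np.pi, 640)             # true fringe phase
i1 = 0.5 + 0.4 * np.cos(x - 2.0 * np.pi / 3.0)
i2 = 0.5 + 0.4 * np.cos(x)
i3 = 0.5 + 0.4 * np.cos(x + 2.0 * np.pi / 3.0)
wrapped = fringe_phase(i1, i2, i3)                 # equals x wrapped to (-pi, pi]

After phase unwrapping, each pixel's phase identifies its position in the projected pattern, and triangulation between projector and camera converts it to depth.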
There are three aspects to evaluating the performance of an SLS system.
1. Accuracy: the maximum deviation of the measured value from the actual dimension of the 3D object.
2. Resolution: the smallest portion of the object surface that a 3D imaging system can resolve.
3. Speed: the system speed can be affected by many factors, such as the frame rate, a single-shot system or sequential shot, and the system’s computational speed.
Other factors such as field of view, depth of field, and standoff distance should also be considered [4].
At present, most handheld SLS systems have a 3D resolution of around 0.2 mm.
Structured Light Scanning Software
Most commercial SLS systems provide software to process the 3D data. The main difference is that some systems are able to process both geometry and texture data, meaning that the output will be a textured 3D model, while others can process only geometry data, meaning that the output 3D model contains no texture data.
Output 3D formats: OBJ, PLY, WRL, STL, AOP, ASC, PTX, E57, XYZRGB
Computer systems: since processing the 3D data requires substantial computational power, most SLS systems require an Intel Core i7 or i9, 32+ GB RAM, and an NVIDIA GPU with 2+ GB VRAM and CUDA 6+ [5].
Case Studies
The Bacchus Conservation Project: a multidisciplinary team 3D scanned the North Carolina Museum of Art's Statue of Bacchus and the various other fragments that once were attached to it (https://ncartmuseum.org/bacchus_under_structured_light/).
Structured-light 3D scanning of exhibited historical clothing: historical costumes have been 3D scanned through SLS [6].
Periodical Conservation State Monitoring of Oil Paintings: SLS can be used to continuously monitor the state of oil paintings by 3D scanning the object periodically and comparing the resulting 3D models [7].
Additional links:
https://www.youtube.com/watch?v=3S3xLUXAgHw
https://www.nature.com/articles/s41566-021-00780-4
References
1. P. Benoit, E. Mathieu, “Real time contour line visualization of an object,” Optics Communications, 12, 175-180 (1974)
2. N. G. Durdle, J. Thayyoor and V. J. Raso, "An improved structured light technique for surface reconstruction of the human trunk," Conference Proceedings. IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No.98TH8341), 1998, pp. 874-877 vol.2, doi: 10.1109/CCECE.1998.685637.
3. S. M. Dunn, R. L. Keizer and J. Yu, "Measuring the area and volume of the human body with structured light," in IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 6, pp. 1350-1364, Nov.-Dec. 1989, doi: 10.1109/21.44059.
4. Jason Geng, "Structured-light 3D surface imaging: a tutorial," Adv. Opt. Photon. 3, 128-160 (2011)
5. https://www.artec3d.com/portable-3d-scanners/artec-eva#specifications
6. Montusiewicz, J., Miłosz, M., Kęsik, J. et al. Structured-light 3D scanning of exhibited historical clothing—a first-ever methodical trial and its results. Herit Sci 9, 74 (2021). https://doi.org/10.1186/s40494-021-00544-x
7. P. D. Badillo, V. A. Parfenov and D. S. Kuleshov, "3D Scanning for Periodical Conservation State Monitoring of Oil Paintings," 2022 Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), 2022, pp. 1098-1102, doi: 10.1109/ElConRus54750.2022.9755461.