{{3D computer graphics}}

A '''3D scanner''' is a device that analyzes a real-world object or environment to collect data on its shape and possibly its appearance (i.e. color). The collected data can then be used to construct digital, [[three dimensional model]]s.

Many different technologies can be used to build these 3D scanning devices; each technology comes with its own limitations, advantages and costs. Many limitations remain in the kinds of objects that can be digitized: optical technologies, for example, encounter many difficulties with shiny, mirroring or transparent objects. [[Industrial computed tomography scanning]], by contrast, can be used to construct digital 3D models of such objects, applying [[Non-destructive testing]].

Collected 3D data is useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games. Other common applications of this technology include [[industrial design]], [[orthotics]] and [[prosthetics]], [[reverse engineering]] and [[prototyping]], [[quality control]]/inspection and documentation of cultural artifacts.

== Functionality ==
[[File:PMS - 3D-skeniranje okostja brazdastega kita Lenore (Magelan, 2013-08-05).jpg|thumb|3D scanning of a [[fin whale]] skeleton in the [[Natural History Museum of Slovenia]] (August 2013)]]

The purpose of a 3D scanner is usually to create a [[point cloud]] of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called [[3D reconstruction|reconstruction]]). If color information is collected at each point, then the colors on the surface of the subject can also be determined.

3D scanners share several traits with cameras. Like cameras, they have a cone-like [[field of view]], and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects color information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three-dimensional position of each point in the picture to be identified.

For most situations, a single scan will not produce a complete model of the subject. Multiple scans, even hundreds, from many different directions are usually required to obtain information about all sides of the subject. These scans have to be brought into a common [[coordinate system|reference system]], a process that is usually called ''alignment'' or ''[[image registration|registration]]'', and then merged to create a complete model. This whole process, going from the single range map to the whole model, is usually known as the 3D scanning pipeline.<ref>{{cite journal |author=Fausto Bernardini, Holly E. Rushmeier |title=The 3D Model Acquisition Pipeline |journal=Comput. Graph. Forum |volume=21 |issue=2 |pages=149–172 |year=2002 |url=http://www1.cs.columbia.edu/~allen/PHOTOPAPERS/pipeline.fausto.pdf |format=pdf |doi=10.1111/1467-8659.00574}}</ref>
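
Registration of overlapping scans is commonly carried out with variants of the iterative closest point (ICP) algorithm. The following is a minimal sketch, not taken from any particular scanner package, of the rigid-alignment step at the core of ICP: given corresponding point pairs from two scans, it estimates the rotation and translation that best map one onto the other using the Kabsch/SVD method. All names and values are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t so that R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of corresponding points from two scans.
    Classic Kabsch least-squares fit; returns (R, t).
    """
    src_c = src - src.mean(axis=0)          # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy usage: a scan rotated by 30 degrees about Z and shifted is realigned.
rng = np.random.default_rng(0)
scan_a = rng.uniform(-1, 1, size=(500, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
scan_b = scan_a @ R_true.T + np.array([0.2, -0.1, 0.5])
R, t = rigid_align(scan_a, scan_b)
merged = scan_a @ R.T + t                   # scan_a expressed in scan_b's frame
print(np.allclose(merged, scan_b, atol=1e-6))
</syntaxhighlight>

In a full ICP loop the correspondences themselves are re-estimated on every iteration (typically by nearest-neighbour search) and the fit is repeated until the alignment converges.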
== Technology ==
There are a variety of technologies for digitally acquiring the shape of a 3D object. A well established classification<ref>{{cite journal |author=Brian Curless |title=From Range Scans to 3D Models |journal=ACM SIGGRAPH Computer Graphics |volume=33 |issue=4 |pages=38–41 |doi=10.1145/345370.345399 |date=November 2000}}</ref> divides them into two types: contact and non-contact 3D scanners. Non-contact 3D scanners can be further divided into two main categories, active scanners and passive scanners. There are a variety of technologies that fall under each of these categories.

=== Contact ===
[[File:9.12.17 Coordinate measuring machine.png|thumb|right|240px|A coordinate measuring machine with rigid perpendicular arms.]]

Contact 3D scanners probe the subject through physical touch, while the object is in contact with or resting on a [[Flatness (manufacturing)|precision flat]] [[surface plate]], ground and polished to a specific maximum of surface roughness. Where the object to be scanned is not flat or cannot rest stably on a flat surface, it is supported and held firmly in place by a [[Fixture (tool)|fixture]].

The scanner mechanism may have three different forms:

* A carriage system with rigid arms held tightly in perpendicular relationship, with each axis gliding along a track. Such systems work best with flat profile shapes or simple convex curved surfaces.
* An articulated arm with rigid bones and high-precision angular sensors. The location of the end of the arm is computed from the wrist rotation angle and the hinge angle of each joint. This arrangement is ideal for probing into crevices and interior spaces with a small mouth opening.
* A combination of both methods, such as an articulated arm suspended from a traveling carriage, for mapping large objects with interior cavities or overlapping surfaces.

A '''CMM''' ([[coordinate measuring machine]]) is an example of a contact 3D scanner. It is used mostly in manufacturing and can be very precise. The disadvantage of CMMs, though, is that they require contact with the object being scanned. Thus, the act of scanning the object might modify or damage it. This fact is very significant when scanning delicate or valuable objects such as historical artifacts. The other disadvantage of CMMs is that they are relatively slow compared to the other scanning methods. Physically moving the arm that the probe is mounted on can be very slow, and the fastest CMMs can only operate at a few hundred hertz. In contrast, an optical system like a laser scanner can operate from 10 to 500 kHz.

Other examples are the hand-driven touch probes used to digitize clay models in the computer animation industry.

=== Non-contact active ===
Active scanners emit some kind of radiation or light and detect its reflection, or the radiation passing through the object, in order to probe an object or environment. Possible types of emissions used include light, [[Non-Contact Ultrasound|ultrasound]] or x-ray.

==== Time-of-flight ====
[[File:Lidar P1270901.jpg|thumb|right|240px|This [[lidar]] scanner may be used to scan buildings, rock formations, etc., to produce a 3D model. The lidar can aim its laser beam in a wide range: its head rotates horizontally, a mirror flips vertically. The laser beam is used to measure the distance to the first object on its path.]]

The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight [[laser rangefinder]]. The laser rangefinder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light, and the amount of time before the reflected light is seen by a detector is measured. Since the [[speed of light]] <math>c</math> is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. If <math>t</math> is the round-trip time, then the distance is equal to <math> \textstyle c \! \cdot \! t / 2</math>. The accuracy of a time-of-flight 3D laser scanner depends on how precisely the time <math>t</math> can be measured: approximately 3.3 [[picosecond]]s is the time taken for light to travel 1 millimeter.

The laser rangefinder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser rangefinder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000–100,000 points every second.

Time-of-flight devices are also available in a 2D configuration. This is referred to as a [[time-of-flight camera]].
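
The relation <math>\textstyle d = c \cdot t / 2</math> and the resulting timing requirement can be illustrated with a short calculation (a minimal, self-contained sketch; the numbers are generic and not those of any specific scanner):

<syntaxhighlight lang="python">
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(t_seconds):
    """Distance to the surface for a measured round-trip time."""
    return C * t_seconds / 2.0

# A pulse that returns after 66.7 nanoseconds corresponds to roughly 10 m.
print(distance_from_round_trip(66.7e-9))          # ~10.0 m

# Timing jitter maps directly to range error: 3.3 ps of round-trip timing
# uncertainty is about 0.5 mm of range uncertainty.
print(distance_from_round_trip(3.3e-12) * 1000)   # ~0.49 mm
</syntaxhighlight>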
==== Triangulation ====
[[File:Laserprofilometer EN.svg|thumb|right|240px|Principle of a laser triangulation sensor. Two object positions are shown.]]

Triangulation-based 3D laser scanners are also active scanners that use laser light to probe the environment. In contrast to a time-of-flight 3D laser scanner, a triangulation laser scanner shines a laser on the subject and uses a camera to look for the location of the laser dot. Depending on how far away the laser strikes a surface, the laser dot appears at different places in the camera's field of view. This technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle. The length of one side of the triangle, the distance between the camera and the laser emitter, is known. The angle of the laser emitter corner is also known. The angle of the camera corner can be determined by looking at the location of the laser dot in the camera's field of view. These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle. In most cases a laser stripe, instead of a single laser dot, is swept across the object to speed up the acquisition process. The [[National Research Council of Canada]] was among the first institutes to develop triangulation-based laser scanning technology, in 1978.<ref>{{cite book |author=Roy Mayer |title=Scientific Canadian: Invention and Innovation From Canada's National Research Council |publisher=Raincoast Books |location=Vancouver |year=1999 |isbn=1-55192-266-5 |oclc=41347212 }}</ref>
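
The geometry described above reduces to a few lines of trigonometry. The sketch below is illustrative only (the planar, two-dimensional setup and the variable names are simplifying assumptions): it locates the laser spot from the known baseline, the known laser angle and the camera angle, using the law of sines.

<syntaxhighlight lang="python">
import math

def laser_spot_position(baseline_m, laser_angle_rad, camera_angle_rad):
    """Locate the laser spot in the plane of the scanner.

    The camera sits at the origin, the laser emitter at (baseline_m, 0).
    Both angles are measured from the camera-emitter baseline towards the spot.
    Returns the (x, y) coordinates of the spot; y is roughly the "depth".
    """
    apex = math.pi - laser_angle_rad - camera_angle_rad   # angle at the laser spot
    cam_to_spot = baseline_m * math.sin(laser_angle_rad) / math.sin(apex)  # law of sines
    return (cam_to_spot * math.cos(camera_angle_rad),
            cam_to_spot * math.sin(camera_angle_rad))

# Example: 10 cm baseline, laser fixed at 75 degrees, camera angle of 80 degrees.
print(laser_spot_position(0.10, math.radians(75), math.radians(80)))
</syntaxhighlight>

In a real scanner the camera angle is not given directly but is recovered from the calibrated pixel position of the dot on the sensor.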
==== Strengths and weaknesses ====
''Time-of-flight'' and ''triangulation'' range finders each have strengths and weaknesses that make them suitable for different situations. The advantage of ''time-of-flight'' range finders is that they are capable of operating over very long distances, on the order of kilometers. These scanners are thus suitable for scanning large structures like buildings or geographic features. The disadvantage of ''time-of-flight'' range finders is their accuracy. Due to the high speed of light, timing the round trip is difficult and the accuracy of the distance measurement is relatively low, on the order of millimeters.

''Triangulation'' range finders are exactly the opposite. They have a limited range of some meters, but their accuracy is relatively high, on the order of tens of micrometers.

The accuracy of ''time-of-flight'' scanners can be lost when the laser hits the edge of an object, because the information that is sent back to the scanner comes from two different locations for one laser pulse. The coordinate relative to the scanner's position for a point that has hit the edge of an object will be calculated based on an average, and will therefore put the point in the wrong place. When using a high resolution scan on an object, the chance of the beam hitting an edge is increased, and the resulting data will show noise just behind the edges of the object. Scanners with a smaller beam width will help to solve this problem but will be limited by range, as the beam width will increase over distance. Software can also help, by determining that the first object to be hit by the laser beam should cancel out the second.

At a rate of 10,000 sample points per second, low resolution scans can take less than a second, but high resolution scans, requiring millions of samples, can take minutes for some ''time-of-flight'' scanners. The problem this creates is distortion from motion. Since each point is sampled at a different time, any motion in the subject or the scanner will distort the collected data. Thus, it is usually necessary to mount both the subject and the scanner on stable platforms and minimize vibration. Using these scanners to scan objects in motion is very difficult.

Recently, there has been research on compensating for distortion from small amounts of vibration<ref>{{cite conference |author=François Blais, Michel Picard, Guy Godin |title=Accurate 3D acquisition of freely moving objects |booktitle=2nd International Symposium on 3D Data Processing, Visualization, and Transmission, 3DPVT 2004, Thessaloniki, Greece |pages=422–9 |publisher=IEEE Computer Society |date=6–9 September 2004 |location=Los Alamitos, CA |isbn=0-7695-2223-8 }}</ref> and for distortions due to motion and/or rotation.<ref>{{cite journal |author=Salil Goel, Bharat Lohani |title=A Motion Correction Technique for Laser Scanning of Moving Objects |journal=IEEE Geoscience and Remote Sensing Letters |pages=225–228 |year=2014 |url=http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6522133}}</ref>

When scanning in one position for any length of time, slight movement can occur in the scanner position due to changes in temperature. If the scanner is set on a tripod and there is strong sunlight on one side of the scanner, then that side of the tripod will expand and slowly distort the scan data from one side to the other. Some laser scanners have level compensators built into them to counteract any movement of the scanner during the scan process.

==== Conoscopic holography ====
In a [[Conoscopy|conoscopic]] system, a laser beam is projected onto the surface and the immediate reflection along the same ray-path is put through a conoscopic crystal and projected onto a CCD. The result is a [[diffraction pattern]] that can be [[frequency analysis|frequency analyzed]] to determine the distance to the measured surface. The main advantage of conoscopic holography is that only a single ray-path is needed for measuring, giving an opportunity to measure, for instance, the depth of a finely drilled hole.

==== Hand-held laser scanners ====
Hand-held laser scanners create a 3D image through the triangulation mechanism described above: a laser dot or line is projected onto an object from a hand-held device and a sensor (typically a [[charge-coupled device]] or [[position sensitive device]]) measures the distance to the surface. Data is collected in relation to an internal coordinate system, and therefore to collect data while the scanner is in motion the position of the scanner must be determined. The position can be determined by the scanner using reference features on the surface being scanned (typically adhesive reflective tabs, although natural features have also been used in research work<ref>{{cite conference |author= K. H. Strobl, E. Mair, T. Bodenmüller, S. Kielhöfer, W. Sepp, M. Suppa, D. Burschka, G. Hirzinger|title=The Self-Referenced DLR 3D-Modeler |booktitle=Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA |pages=21–28 |year=2009 |url=http://www.robotic.dlr.de/fileadmin/robotic/stroblk/publications/strobl_2009iros.pdf |format=PDF}}</ref><ref>{{cite conference |author= K. H. Strobl, E. Mair, G. Hirzinger|title=Image-Based Pose Estimation for 3-D Modeling in Rapid, Hand-Held Motion |booktitle=Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China |pages=2593–2600 |year=2011 |url=http://www.robotic.dlr.de/fileadmin/robotic/stroblk/publications/strobl_2011icra.pdf |format=PDF}}</ref>) or by using an external tracking method. External tracking often takes the form of a [[laser tracker]] (to provide the sensor position) with an integrated camera (to determine the orientation of the scanner), or a [[photogrammetric]] solution using three or more cameras providing the complete [[six degrees of freedom]] of the scanner. Both techniques tend to use [[infrared]] [[Light-emitting diode]]s attached to the scanner, which are seen by the camera(s) through filters providing resilience to ambient lighting.

Data is collected by a computer and recorded as data points within [[three-dimensional space]]; with processing, this can be converted into a triangulated mesh and then a [[computer-aided design]] model, often as [[Nonuniform rational B-spline]] surfaces. Hand-held laser scanners can combine this data with passive, visible-light sensors (which capture surface textures and colors) to build (or "[[reverse engineer]]") a full 3D model.
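
Merging data from a moving hand-held scanner amounts to transforming each measured point from the scanner's internal coordinate system into a fixed world frame using the tracked pose. A minimal sketch of that bookkeeping follows (the pose representation as a 3×3 rotation matrix plus a translation vector, and the example values, are assumptions for illustration; real systems may use quaternions and additional calibration terms):

<syntaxhighlight lang="python">
import numpy as np

def to_world(points_scanner, rotation, translation):
    """Transform points from the scanner's internal frame into a fixed world frame."""
    return points_scanner @ rotation.T + translation

# Two illustrative sweeps: surface points seen from two different tracked poses.
identity = np.eye(3)
yaw_90 = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
sweeps = [
    (np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]]), identity, np.zeros(3)),
    (np.array([[0.0, 0.0, 1.2]]), yaw_90, np.array([0.5, 0.0, 0.0])),
]

# Accumulate everything into one world-frame point cloud.
world_cloud = np.vstack([to_world(p, R, t) for p, R, t in sweeps])
print(world_cloud)
</syntaxhighlight>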
==== Structured light ====
{{Main|Structured-light 3D scanner}}

Structured-light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an [[LCD projector]] or other stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view.

Structured-light scanning is still a very active area of research, with many research papers published each year. Perfect maps have also been proven useful as structured light patterns that solve the correspondence problem and allow for error detection and error correction (see [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=667888&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel4%2F34%2F14695%2F00667888.pdf%3Farnumber%3D667888 Morano, R., et al., "Structured Light Using Pseudorandom Codes"], ''IEEE Transactions on Pattern Analysis and Machine Intelligence'').

The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion, and some existing systems are capable of scanning moving objects in real time. For example, VisionMaster produces a 3D scanning system with a 5-megapixel camera, so that 5 million data points are acquired in every frame.

A real-time scanner using digital fringe projection and a phase-shifting technique (a variant of structured light methods) was developed to capture, reconstruct, and render high-density details of dynamically deformable objects (such as facial expressions) at 40 frames per second.<ref>{{cite journal |author=Song Zhang, Peisen Huang |title=High-resolution, real-time 3-D shape measurement |journal=Optical Engineering |pages=123601 |year=2006 |url=http://spiedigitallibrary.org/oe/resource/1/opegar/v45/i12/p123601_s1}}</ref> More recently, another scanner was developed to which different patterns can be applied; its frame rate for capturing and data processing reaches 120 frames per second, and it can also scan isolated surfaces, for example two moving hands.<ref>{{cite journal |author=Kai Liu, Yongchang Wang, Daniel L. Lau, Qi Hao, Laurence G. Hassebrook |title=Dual-frequency pattern scheme for high-speed 3-D shape measurement |journal=Optics Express |volume=18 |issue=5|pages=5229–5244|year=2010 |url=http://www.opticsinfobase.org/view_article.cfm?gotourl=http%3A%2F%2Fwww%2Eopticsinfobase%2Eorg%2FDirectPDFAccess%2FCD9CB9B8%2DBDB9%2D137E%2DCE20F89EB19DA72A%5F196198%2Epdf%3Fda%3D1%26id%3D196198%26seq%3D0%26mobile%3Dno&org= |format=PDF |pmid=20389536 |doi=10.1364/OE.18.005229}}</ref> By utilizing the binary defocusing technique, speed breakthroughs have been made that could reach hundreds<ref>{{cite journal |author=Song Zhang, Daniel van der Weide, and James H. Oliver |title=Superfast phase-shifting method for 3-D shape measurement|journal=Optics Express |pages=9684–9689 |year=2010 |url=http://www.opticsinfobase.org/abstract.cfm?uri=oe-18-9-9684}}</ref> to thousands of frames per second.<ref>{{cite journal |author=Yajun Wang and Song Zhang|title=Superfast multifrequency phase-shifting technique with optimal pulse width modulation|journal=Optics Express |pages=9684–9689 |year=2011 |url=http://www.opticsinfobase.org/abstract.cfm?uri=oe-19-6-5149}}</ref>
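
As an illustration of how fringe-projection phase-shifting scanners turn camera images into geometry, the sketch below recovers the wrapped phase of a sinusoidal fringe pattern from three camera images shifted by 120°. This is the generic three-step phase-shifting formula, shown as a minimal example rather than the algorithm of any particular system; the wrapped phase must subsequently be unwrapped and converted to depth via the system calibration.

<syntaxhighlight lang="python">
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting with 120-degree shifts.

    i1, i2, i3: camera images (2D arrays) of the same sinusoidal fringe pattern
    shifted by -120, 0 and +120 degrees respectively.
    Returns the wrapped phase in (-pi, pi] for every pixel.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic test: build three shifted fringe images from a known phase map.
x = np.linspace(0.0, 4.0 * np.pi, 640)
true_phase = np.tile(x, (480, 1))                     # phase ramp across the image
shifts = (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)
images = [0.5 + 0.5 * np.cos(true_phase + s) for s in shifts]
phi = wrapped_phase(*images)
print(phi.shape, phi.min(), phi.max())                # wrapped into (-pi, pi]
</syntaxhighlight>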
==== Modulated light ====
Modulated light 3D scanners shine a continually changing light at the subject. Usually the light source simply cycles its amplitude in a [[sinusoidal]] pattern. A camera detects the reflected light, and the amount by which the pattern is shifted determines the distance the light traveled. Modulated light also allows the scanner to ignore light from sources other than its own laser, so there is no interference.
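
For an amplitude-modulated source the measured quantity is the phase shift between the emitted and the detected sinusoid, and the distance follows from the modulation frequency. The sketch below shows that relation (a generic illustration; the chosen frequency and the neglect of phase ambiguity beyond one modulation period are simplifying assumptions):

<syntaxhighlight lang="python">
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_shift_rad, mod_freq_hz):
    """Distance from the phase shift of an amplitude-modulated signal.

    Valid only within the unambiguous range c / (2 * mod_freq_hz); beyond that
    the phase wraps around and the distance becomes ambiguous.
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

f_mod = 10e6                                   # 10 MHz modulation (illustrative)
print(C / (2.0 * f_mod))                       # unambiguous range: ~15 m
print(distance_from_phase(math.pi / 2, f_mod)) # quarter-cycle shift: ~3.75 m
</syntaxhighlight>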
==== Volumetric techniques ====
===== Medical =====
[[Computed tomography]] (CT) is a medical imaging method which generates a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images. Similarly, [[magnetic resonance imaging]] (MRI) is another medical imaging technique that provides much greater contrast between the different soft tissues of the body than CT does, making it especially useful in neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. These techniques produce a [[voxel|discrete 3D volumetric representation]] that can be directly [[Volume rendering|visualized]], manipulated or converted to a traditional 3D surface by means of [[marching cubes|isosurface extraction algorithms]].
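
A common way to turn such a voxel volume into a surface mesh is the marching cubes family of isosurface algorithms. The following is a minimal sketch using the scikit-image implementation (assumed to be available as skimage.measure.marching_cubes in recent versions); the synthetic sphere volume and the chosen iso-level are illustrative, not part of any scanner's pipeline.

<syntaxhighlight lang="python">
import numpy as np
from skimage import measure  # scikit-image provides a marching cubes implementation

# Synthetic volume: a sphere of "density" 1.0 inside a 64x64x64 grid of zeros.
grid = np.indices((64, 64, 64)).astype(float)
radius = np.sqrt(((grid - 32.0) ** 2).sum(axis=0))
volume = (radius < 20.0).astype(float)

# Extract the isosurface at the 0.5 threshold as vertices and triangular faces.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)   # (N, 3) vertices and (M, 3) triangles
</syntaxhighlight>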
===== Industrial =====
Although most common in medicine, computed tomography, [[Industrial CT Scanning|microtomography]] and MRI are also used in other fields for acquiring a digital representation of an object and its interior, such as nondestructive materials testing, [[reverse engineering]], or the study of biological and paleontological specimens.

=== Non-contact passive ===
Passive scanners do not emit any kind of radiation themselves, but instead rely on detecting reflected ambient radiation. Most scanners of this type detect visible light because it is readily available ambient radiation. Other types of radiation, such as infrared, could also be used. Passive methods can be very cheap, because in most cases they do not need particular hardware other than simple digital cameras.

* ''Stereoscopic'' systems usually employ two video cameras, slightly apart, looking at the same scene. By analyzing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images (a minimal depth-from-disparity sketch follows this list). This method is based on the same principles driving human [[stereoscopic vision]][http://www.cogs.susx.ac.uk/users/davidy/teachvision/vision5.html].
* ''[[Photometric Stereo|Photometric]]'' systems usually use a single camera, but take multiple images under varying lighting conditions. These techniques attempt to invert the image formation model in order to recover the surface orientation at each pixel.
* ''Silhouette'' techniques use outlines created from a sequence of photographs around a three-dimensional object against a well contrasted background. These [[silhouette]]s are extruded and intersected to form the [[visual hull]] approximation of the object. With these approaches some concavities of an object (like the interior of a bowl) cannot be detected.
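
For the stereoscopic case, the depth of a point follows from its disparity, the horizontal offset between its positions in the left and right images, given the focal length and the camera baseline. A minimal sketch of that relation, assuming an idealized rectified camera pair and illustrative numbers:

<syntaxhighlight lang="python">
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a scene point for an ideal rectified stereo pair.

    disparity_px:     horizontal shift of the point between left and right image.
    focal_length_px:  focal length expressed in pixels.
    baseline_m:       distance between the two camera centers.
    """
    if disparity_px <= 0:
        raise ValueError("expected a positive disparity for a point in front of both cameras")
    return focal_length_px * baseline_m / disparity_px

# A point shifted by 8 pixels, seen by cameras 12 cm apart with f = 800 px,
# lies about 12 m away; larger disparities mean closer points.
print(depth_from_disparity(8.0, 800.0, 0.12))   # ~12.0 m
print(depth_from_disparity(80.0, 800.0, 0.12))  # ~1.2 m
</syntaxhighlight>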
==== User assisted (image-based modeling) ====
{{expand section|date=October 2010}}

There are other methods that, based on the user-assisted detection and identification of some features and shapes on a set of different pictures of an object, are able to build an approximation of the object itself. This kind of technique is useful for building fast approximations of simply shaped objects like buildings. Various commercial packages are available, such as [[D-Sculptor]], [[iModeller]], [[Autodesk ImageModeler]], [[123DCatch]] and [[PhotoModeler]].

This sort of 3D scanning is based on the principles of [[photogrammetry]]. It is also somewhat similar in methodology to [[panoramic photography]], except that the photos are taken of one object in three-dimensional space in order to replicate it, instead of being taken from one point in space in order to replicate the surrounding environment.

== Reconstruction ==
=== From point clouds ===
The [[point cloud]]s produced by 3D scanners can be used directly for measurement and visualization in the architecture and construction world.

Most applications, however, instead use polygonal 3D models, [[NURBS]] surface models, or editable feature-based CAD models (aka [[Solid modeling|solid models]]).

* [[Polygon mesh]] models: In a polygonal representation of a shape, a curved surface is modeled as many small faceted flat surfaces (think of a sphere modeled as a disco ball). Polygon models, also called mesh models, are useful for visualization and for some [[Computer-aided manufacturing|CAM]] (i.e., machining), but are generally "heavy" (i.e., very large data sets) and are relatively un-editable in this form. Reconstruction to a polygonal model involves finding and connecting adjacent points with straight lines in order to create a continuous surface (a minimal sketch of this idea for gridded scan data appears at the end of this subsection). Many applications, both free and nonfree, are available for this purpose (e.g. [[MeshLab]], PointCab, kubit PointCloud for AutoCAD, JRC 3D Reconstructor, imagemodel, PolyWorks, Rapidform, [[Geomagic]], Imageware, [[Rhino 3D]], etc.).
* [[Freeform surface modelling|Surface models]]: The next level of sophistication in modeling involves using a quilt of ''curved'' surface patches to model the shape. These might be NURBS, T-Splines or other curved representations of curved topology. Using NURBS, the sphere is a true mathematical sphere. Some applications offer patch layout by hand, but the best in class offer both automated patch layout and manual layout. These patches have the advantage of being lighter and more manipulable when exported to CAD. Surface models are somewhat editable, but only in a sculptural sense of pushing and pulling to deform the surface. This representation lends itself well to modeling organic and artistic shapes. Providers of surface modelers include Rapidform, [[Geomagic]], [[Rhino 3D]], Maya, T-Splines, etc.
* [[Solid modeling|Solid CAD models]]: From an engineering/manufacturing perspective, the ultimate representation of a digitized shape is the editable, parametric CAD model. After all, CAD is the common "language" of industry to describe, edit and maintain the shape of the enterprise's assets. In CAD, the sphere is described by parametric features which are easily edited by changing a value (e.g., centerpoint and radius).

These CAD models describe not simply the envelope or shape of the object; they also embody the "design intent" (i.e., critical features and their relationship to other features). An example of design intent not evident in the shape alone might be a brake drum's lug bolts, which must be concentric with the hole in the center of the drum. This knowledge would drive the sequence and method of creating the CAD model; a designer with an awareness of this relationship would not design the lug bolts referenced to the outside diameter, but instead to the center. A modeler creating a CAD model will want to include both shape and design intent in the complete CAD model.

Vendors offer different approaches to getting to the parametric CAD model. Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD (e.g., [[Geomagic]], Imageware, [[Rhino 3D]]). Others use the scan data to create an editable and verifiable feature-based model that is imported into CAD with the full feature tree intact, yielding a complete, native CAD model that captures both shape and design intent (e.g. [[Geomagic]], Rapidform). Still other CAD applications are robust enough to manipulate limited points or polygon models within the CAD environment (e.g., [[CATIA]], [[AutoCAD]], [[Revit]]).
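
As referenced in the polygon mesh item above, the simplest reconstruction case is scan data that already lies on a regular grid (a range image), where adjacent samples can be connected directly into triangles. A minimal sketch, assuming a dense grid with no missing samples and illustrative pinhole intrinsics:

<syntaxhighlight lang="python">
import numpy as np

def grid_to_mesh(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Triangulate a dense range image into a mesh.

    depth: (H, W) array of depth values on a regular pixel grid.
    fx, fy, cx, cy: illustrative pinhole intrinsics used to back-project pixels.
    Returns (vertices, faces): (H*W, 3) points and (M, 3) vertex-index triples.
    """
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    v, u = np.mgrid[0:h, 0:w]
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    vertices = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1], idx[:-1, 1:]
    c, d = idx[1:, :-1], idx[1:, 1:]
    # Split every grid cell into two triangles.
    faces = np.concatenate([np.stack([a, b, c], axis=-1).reshape(-1, 3),
                            np.stack([b, d, c], axis=-1).reshape(-1, 3)])
    return vertices, faces

verts, faces = grid_to_mesh(np.full((120, 160), 2.0))   # a flat wall 2 m away
print(verts.shape, faces.shape)                         # (19200, 3) (37842, 3)
</syntaxhighlight>

Unstructured point clouds require more elaborate surface reconstruction, such as the Poisson or ball-pivoting algorithms implemented in tools like [[MeshLab]].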
=== From a set of 2D slices ===
[[File:CT Scan of Dale Mahalko's brain-skull.jpg|thumb|right|240px|3D reconstruction of the brain and eyeballs from CT scanned DICOM images. In this image, areas with the density of bone or air were made transparent, and the slices stacked up in an approximate free-space alignment. The outer ring of material around the brain consists of the soft tissues of skin and muscle on the outside of the skull. A black box encloses the slices to provide the black background. Since these are simply 2D images stacked up, when viewed on edge the slices disappear, since they have effectively zero thickness. Each DICOM scan represents about 5 mm of material averaged into a thin slice.]]

[[X-ray computed tomography|CT]], [[industrial CT scanning|industrial CT]], [[MRI]], or [[x-ray microtomography|micro-CT]] scanners do not produce point clouds but a set of 2D slices (each termed a "tomogram") which are then "stacked together" to produce a 3D representation. There are several ways to do this depending on the output required:

* [[Volume rendering]]: Different parts of an object usually have different threshold values or greyscale densities. From this, a three-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from various thresholds, allowing different colors to represent each component of the object. Volume rendering is usually only used for visualization of the scanned object.
* [[Segmentation (image processing)|Image segmentation]]: Where different structures have similar threshold/greyscale values, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation.
* [[Image-based meshing]]: When using 3D image data for computational analysis (e.g. CFD and FEA), simply segmenting the data and meshing from CAD can become time consuming, and virtually intractable for the complex topologies typical of image data. The solution is called image-based meshing, an automated process of generating an accurate and realistic geometrical description of the scan data.

== Applications ==
=== Material processing and production ===
{{Main|Laser scanning}}

''[[Laser scanning]]'' describes the general method of sampling or scanning a surface using [[laser]] technology. Several areas of application exist, differing mainly in the power of the lasers that are used and in the results of the scanning process. Low laser power is used when the scanned surface must not be influenced, e.g. when it only has to be digitized. [[Confocal]] or [[Three-dimensional space|3D]] laser scanning are methods for getting information about the scanned surface. Another low-power application is structured light projection systems used for solar cell flatness metrology, enabling stress calculation with throughput in excess of 2000 wafers per hour.<ref>{{cite journal |author=W. J. Walecki, F. Szondy, M. M. Hilali |title=Fast in-line surface topography metrology enabling stress calculation for solar cell manufacturing for throughput in excess of 2000 wafers per hour |journal=Meas. Sci. Technol. |volume=19 |issue=2 |pages=025302 |year=2008 |doi=10.1088/0957-0233/19/2/025302 }}</ref>

The laser power used for laser scanning equipment in industrial applications is typically less than 1 W, usually on the order of 200 mW or less.

=== Construction industry and civil engineering ===
* [[Robotic Control|Robotic control]]: e.g., a laser scanner may function as the "eye" of a robot.<ref>[http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V16-4JHMHY5-1&_user=4706269&_coverDate=06%2F30%2F2006&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1286558351&_rerunOrigin=google&_acct=C000064385&_version=1&_urlVersion=0&_userid=4706269&md5=fd1507fa46786cf3d2cb19252463f0f4 Motion control and data capturing for laser scanning with an industrial robot], Sören Larsson and J.A.P. Kjellander, Robotics and Autonomous Systems, Volume 54, Issue 6, 30 June 2006, Pages 453–460, {{doi|10.1016/j.robot.2006.02.002}}</ref><ref>[http://ceit.aut.ac.ir/~shiry/publications/Matthias-icmtpaper_fin.pdf Landmark detection by a rotary laser scanner for autonomous robot navigation in sewer pipes], Matthias Dorn et al., Proceedings of the ICMIT 2003, the second International Conference on Mechatronics and Information Technology, pp. 600–604, Jecheon, Korea, Dec. 2003</ref>
* As-built drawings of bridges, industrial plants, and monuments
* Documentation of historical sites
* Site modeling and layout
* Quality control
* Quantity surveys
* Freeway redesign
* Establishing a benchmark of pre-existing shape/state in order to detect structural changes resulting from exposure to extreme loadings such as earthquake, vessel/truck impact or fire
* Creating GIS (geographic information system) maps and [[Geomatics]]
* Subsurface laser scanning in mines and karst voids<ref>{{cite web|last=Murphy|first=Liam|title=Case Study: Old Mine Workings|url=http://subsurfacelaserscanning.com/portfolio_1/case-study-old-mine-workings/|work=Subsurface Laser Scanning Case Studies|publisher=Liam Murphy|accessdate=11 January 2012}}</ref>
* Forensic documentation<ref>http://www.leica-geosystems.us/forensic/</ref>

=== Benefits of 3D scanning ===
3D model scanning can benefit the design process by:

* Increasing effectiveness when working with complex parts and shapes
* Helping with the design of products to accommodate someone else's part
* Providing an up-to-date CAD model where existing CAD models are outdated
* Supporting the replacement of missing or older parts

=== Entertainment ===
3D scanners are used by the [[entertainment industry]] to create digital 3D models for [[movie]]s, [[video game]]s and leisure purposes. They are heavily utilized in [[virtual cinematography]]. In cases where a real-world equivalent of a model exists, it is much faster to scan the real-world object than to manually create a model using 3D modeling software. Frequently, artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer.

=== Reverse engineering ===
[[Reverse engineering]] of a mechanical component requires a precise digital model of the object to be reproduced. Rather than a set of points, a precise digital model can be represented by a [[polygon mesh]], a set of flat or curved [[NURBS]] surfaces, or, ideally for mechanical components, a CAD solid model. A 3D scanner can be used to digitize free-form or gradually changing shaped components as well as prismatic geometries, whereas a [[coordinate measuring machine]] is usually used only to determine simple dimensions of a highly prismatic model. These data points are then processed to create a usable digital model, usually using specialized reverse engineering software.

=== Cultural heritage ===
There have been many research projects undertaken via the scanning of historical sites and artifacts, both for documentation and for analysis purposes.

The combined use of [[3D scanning]] and [[3D printing]] technologies allows the replication of real objects without the use of traditional [[plaster cast]]ing techniques, which in many cases can be too [[wikt:invasive|invasive]] to be performed on precious or delicate cultural heritage artifacts.<ref>{{cite journal |author=Paolo Cignoni, Roberto Scopigno |title=Sampled 3D models for CH applications: A viable and enabling new medium or just a technological exercise?|journal=ACM Journal on Computing and Cultural Heritage |volume=1 | issue=1 |pages=1 |doi=10.1145/1367080.1367082| url=http://vcg.isti.cnr.it/Publications/2008/CS08/ | format=PDF |date=June 2008}}</ref> In one example, a [[gargoyle]] model was digitally acquired using a 3D scanner and the produced 3D data was processed using [[MeshLab]]; the resulting digital [[3D model]] was then used by a [[rapid prototyping]] machine to create a real resin replica of the original object.

==== Michelangelo ====
In 1999, two different research groups started scanning Michelangelo's statues. [[Stanford University]], with a group led by [[Marc Levoy]],<ref>{{cite conference |author=Marc Levoy, Kari Pulli, Brian Curless, Szymon Rusinkiewicz, David Koller, Lucas Pereira, Matt Ginzton, Sean Anderson, James Davis, Jeremy Ginsberg, Jonathan Shade, Duane Fulk |title=The Digital Michelangelo Project: 3D Scanning of Large Statues |booktitle=Proceedings of the 27th annual conference on Computer graphics and interactive techniques |pages=131–144 |year=2000 |url=http://graphics.stanford.edu/papers/dmich-sig00/ |format=PDF}}</ref> used a custom laser triangulation scanner built by [[Cyberware (company)|Cyberware]] to scan Michelangelo's statues in Florence, notably the [[David (Michelangelo)|David]], the Prigioni and the four statues in the Medici Chapel. The scans produced a data point density of one sample per 0.25 mm, detailed enough to see Michelangelo's chisel marks. These detailed scans produced a huge amount of data (up to 32 gigabytes), and processing the data from the scans took about five months. In approximately the same period, a research group from [[IBM]], led by H. Rushmeier and F. Bernardini, scanned the [[The Deposition (Michelangelo)|Pietà of Florence]], acquiring both geometric and color details. The digital model resulting from the Stanford scanning campaign was used extensively in the statue's subsequent restoration in 2004.<ref>{{cite book |author=Roberto Scopigno; Susanna Bracci; Falletti, Franca; Mauro Matteini |title=Exploring David. Diagnostic Tests and State of Conservation. |publisher=Gruppo Editoriale Giunti |year=2004 |isbn=88-09-03325-6 }}</ref>

==== Monticello ====
In 2002, David Luebke, et al. scanned Thomas Jefferson's Monticello.<ref>{{cite web |author=David Luebke, Christopher Lutz, Rui Wang, Cliff Woolley |title=Scanning Monticello |year=2002 |url=http://www.cs.virginia.edu/Monticello}}</ref> A commercial time-of-flight laser scanner, the DeltaSphere 3000, was used. The scanner data was later combined with color data from digital photographs to create the Virtual Monticello and Jefferson's Cabinet exhibits at the New Orleans Museum of Art in 2003. The Virtual Monticello exhibit simulated a window looking into Jefferson's library. The exhibit consisted of a rear projection display on a wall and a pair of stereo glasses for the viewer. The glasses, combined with polarized projectors, provided a 3D effect. Position tracking hardware on the glasses allowed the display to adapt as the viewer moved around, creating the illusion that the display was actually a hole in the wall looking into Jefferson's library. The Jefferson's Cabinet exhibit was a barrier stereogram (essentially a non-active hologram that appears different from different angles) of Jefferson's Cabinet.

==== Cuneiform tablets ====
In 2003, Subodh Kumar, et al. undertook the 3D scanning of ancient cuneiform tablets.<ref>{{cite conference |author=Subodh Kumar, Dean Snyder, Donald Duncan, Jonathan Cohen, Jerry Cooper |title=Digital Preservation of Ancient Cuneiform Tablets Using 3D-Scanning |booktitle=4th International Conference on 3-D Digital Imaging and Modeling : 3DIM 2003, Banff, Alberta, Canada |pages=326–333 |publisher=IEEE Computer Society |date=6–10 October 2003 |location=Los Alamitos, CA }}</ref> Again, a laser triangulation scanner was used. The tablets were scanned on a regular grid pattern at a resolution of {{convert|0.025|mm|abbr=on}}.

==== Kasubi Tombs ====
A 2009 [[CyArk]] 3D scanning project at Uganda's historic [[Kasubi Tombs]], a [[UNESCO World Heritage Site]], used a Leica HDS 4500 to produce detailed architectural models of Muzibu Azaala Mpanga, the main building of the complex and tomb of the [[Kabaka of Buganda|Kabakas]] (Kings) of Uganda. A fire on March 16, 2010, burned down much of the Muzibu Azaala Mpanga structure, and reconstruction work is likely to lean heavily upon the dataset produced by the 3D scan mission.<ref>{{cite news|author=Scott Cedarleaf (2010)|url=http://archive.cyark.org/royal-kasubi-tombs-destroyed-in-fire-blog|title=Royal Kasubi Tombs Destroyed in Fire|publisher=[[CyArk]] Blog}}</ref>

| ==== "Plastico di Roma antica" ====
| |
| | |
| In 2005, Gabriele Guidi, et al. scanned the "Plastico di Roma antica",<ref>{{cite conference |author=Gabriele Guidi, Laura Micoli, Michele Russo, Bernard Frischer, Monica De Simone, Alessandro Spinetti, Luca Carosso |title=3D digitization of a large model of imperial Rome |booktitle=5th international conference on 3-D digital imaging and modeling : 3DIM 2005, Ottawa, Ontario, Canada |pages=565–572 |publisher=IEEE Computer Society |date=13–16 June 2005 |location=Los Alamitos, CA |isbn=0-7695-2327-7 }}</ref> a model of Rome created in the last century. Neither the triangulation method, nor the time of flight method satisfied the requirements of this project because the item to be scanned was both large and contained small details. They found though, that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed. The modulated light scanner was supplemented by a triangulation scanner which was used to scan some parts of the model.
| |
| | |
==== Other projects ====
The 3D Encounters Project at the [[Petrie Museum of Egyptian Archaeology]] aims to use 3D laser scanning to create a high quality 3D image library of artefacts and to enable digital travelling exhibitions of fragile Egyptian artefacts. [[English Heritage]] has investigated the use of 3D laser scanning for a wide range of applications to gain archaeological and condition data, and the [[National Conservation Centre]] in Liverpool has also produced 3D laser scans on commission, including portable object and in situ scans of archaeological sites.<ref>{{cite journal |url=http://dx.doi.org/10.5334/jcms.1021201 |journal=Journal of Conservation and Museum Studies |title=Imaging Techniques in Conservation |year=2012 |pages=17–29 |publisher=[[Ubiquity Press]] |author=Payne, Emma Marie |doi=10.5334/jcms.1021201|accessdate=17 March 2013}}</ref>

=== Medical CAD/CAM ===
3D scanners are used to capture the 3D shape of a patient in [[orthotics]] and [[dentistry]], gradually supplanting tedious plaster casting. CAD/CAM software is then used to design and manufacture the [[orthosis]], [[prosthesis]] or [[dental implants]].

Many chairside dental CAD/CAM systems and dental laboratory CAD/CAM systems use 3D scanner technologies to capture the 3D surface of a dental preparation (either ''in vivo'' or ''in vitro''), in order to produce a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology (such as a CNC milling machine or 3D printer). The chairside systems are designed to facilitate the 3D scanning of a preparation ''in vivo'' and produce the restoration (such as a crown, onlay, inlay or veneer).

=== Quality assurance and industrial metrology ===
The digitalization of real-world objects is of vital importance in various application domains. This method is especially applied in industrial quality assurance to measure geometric dimensional accuracy. Industrial processes such as assembly are complex, highly automated and typically based on CAD (computer-aided design) data. The problem is that the same degree of automation is also required for quality assurance. It is, for example, a very complex task to assemble a modern car, since it consists of many parts that must fit together at the very end of the production line. The optimal performance of this process is guaranteed by quality assurance systems. In particular, the geometry of the metal parts must be checked in order to ensure that they have the correct dimensions, fit together and finally work reliably.

Within highly automated processes, the resulting geometric measures are transferred to machines that manufacture the desired objects. Due to mechanical uncertainties and abrasion, the result may differ from its digital nominal. In order to automatically capture and evaluate these deviations, the manufactured part must be digitized as well. For this purpose, 3D scanners are applied to generate point samples from the object's surface, which are finally compared against the nominal data.<ref>{{cite thesis |degree=PhD |title=Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners |author=Christian Teutsch |year=2007 }}</ref>

The process of comparing 3D data against a CAD model is referred to as CAD-compare, and can be a useful technique for applications such as determining wear patterns on molds and tooling, determining the accuracy of the final build, analyzing gap and flush, or analyzing highly complex sculpted surfaces. At present, laser triangulation scanners, structured light and contact scanning are the predominant technologies employed for industrial purposes, with contact scanning remaining the slowest, but overall the most accurate, option.
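
The core of such a CAD-compare step is measuring, for every scanned point, its deviation from the nominal surface. A minimal sketch is shown below; it approximates the nominal CAD surface by a densely sampled reference point set and uses a k-d tree for the nearest-neighbour query (the use of SciPy, the synthetic data and the tolerance value are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial import cKDTree

def deviations(scanned_points, nominal_points):
    """Distance from each scanned point to the nearest point of the nominal model.

    scanned_points: (N, 3) points measured by the 3D scanner.
    nominal_points: (M, 3) points densely sampled from the CAD surface.
    """
    tree = cKDTree(nominal_points)
    dist, _ = tree.query(scanned_points)
    return dist

# Toy example: a nominally flat plate, scanned with a slight bulge in the middle.
u, v = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
nominal = np.column_stack([u.ravel(), v.ravel(), np.zeros(u.size)])
scan = nominal.copy()
scan[:, 2] += 0.002 * np.exp(-((u.ravel() - 0.5) ** 2 + (v.ravel() - 0.5) ** 2) / 0.01)

d = deviations(scan, nominal)
print(d.max())                      # worst-case deviation (~0.002 in model units)
print(np.mean(d > 0.001))           # fraction of points outside a 0.001 tolerance
</syntaxhighlight>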
== See also ==
{{Portal|Design}}

* [[3D Printing]]
* [[3D reconstruction]]
* [[3D computer graphics software]]
* [[Angle-sensitive pixel]]
* [[Depth map]]
* [[Epipolar geometry]]
* [[Light-field camera]]
* [[Photogrammetry]]
* [[Range imaging]]
* [[Structured-light 3D scanner]]

== Notes ==
{{reflist|30em}}

== References ==
{{refbegin}}
* {{cite conference |author=Katsushi Ikeuchi |title=Modeling from Reality |booktitle=3rd International Conference on 3-D Digital Imaging and Modeling : proceedings, Quebec City, Canada |pages=117–124 |publisher=IEEE Computer Society |date=28 May–1 June 2001 |location=Los Alamitos, CA |isbn=0-7695-0984-3 }}
* Raymond A. Morano, Cengizhan Ozturk, Robert Conn, Stephen Dubin, Stanley Zietz, Jonathan Nissanov, "Structured Light Using Pseudorandom Codes", ''IEEE Transactions on Pattern Analysis and Machine Intelligence'', vol. 20, no. 3, pp. 322–327, 1998
{{refend}}

== External links ==
{{commons category|3D scanning}}
* [http://www.cs.cmu.edu/~seitz/course/Sigg00/notes.html 3D Photography Course Notes]
* [http://www.vision.caltech.edu/bouguetj/ICCV98/ 3D Photography on your desk]: development of a simple and inexpensive method for extracting the three-dimensional shape of objects

{{Forestry tools}}

[[Category:Computing input devices]]
[[Category:3D imaging|Scanner]]
[[Category:Industrial design]]
[[Category:Laser image acquisition]]
[[Category:Image scanners]]