What is a 3D camera?
Date: 2019-09-18
Every day we see the world in three dimensions: our brains can identify people, animals and objects, and figure out how big and how far away they are. But the images captured by regular cameras are two-dimensional, and geometric information such as size, volume and distance is lost, because these cameras can't function like human eyes.
How do we perceive depth?
Before we talk about the features and functions of 3D cameras, we need a brief look at how we humans see the world in three dimensions. Humans and most animals have two eyes to capture images, an arrangement known as binocular vision. Because our two eyes sit on either side of our head, we see an object from two slightly different angles as its image is projected onto the retina of each eye. This angle difference produces horizontal position differences between the images captured by the left and right eyes, known as binocular disparities. We perceive depth when the brain processes these disparities.
What are 3D cameras?
Although binocular disparity, which enables us to perceive three-dimensional scenes, is a natural phenomenon, it can also be simulated and applied to machines such as cameras. The simplest kind of 3D camera is therefore based on binocular stereo vision.
Most often we see 3D cameras with two lenses, imitating human eyes to capture images from different vantage points. There are also 3D cameras with more than two lenses, or with a single lens that shifts its position to acquire depth data. Unlike traditional 2D cameras, which only record the appearance of objects in a scene, 3D cameras can measure the distance between an object and the camera, and thus recover the object's three-dimensional coordinates. As the technology develops, other ways to capture 3D data have emerged; the most commonly researched are ToF (Time of Flight) and structured light.
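For a two-lens stereo camera, the distance to a point follows from the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the lenses, and d is the disparity described above. The sketch below illustrates the arithmetic; the focal length, baseline and disparity values are illustrative assumptions, not specifications of any particular camera.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two lenses, in meters
    disparity_px -- horizontal pixel shift of the same scene point
                    between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


# Illustrative numbers: 700 px focal length, 6 cm baseline, 21 px disparity
print(depth_from_disparity(700, 0.06, 21))  # -> 2.0 (meters)
```

Note that depth is inversely proportional to disparity: nearby objects shift a lot between the two views, while distant ones barely move, which is why stereo accuracy degrades with range.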
A ToF camera works by emitting pulsed infrared light onto the object; the light bounces back to the sensor, and the camera measures the time it takes for the light to "fly" from the camera to the object and back. Another method is structured light, which uses a light source such as a projector to cast a known pattern, often stripes, onto the object; the pattern is deformed by the object's surface, and depth can be calculated from this distortion.
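The ToF principle boils down to one formula: the measured round-trip time multiplied by the speed of light, divided by two because the pulse travels to the object and back. A minimal sketch, with an illustrative round-trip time rather than real sensor data:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_s: float) -> float:
    """Distance to the object from a measured round-trip time.

    The pulse travels out and back, so the one-way distance
    is half the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_s / 2


# A pulse returning after roughly 13.3 nanoseconds corresponds
# to an object about 2 meters away.
print(tof_distance(13.342e-9))
```

The tiny time scale is the engineering challenge: resolving centimeters requires timing precision on the order of tens of picoseconds, which is why practical ToF sensors often measure the phase shift of a modulated signal instead of timing a single pulse directly.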
As 3D imaging technologies mature, 3D cameras will unlock remarkable new applications. Obvious examples are the emerging IoT systems in our homes, immersive AR/VR games, and more secure face recognition. With "eyes" that perceive the world as we do, machines and computers will gain a better understanding of their surroundings, and even interact with us intelligently.