A4Vision provides the technology required for an end-to-end biometric system as shown in Figure 1. This consists of components for:

a) the acquisition of the 3D data;
b) data processing where the 3D surface is reconstructed for further recognition;
c) creation of the biometric template from the extracted features; and
d) the eventual matching (recognition) based on a comparison of acquired and previously enrolled biometric templates.
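Stage d) above can be pictured with a minimal sketch. All names here (`Template`, `match`, the distance threshold) are hypothetical illustrations; the actual template format and matcher are A4Vision proprietary and unspecified:

```python
from dataclasses import dataclass

@dataclass
class Template:
    """A biometric template: a fixed-length feature vector (hypothetical format)."""
    features: list

def match(probe: Template, enrolled: Template, threshold: float = 0.5) -> bool:
    """d) Matching: compare an acquired template against an enrolled one.

    Similarity here is plain Euclidean distance between feature vectors;
    a real matcher would use a far more robust comparison.
    """
    dist = sum((a - b) ** 2 for a, b in zip(probe.features, enrolled.features)) ** 0.5
    return dist <= threshold

# Toy example: an acquired template close to the enrolled one matches.
enrolled = Template(features=[0.1, 0.4, 0.9])
probe = Template(features=[0.1, 0.4, 0.9])
print(match(probe, enrolled))  # True
```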


Figure 1 - A4Vision Core Technology development. The hardware and grey boxes are A4Vision proprietary.

Face Capturing

A4Vision's proprietary hardware for face capturing - the acquisition of facial data - works on the principle of structured, or coded, lighting. Structured lighting consists of projecting a pattern of known spatial structure onto the subject's face. The projected light is distorted by the individual facial geometry, and these distortions are unambiguously determined by the shape of the scanned surface. By establishing the correspondence between elements of the projected pattern and the pattern detected in the camera image, reconstruction algorithms can precisely recover the geometry of the registered surface.
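The geometry behind this recovery can be illustrated with a textbook triangulation formula (a generic simplification, not A4Vision's proprietary algorithm): knowing the baseline between projector and camera and the two ray angles, the depth of a surface point follows directly.

```python
import math

def triangulate_depth(baseline_m: float, proj_angle: float, cam_angle: float) -> float:
    """Depth of a surface point by active (structured-light) triangulation.

    baseline_m: distance between projector and camera (metres)
    proj_angle: angle of the projected ray from the optical axis (radians)
    cam_angle:  angle of the observed ray at the camera (radians)

    With the projector at the origin and the camera at (baseline_m, 0),
    the point lies where the two rays intersect:
        x = z * tan(proj_angle)               (projector ray)
        x = baseline_m - z * tan(cam_angle)   (camera ray)
    => z = baseline_m / (tan(proj_angle) + tan(cam_angle))
    """
    return baseline_m / (math.tan(proj_angle) + math.tan(cam_angle))

# A stripe projected at 45 degrees and observed at 45 degrees,
# with a 10 cm projector-camera baseline:
z = triangulate_depth(0.10, math.radians(45), math.radians(45))
print(round(z, 3))  # 0.05 (metres)
```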

Face capturing refers to the moment when the camera and the special light source take a "picture" of the target. This module includes the software needed to automate the acquisition process by means of a PC. The software controls the hardware and synchronizes all the necessary steps of the acquisition process.

A simplified scheme of how the capturing works is shown in the following figure:


Figure 2 - The digitizing equipment. (A) The special projector shoots structured light (a pattern) onto the face; (B) The pattern is distorted by the face's surface features; (C) The camera records the face and the distorted pattern, which contains the key information needed to reconstruct the three coordinates of all points belonging to the face's surface.

3D Reconstruction

The second step is the reconstruction of the 3D surface, illustrated in Figure 3 below. This module uses a set of proprietary algorithms designed for surface reconstruction and optimization based on data received from the camera. After receiving the raw data (the distorted pattern on the target object), the 3D reconstruction algorithms perform image filtering (noise reduction) and then instantly reconstruct the 3D surface, smoothing and interpolating the data to avoid holes and optimizing the mesh.
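The noise-reduction step might resemble a simple median filter over the raw depth image; this is a generic sketch, as the actual proprietary filtering is unspecified:

```python
def median_filter(depth, size=3):
    """Replace each pixel with the median of its size x size neighbourhood.

    depth: 2D list of floats (a raw depth image). Border pixels use the
    part of the window that lies inside the image.
    """
    h, w = len(depth), len(depth[0])
    r = size // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [depth[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A single noisy spike in an otherwise flat depth map is suppressed:
noisy = [[1.0, 1.0, 1.0],
         [1.0, 9.0, 1.0],
         [1.0, 1.0, 1.0]]
print(median_filter(noisy)[1][1])  # 1.0
```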

The algorithm has to recognize the pattern projected onto the surface and calculate, by means of triangulation, all three coordinates of the sampled points on the surface. The result is a surface described as a cloud of points. After this step, the system interpolates all the points by means of a mesh.
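Interpolating the sampled points to avoid holes can be pictured as filling missing depth samples from valid neighbours. This is again a generic sketch under assumed conventions (`None` marks a hole in a regular depth grid), not the proprietary meshing algorithm:

```python
def fill_holes(depth):
    """Fill missing samples (None) with the mean of their valid 4-neighbours.

    depth: 2D list of floats with None marking holes. A hole whose
    neighbours are all missing is left as None.
    """
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] is None:
                neighbours = [depth[j][i]
                              for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= j < h and 0 <= i < w and depth[j][i] is not None]
                if neighbours:
                    out[y][x] = sum(neighbours) / len(neighbours)
    return out

# A hole surrounded by depths 2, 2, 2 and 4 is filled with their mean:
grid = [[2.0, 2.0, 2.0],
        [2.0, None, 4.0],
        [2.0, 2.0, 2.0]]
print(fill_holes(grid)[1][1])  # 2.5
```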

Next, if a color texture was captured by an A4Vision enrollment device, it can then be over-imposed onto the mesh: after an automatic adaptation, the texture is overlaid on the 3D surface. This stage is not relevant for devices using the 3D video unit, where the surface texture is not captured.

It is important to stress that the texture is NOT needed for recognition purposes. The output of this module is the optimized 3D surface or 3D mesh, suitable for further use in the recognition process.


Figure 3 - Flow scheme of the 3D reconstruction process.

 

Copyright © 2006 A4Vision, Inc. All Rights Reserved.