Camera calibration is a prerequisite for extracting precise and reliable metric information in close-range photogrammetric measurement. Calibration algorithms based on plane calibration sheets obtain the internal parameters of a camera by viewing a planar pattern with a known geometric structure, exploiting the metric properties of the calibration sheet and minimizing algebraic distances. Such algorithms can simultaneously calibrate different views from a camera with variable intrinsic parameters, and known values of intrinsic parameters are easy to incorporate. This paper gives an overview of camera calibration using a plane calibration sheet.

Introduction

A camera, an optical instrument for recording or capturing images, has been around for a very long time. The first cameras were expensive and not everyone could afford them; as a result, people built pinhole cameras, which were inexpensive and commonly used in everyday life, but this came at a price, since most pinhole cameras exhibit significant distortion. A camera is said to be calibrated when its principal distance, lens distortion parameters, and principal point offset are known. Camera calibration using a plane sheet requires metric information about the reference plane, such as coordinates, angles, length ratios, and radii. A calibration sheet is cheap and can be produced easily to the required accuracy. Moreover, since calibration sheets are man-made and their metric structure is known, the method only requires the homography matrices induced by the world planes, whose estimates are much more stable and accurate than those of inter-image transformations arising from projections of points. Once the internal parameters are recovered, the relative position between planes and cameras can also be estimated. The paper is organized into the following sections: the projection model, the principle of plane-based calibration, and three calibration algorithms commonly used in plane-based calibration.

The projection model

Whenever an image is captured with a camera, world 3D points are mapped to 2D image points: every point in the object space is transformed to the image plane. The image plane is positioned in front of the optical center, which is the origin of the camera coordinate system, at a fixed distance. The object space contains what we are trying to capture, while the image plane holds what is obtained after capturing it. The focal length is the distance between the optical center and the image plane.

Figure 1: The camera model.

By similar triangles, a point P = (X, Y, Z) in camera coordinates yields

    x = f · X / Z,    y = f · Y / Z,    (1)

so the image plane projection point is

    p = (x, y) = (f X / Z, f Y / Z).    (2)
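As a small illustration of equations (1) and (2), the following sketch (the helper name `project_pinhole` is ours, not from the paper) projects camera-frame points onto an image plane at distance f:

```python
def project_pinhole(X, Y, Z, f=1.0):
    """Project a 3D camera-frame point onto the image plane at distance f
    using the pinhole model: x = f*X/Z, y = f*Y/Z (equations 1-2)."""
    if Z <= 0:
        raise ValueError("point must lie in front of the camera (Z > 0)")
    return f * X / Z, f * Y / Z

# A point twice as far away projects half as large:
print(project_pinhole(2.0, 4.0, 2.0))  # (1.0, 2.0)
print(project_pinhole(2.0, 4.0, 4.0))  # (0.5, 1.0)
```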

The projection process is modeled in the following stages.

Transformation from 3D object coordinates to 3D camera coordinates: the location of a point given in the object (world) coordinate system can be expressed in the camera coordinate system by the view transformation W,

    W = [ R   t ]
        [ 0ᵀ  1 ],

where R is a 3 × 3 rotation matrix and t a translation vector. The location of a point P in 3D camera coordinates is then given in homogeneous coordinates as

    hom(P′) = W · hom(P),

or, simply put, P′ = W · P.
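A minimal numpy sketch of this stage (the helper names `view_transform` and `apply_view` are ours):

```python
import numpy as np

def view_transform(R, t):
    """Assemble the 4x4 view transformation W = [[R, t], [0, 1]]."""
    W = np.eye(4)
    W[:3, :3] = R
    W[:3, 3] = t
    return W

def apply_view(W, P):
    """hom(P') = W . hom(P): map a world point into the camera frame."""
    Ph = np.append(P, 1.0)   # hom(P)
    return (W @ Ph)[:3]      # back to inhomogeneous 3D

# Identity rotation, translation of 5 along the optical axis:
W = view_transform(np.eye(3), np.array([0.0, 0.0, 5.0]))
print(apply_view(W, np.array([1.0, 2.0, 3.0])))  # [1. 2. 8.]
```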

Projection onto the normalized image plane: the projection from the 3D camera coordinate system to a continuous, normalized 2D coordinate system on the image plane proceeds in two steps.

Step 1: find the normalized projection x of P′ by dividing through by the depth,

    x = (x, y) = (X′/Z′, Y′/Z′).

Step 2: transform the normalized coordinate x to the sensor coordinate u by a 2D affine transformation, which maps the scale and skewing of the camera coordinate:

    u = hom⁻¹(A · hom(x)) = A′ · hom(x),

where hom(·) converts a point to homogeneous coordinates and hom⁻¹(·) converts back, and

    A = [ α  γ  u₀ ]
        [ 0  β  v₀ ]
        [ 0  0  1  ],

with α and β the focal lengths in pixel units along the two axes, γ the skew, and (u₀, v₀) the principal point.

The matrix W captures the extrinsic parameters of the projection, while the matrix A captures the intrinsic properties of the camera.
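The intrinsic mapping u = hom⁻¹(A · hom(x)) can be sketched as follows (the function names and example parameter values are ours):

```python
import numpy as np

def intrinsic_matrix(alpha, beta, u0, v0, gamma=0.0):
    """Intrinsic matrix A with focal lengths alpha, beta (in pixels),
    skew gamma, and principal point (u0, v0)."""
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,   1.0]])

def normalized_to_pixel(A, x):
    """u = hom^-1(A . hom(x)): normalized image coordinates -> pixels."""
    u = A @ np.append(x, 1.0)
    return u[:2] / u[2]

A = intrinsic_matrix(800.0, 800.0, 320.0, 240.0)
print(normalized_to_pixel(A, np.array([0.1, -0.05])))  # [400. 200.]
```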

Lens distortion: cameras contain lenses, and lenses introduce distortions, including decentering errors and radial distortion. The normalized 2D projection coordinates are subject to non-linear radial distortion with respect to the optical center, commonly expressed by the polynomial model

    x_d = x · (1 + k₁ r² + k₂ r⁴),    r² = x² + y²,

where x_d is the lens-distorted 2D coordinate in the normalized image plane and k₁, k₂ are the radial distortion coefficients.
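A sketch of the two-coefficient radial distortion model applied to normalized coordinates (the function name and coefficient values are illustrative assumptions):

```python
def distort_radial(x, y, k1, k2):
    """Apply two-coefficient radial distortion to normalized coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4), with r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Points on the optical axis are unaffected; off-axis points shift:
print(distort_radial(0.0, 0.0, -0.2, 0.05))  # (0.0, 0.0)
print(distort_radial(0.5, 0.5, -0.2, 0.05))  # slight inward (barrel) shift
```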

In summary, the projection process is illustrated in the diagram below. From right to left, the point x in the camera coordinate system (diagram c) is projected onto the image plane in normalized coordinates; in diagram (b), the lens distortion is applied; finally, the affine mapping specified by the intrinsic camera transformation (matrix A) yields the observed sensor image coordinates u = (u, v)ᵀ in (a).

Principle of Plane-based camera calibration

Plane-based calibration can be carried out by determining the Image of the Absolute Conic (IAC) from plane homographies, which leads to simple linear calibration equations. Assuming zero skew and after scaling, the IAC ω = A⁻ᵀA⁻¹ takes the form

    ω ∼ [ ω₁₁  0    ω₁₃ ]
        [ 0    ω₂₂  ω₂₃ ]
        [ ω₁₃  ω₂₃  ω₃₃ ].

The calibration constraints arising from homographies can be expressed and implemented in several ways. For a world plane taken as Z = 0, the plane-to-image homography factors as H ∼ A [r₁ r₂ t], where r₁ and r₂ are the first two columns of the rotation matrix. The camera position t being unknown, and the equation holding up to scale only, we can extract exactly two different equations in ω that prove to be homogeneous linear:

    h₁ᵀ ω h₂ = 0
    h₁ᵀ ω h₁ − h₂ᵀ ω h₂ = 0,

where hᵢ is the i-th column of H; the two equations encode the orthogonality and equal norm of r₁ and r₂.

These are our basic calibration equations. If several calibration planes are available, we simply add the new equations to the linear system. It does not matter whether the planes are seen in the same view or in several views, or whether the same plane is seen in several views, provided the intrinsic parameters are constant. The system has the form Ax = 0, with the vector of unknowns x = (ω₁₁, ω₂₂, ω₁₃, ω₂₃, ω₃₃)ᵀ. Before solving the linear equation system, attention has to be paid to numerical conditioning. After ω has been determined, the intrinsic parameters are extracted via

    u₀ = −ω₁₃ / ω₁₁,    v₀ = −ω₂₃ / ω₂₂,
    λ = ω₃₃ − ω₁₃²/ω₁₁ − ω₂₃²/ω₂₂,
    α = √(λ / ω₁₁),    β = √(λ / ω₂₂).
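The linear system and the extraction of the intrinsic parameters can be sketched as follows. This is a minimal illustration assuming exact, noise-free homographies and the zero-skew form of ω; the function name and the synthetic test data are ours, and a real implementation should additionally normalize the equation matrix for conditioning, as the text notes:

```python
import numpy as np

def calibrate_from_homographies(Hs):
    """Recover zero-skew intrinsics (alpha, beta, u0, v0) from a list of
    plane homographies H ~ A [r1 r2 t]."""
    def row(ha, hb):
        # Coefficients of ha^T w hb in the unknowns (w11, w22, w13, w23, w33),
        # using the zero-skew form of the IAC (w12 = 0).
        return np.array([ha[0] * hb[0], ha[1] * hb[1],
                         ha[0] * hb[2] + ha[2] * hb[0],
                         ha[1] * hb[2] + ha[2] * hb[1],
                         ha[2] * hb[2]])
    M = []
    for H in Hs:
        h1, h2 = H[:, 0], H[:, 1]
        M.append(row(h1, h2))                # h1^T w h2 = 0
        M.append(row(h1, h1) - row(h2, h2))  # h1^T w h1 = h2^T w h2
    # Solution of M x = 0: right singular vector of the smallest singular value.
    w11, w22, w13, w23, w33 = np.linalg.svd(np.asarray(M))[2][-1]
    u0, v0 = -w13 / w11, -w23 / w22
    lam = w33 - w13**2 / w11 - w23**2 / w22
    return np.sqrt(lam / w11), np.sqrt(lam / w22), u0, v0

# Synthetic check: build homographies from known intrinsics and poses.
A = np.array([[800.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
def rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def ry(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([0.1, -0.2, 5.0])
Hs = [A @ np.column_stack([R[:, 0], R[:, 1], t])
      for R in (rx(0.5), ry(0.6), rx(0.3) @ ry(0.4))]
print(calibrate_from_homographies(Hs))  # close to (800.0, 700.0, 320.0, 240.0)
```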

Furthermore, the principle of plane-based camera calibration can be extended either by incorporating prior knowledge of intrinsic parameters or by calibrating cameras with variable intrinsic parameters. When plane-based calibration is extended with prior knowledge of intrinsic parameters, unknowns are eliminated and the linear equation system is reduced. The inputs to the general algorithm are:

· Feature correspondences between planar objects and views. The plane features have to be given in a metric frame.

· Known values of intrinsic parameters. They may be provided for individual views, and any of the parameters α, β, u₀, or v₀ independently.

· Flags indicating for individual intrinsic parameters if they stay constant or vary across different views.
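As an illustration of how prior knowledge reduces the linear system, the following sketch handles a known principal point (u₀, v₀) under the same zero-skew assumption: substituting ω₁₃ = −u₀ω₁₁ and ω₂₃ = −v₀ω₂₂ leaves only three unknowns, so a single plane can already suffice in non-degenerate configurations. The function name is ours:

```python
import numpy as np

def calibrate_known_pp(Hs, u0, v0):
    """Plane-based calibration with a known principal point (u0, v0):
    substituting w13 = -u0*w11 and w23 = -v0*w22 into the calibration
    equations leaves only the unknowns x = (w11, w22, w33)."""
    def row(ha, hb):
        # Coefficients of ha^T w hb after the substitution.
        return np.array([
            ha[0] * hb[0] - u0 * (ha[0] * hb[2] + ha[2] * hb[0]),
            ha[1] * hb[1] - v0 * (ha[1] * hb[2] + ha[2] * hb[1]),
            ha[2] * hb[2]])
    M = []
    for H in Hs:
        h1, h2 = H[:, 0], H[:, 1]
        M.append(row(h1, h2))                # h1^T w h2 = 0
        M.append(row(h1, h1) - row(h2, h2))  # h1^T w h1 = h2^T w h2
    w11, w22, w33 = np.linalg.svd(np.asarray(M))[2][-1]
    lam = w33 - u0**2 * w11 - v0**2 * w22
    return np.sqrt(lam / w11), np.sqrt(lam / w22)  # (alpha, beta)
```

Only the focal lengths α and β remain to be extracted, since u₀ and v₀ are given.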

The complete algorithm consists of the following steps:

A. Compute the plane homographies from the given features.

B. Construct the equation matrix A.

C. Ensure good numerical conditioning of A.

D. Solve the equation system by any standard numerical method and extract the values of the intrinsic parameters from the solution as shown above.
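Step A is typically solved with the direct linear transform (DLT). A minimal sketch follows; the function name is ours, and Hartley-style coordinate normalization, which real implementations need for good conditioning (step C), is omitted for brevity:

```python
import numpy as np

def homography_dlt(plane_pts, image_pts):
    """Estimate the plane-to-image homography H from >= 4 point
    correspondences: each pair contributes two rows of a homogeneous
    system M h = 0, which is solved by SVD."""
    rows = []
    for (X, Y), (u, v) in zip(plane_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    H = np.linalg.svd(np.asarray(rows, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

# Corners of a unit square imaged by a pure scale-and-shift mapping:
plane = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
image = plane * 100.0 + 50.0
print(homography_dlt(plane, image))  # ≈ [[100, 0, 50], [0, 100, 50], [0, 0, 1]]
```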