Hand Image based Personal Authentication System

Abstract. Hand geometry is a widely accepted biometric modality for the identification of human beings. It is considered one of the safer biometric indicators because of its strong resistance to unauthorized access and because, from the user's point of view, it is easy to use. This chapter presents an approach for personal authentication using the geometrical structure of hand images. The proposed approach consists of several phases, such as acquisition of the user's hand images, normalization of the images, and extraction of the normalized contour and palm region. The hand contour is computed from the region of interest and used to extract structural features that represent the shape of the hand. The feature information of the test and training images is then compared and matched using machine-learning-based techniques for verification.
Keywords. Hand Geometry, finger width, Support Vector Machine, feature extraction.
Introduction
The major objective of biometric recognition is to discriminate automatically and reliably between subjects based on one or more physiological or behavioral traits. These personal traits are commonly called biometrics.
Biometrics is thus defined as the process of identifying an individual based on his or her distinguishing characteristics. More precisely, biometrics is the science of identifying or verifying the identity of an individual based on behavioral or physiological characteristics. Physiological characteristics include fingerprints, facial features, hand geometry, iris and retinal patterns; behavioral characteristics describe the behavior of an individual, such as keystroke style, voice print and signature style. The important qualities of a good biometric indicator are:
Uniqueness: Features should be as unique as possible; that is, the features of different individuals must be distinguishable.
Universality: The biometric features should be present in all persons to be enrolled.
Permanence: The features should not vary with time.
Measurability: The features should be measurable with simple techniques.
Any biometric authentication system operates in either an identification or a verification mode. In identification mode the person presents his or her biometric characteristics and the system associates an identity with that person. In verification mode, the biometric features of an individual are matched against the claimed identity. The system captures the individual's biometric information, extracts features, and then matches the extracted features against the features already enrolled for the claimed ID. Based on the matching score, the system accepts or rejects the claim.
Performance of Biometric system
The features of the persons enrolled in the system are used for verification. Features of the face, hand print, fingerprint or voice print are used to determine the identity of a person. The features of an unknown person need not be identical to those computed during registration; they change because of environmental and internal effects, which leads to a mismatch between the test features (unknown person) and the training features (enrolled person). A suitable matching algorithm is required to tolerate this mismatch.
The task of the matching algorithm is to find the closest association between feature vectors and to decide match or non-match. The system may also return a close match for an individual who has never been enrolled. This problem is handled by carefully choosing a threshold value: matching results above the threshold are declared true matches and all others are rejected.
The performance of a biometric system is measured using two types of error parameters. These are:
False acceptance rate (FAR): the ratio of the number of false claims that are accepted to the total number of samples in the test set,
FAR = (number of false acceptances) / (number of samples in the test set) × 100 %   (1.1)
False rejection rate (FRR): the ratio of the number of true claims that are rejected to the total number of samples in the test set,
FRR = (number of false rejections) / (number of samples in the test set) × 100 %   (1.2)
Equal error rate (EER): a generalized measure of error, also called the crossover error rate (CER). The EER is the value at which FAR and FRR are equal. A false acceptance usually causes a more serious problem than a false rejection, so a biometric system should have a minimal FAR. This can be achieved by choosing a high threshold so that only the closest matches are accepted and all others are rejected.
The FRR is also controlled by the chosen threshold and increases as the threshold increases: with a high threshold, matches that are correct but score below the threshold because of noise or other factors are not accepted. For this reason the EER serves as a convenient single figure of merit. The receiver operating characteristic (ROC) shows the overall performance of a matcher/classifier as the threshold varies. Fig. 1.1 and Fig. 1.2 depict the EER and the ROC, respectively.
Fig. 1.1. Equal Error rate
Fig. 1.2. The Receiver Operating Characteristic (ROC)
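As an illustration of how these error rates can be computed, the following Python sketch (an addition to this text, not part of the original system; it assumes NumPy and synthetic genuine/impostor similarity scores) sweeps a threshold, evaluates equations (1.1) and (1.2), and estimates the EER as the crossing point of the two curves.

```python
import numpy as np

def far_frr_curves(genuine_scores, impostor_scores, thresholds):
    """FAR and FRR (in %) at each threshold, following eqs. (1.1)-(1.2).

    A claim is accepted when its similarity score >= threshold, so FAR
    counts accepted impostor scores and FRR counts rejected genuine scores.
    """
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.array([(impostor >= t).mean() * 100.0 for t in thresholds])
    frr = np.array([(genuine < t).mean() * 100.0 for t in thresholds])
    return far, frr

# Synthetic scores, for illustration only.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)     # scores of true claims
impostor = rng.normal(0.4, 0.1, 5000)   # scores of false claims
thresholds = np.linspace(0.0, 1.0, 101)
far, frr = far_frr_curves(genuine, impostor, thresholds)

# The EER is (approximately) where the FAR and FRR curves cross.
i = np.argmin(np.abs(far - frr))
print(f"EER ~ {(far[i] + frr[i]) / 2:.2f}% at threshold {thresholds[i]:.2f}")
```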
Biometric Technologies
Different types of biometric systems exist, based on the physical and behavioral characteristics of the human body.
Face recognition
It is an important biometric modality. Owing to increased terrorist threats, a number of airports around the globe employ face-based biometric systems to check people entering and leaving the country. Identification using the face is difficult because changes in facial appearance due to age, makeup and expression can produce incorrect results. Face recognition systems fall into two broad types, operating under uncontrolled or controlled environments.
A good example of face recognition in an uncontrolled environment is recognizing a passenger at an airport from a group of passengers without the consent or knowledge of the individual.
Fingerprints
Fingerprints have already gained worldwide acceptance as a biometric indicator for almost all security purposes. Security agencies such as the FBI have fully adopted fingerprint-based human identification as a means of law enforcement. Fingerprint-based identification methods serve as a means of confirming the identity of an individual, and agencies are now building electronic fingerprint databases that can be used as identification tools. The majority of fingerprint recognition systems [8] use minutiae of the ridges and bifurcations of the fingerprint as the key features for matching. Recently, image-based fingerprint recognition systems have also been proposed in the literature [29-35].
Signature Recognition
Signature recognition is a behavioral biometric. Signature-based identification or authentication has been used for a very long time in almost all domains, such as banking, legal and government transactions. With the advancement of technology, automated signature recognition systems that operate in both online and offline modes are in great demand. An offline signature verification system works like a manual system and uses only the distinctiveness of the signature image [9]. An online signature verification system [15], on the other hand, extracts dynamic features from the signing process and uses these features for verification or identification. The online signature feature vector includes the total signing time, writing speed, pen pressure, the number of pen-ups and pen-downs, and the pen inclination.
Handwriting Recognition
This is also a behavioral biometric; it is the process of interpreting and recognizing handwritten input from various sources such as written documents and touchscreens. It is applied to forensic document examination for writer recognition. Identification of individuals from handwriting has not been scientifically proven, but numerous studies in this area support its significance [1]. Handwriting recognition systems also work in two modes, online [14] and offline [15], as proposed in the literature. Features such as pen strokes, crossed lines and loops may be used for handwriting recognition, along with the shape and size of the letters.
Ear Recognition
Ear recognition is a comparatively new biometric. The structure of the human ear is assumed to be unique and distinctive across the population. Unlike the face, the ear structure is not affected by makeup or changes in expression; it may, however, be occluded by hair. Lighting conditions in the surroundings can also adversely affect the system. When the ear is sheltered by hair or a hat, a thermogram of the ear, which gives its temperature profile, can be used for authentication, as discussed in the literature [5, 17].
Iris recognition
The colored part of the eye surrounding the pupil, the iris, also holds substantial information and has been found to be a useful biometric indicator; among all biometric traits it is perhaps the most effective one. The probability of two individuals having the same iris pattern is extremely small. Iris recognition is particularly used by government agencies that require high security for access to infrastructure and other secure systems. This biometric is highly dependent on user cooperation and a controlled environment.
Retina recognition
Retina-based recognition relies on the pattern of blood vessels at the back of the eye. The chance of two individuals having the same pattern of retinal blood vessels is infinitesimally small; even the retinal patterns of twins are dissimilar. The demand for retina-based biometrics is rising rapidly in highly secure environments. Again, a controlled environment and user cooperation are required: to capture this biometric, the user has to hold the eye at a fixed distance from the camera and look into it until the retinal scan is completed. The retinal pattern remains largely invariant during a person's lifetime and is affected only by anomalies of the human body such as high blood pressure or diabetes.
Voice recognition
Voice-based recognition is the most convenient biometric from the user's point of view. The voice pattern is derived from an analysis of the pressure and wavelength patterns of the air exhaled by the user [14]. The system extracts features from the user's voice samples to make the accept/reject decision. Research in this area still lacks precision, because an individual's voice also depends on circumstances and can be mimicked.
Gait recognition
Gait is one of the most recently adopted biometric traits. It is based on analyzing the way an individual walks. Gait information is very helpful in identifying a person at a distance, which makes it a useful behavioral biometric [7]. It is, however, highly dependent on the mood and physical condition of the individual, and, unlike hand prints and fingerprints, gait characteristics vary over time. A recognition rate of up to 95% has been achieved using gait biometrics [7].
Other biometric modalities have also evolved in recent times. Cancelable biometric approaches, such as Bio-Hashing-based authentication methods using electrocardiogram (ECG) features, have also been proposed in the literature recently [18-28].
Motivation
Hand-based biometric authentication is attractive for many security applications for a number of reasons. One of the strongest is that almost all of the working population possesses hands, with the exception of people with certain disabilities. When hand geometry data are acquired, palm prints and fingerprints can be collected in the same pass. Recent biometric systems use the fingerprints of all the fingers for enhanced security, and these features are also very useful in multimodal biometric systems. In most scanners the position of the user's hand is controlled by pegs. To overcome this placement restriction, we propose an approach in which users are allowed to place the hand at any position on the scanner. Because this approach uses a peg-free environment for acquiring hand images, combining other biometric indicators such as fingerprints and palm prints with the current system is straightforward.
This modality is far less vulnerable to environmental settings and other disturbances than face-based recognition, which is sensitive to facial expression, lighting, make-up and similar factors; retina- or iris-based recognition, which is constrained by specific illumination; or fingerprint systems, which need skin with sufficient friction ridges and clear separation between ridges and valleys. With these other modalities, only about 95% of the population may be able to enroll because of one problem or another. Hand-based identification can therefore be an attractive replacement for other biometric modalities, as it is easy to interact with, less costly, and requires less data storage.
Hand geometry based systems stand out among other biometric systems because of their resistance to fraud attacks, their universality and their uniqueness. Since these features are embedded in the structure of the hand itself, it is practically impossible to steal, lose or repudiate the geometrical information. Hand geometry based systems are also unaffected by skin attributes such as color, moles and hair.
In this chapter, a biometric system based on the shape of the hand is proposed for individual identification and verification. Although the commercialization of hand-based biometric products is growing rapidly, actual implementations are still less documented in the literature than other biometrics such as voice and face.
1.1.4 Chapter Overview
This chapter is organized as follows:
Section 1.2 gives a literature review on the use of hand geometry as a biometric: the different approaches, the different feature sets, and the results obtained with these approaches. It also presents the proposed hand geometry biometric system. Section 1.3 is about the preprocessing step; the major task of preprocessing is to normalize the hand image, which includes image segmentation, ring-effect removal, and finding the hand extremities. Section 1.4 gives the details of feature extraction; it lists all the features and how they are extracted from the hand image, including the widths of the fingers, the lengths of the fingers, and the contour of each finger from a particular point. Section 1.5 discusses the matching process and the SVM classifier. Section 1.6 gives the conclusions and suggestions for future work.
Literature Review
A diverse set of approaches exists in the literature for hand geometry based recognition. These approaches mostly differ in how the features of the hand images are extracted and used.
B-spline curves are used to represent the fingers in [1]. This eliminates the need for a controlled acquisition environment. Classification is carried out using the differences between the curves produced by a range of hand geometries, with the curves serving as the pattern for each person. Only the fingers, not the thumb, are represented by these curves. In [13] a recognition rate of 97% is reported with six images captured from 20 persons, and an error rate of 5% is reported on the same database for verification.
In [2] the analysis of hand geometry is carried out by exploiting implicit polynomial interpolation of 2D curves and 3D surfaces: the hand geometry is represented by an implicit polynomial function whose coefficients vary only slightly when new data are learned [3, 12]. A recognition rate of 95% is achieved using this approach [16].
In [3], authentication is performed by analyzing the vein patterns of the hand. Using heat conduction laws, a number of characteristics are extracted from each distinctive point of the vein pattern. With this method the FAR is 3.5% and the FRR is 1.5%.
Palm-print recognition based on eigenpalms is discussed in [4]. The original palm prints are transformed into a set of characteristics, namely eigenvectors computed from the training data, and a Euclidean distance classifier is used. The system, tested on a set of 200 people, yields an FAR of 0.03% at an FRR of 1%.
Bimodal systems, which combine two biometric modalities such as hand geometry and palm prints, are recognized as an effective tool for authentication. The bimodal system proposed in [5] uses both the hand shape and the texture pattern of the hand images. Palm-print validation is carried out using the DCT, and fusion of the hand shape and palm-print features is carried out with the product rule. The FRR and FAR are high when either modality is used alone but are significantly reduced when the two are combined into a bimodal system; bimodal systems are also useful when one of the modalities is similar for different individuals. The experiments were performed on a database of 100 users, and the combined system gives an FRR of 0.6% and an FAR of 0.43%.
In [6] a hand based recognition system using geometric classifiers is proposed. Hand images are acquired with a document scanner, and constraints are imposed on the hand position on the scanner. A total of 30 different features are gathered from each hand image. The database is divided into training and test sets, with the training set comprising three to five images per user. A bounding box is found in the 30-dimensional feature space for the given training data, and the distance of a query image from these bounding boxes is used to estimate resemblance.
A gesture-based recognition system using 3-D hand modelling is presented in [7]. The 2-D geometry of the hand shape is approximated using splines. The proposed model is very precise in most cases but becomes inefficient as the number of parameters and control points grows.
Overview of Proposed Biometric System
Fig. 1.3 presents a block diagram of the proposed hand-based biometric system. It consists of two phases, enrollment and verification. The major processes in the proposed method are preprocessing of the input image, feature extraction and feature matching. The enrollment phase includes preprocessing of the input image, feature extraction and template generation from the extracted features. Preprocessing is the most important part of any biometric system, as it enables efficient feature extraction; if it is not carried out accurately, the results may not be as expected. Feature extraction from the preprocessed image is the next step: in the proposed work we measure the lengths of the five finger contours from the pivot point to the fingertip, and the finger widths at three positions for each finger. The final step is to generate the biometric templates and store them in the database.
The last stage of the system is verification, in which the user's credentials are checked against the stored feature template. The steps of the verification phase are very similar to those of the enrollment phase, except that the last step matches the current template against the templates stored in the database.
Fig. 1.3. A block diagram of a biometric system
Pre Processing
Pre-processing is a very important step of every biometric or image processing system; it enables the system to extract features and key points more accurately than from raw images. In this system we use a peg-free environment during image acquisition. In peg-based acquisition, users are forced to place the hand at a fixed position on the sensor, whereas in a peg-free environment there is no placement restriction. Fig. 1.4 shows a user's hand in different positions.
Fig. 1.4. Different hand positions
Normalization is an important step of preprocessing, which includes:
Segmentation of Image
Important Points Detection
Contour Detection
Contour Normalization
Image Segmentation
In this step the foreground and background portions of the hand image are separated. It is important and also easy to carry out. It includes separation of the background and foreground and removal of other artifacts such as the ring effect.
Image Separation
For image segmentation, the main steps are:
Color to gray transformation
Gray to binary transformation
Noise removal of binary image
The red, green and blue values at each pixel are compared with thresholds to identify skin color and separate the hand region from the background. Let I_r, I_g and I_b denote the intensity values of the red, green and blue channels, respectively. The following rule is used for skin detection:
if (I_r < I_g and I_g < I_b and I_r < 150)
    Image(Channel 1) = 0, Image(Channel 2) = 0, Image(Channel 3) = 0
else
    Image(Channel 1) = 255, Image(Channel 2) = 255, Image(Channel 3) = 255
The acquired image is converted to a gray-level image, and binarization with a threshold value is applied to obtain a binary image. The original and segmented hand images are shown in Fig. 1.5: the first part of the figure shows the original hand image and the second part the segmented hand. A small gap on the forefinger is observed because of a ring; this artifact is called the ring effect and is explained in the next section.
Fig. 1.5. (a) Original Image (b). Separated Image
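A minimal Python sketch of this segmentation rule is given below. It assumes NumPy and Pillow and follows the RGB condition and the cutoff of 150 from the pseudocode above; both the rule and the cutoff may need tuning for a different scanner or lighting setup.

```python
import numpy as np
from PIL import Image

def segment_hand(path, red_cutoff=150):
    """Separate the hand from the background using the RGB rule above.

    A pixel is treated as background when I_r < I_g, I_g < I_b and
    I_r < red_cutoff; every other pixel is kept as white foreground.
    Returns a binary image with 255 on the hand and 0 elsewhere.
    """
    rgb = np.asarray(Image.open(path).convert("RGB")).astype(int)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    background = (r < g) & (g < b) & (r < red_cutoff)
    return np.where(background, 0, 255).astype(np.uint8)
```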
Ring Effect Removal
A ring on any finger causes a break that cuts the finger off from the palm, so that they appear as two separate components. The objective of this step is to detect and remove this break with an appropriate technique. In the proposed work the size of each connected component is computed, and the finger portion is identified as the smaller component relative to the hand. The process comprises the following steps:
Step 1: Find the hand Contour.
Step 2: Find Hand extremities.
Step 3: Build the finger information matrix.
Contour Extraction
The inertia matrix is used to extract the contour; the major axis of the palm is computed from the direction of the largest eigenvalue of the inertia matrix. We compute the moments of the binary image as

m_{i,j} = Σ_{(x,y) ∈ object} x^i y^j   (1.3)

where x and y range over all the pixels of the object. The coordinates of the centroid are computed as

x̄ = m_{1,0} / m_{0,0}   (1.4)

ȳ = m_{0,1} / m_{0,0}   (1.5)

Finally, the central moments are calculated as

μ_{i,j} = Σ_{(x,y) ∈ object} (x − x̄)^i (y − ȳ)^j   (1.6)

The inertia matrix I is given by

I = | μ_{2,0}   μ_{1,1} |
    | μ_{1,1}   μ_{0,2} |   (1.7)

and θ, the orientation of the major axis, is computed as

θ = (1/2) arctan( 2 μ_{1,1} / (μ_{2,0} − μ_{0,2}) )   (1.8)
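The following NumPy sketch illustrates equations (1.3)-(1.8) on a binary hand image. It is an illustrative reconstruction rather than the authors' code, and it uses arctan2 in place of arctan so that the quadrant of the orientation is handled automatically.

```python
import numpy as np

def palm_orientation(binary):
    """Centroid, inertia matrix and major-axis angle of a binary hand image.

    Implements eqs. (1.3)-(1.8); `binary` is a 2-D array whose non-zero
    pixels belong to the hand object.
    """
    ys, xs = np.nonzero(binary)               # object pixel coordinates
    m00, m10, m01 = xs.size, xs.sum(), ys.sum()
    x_bar, y_bar = m10 / m00, m01 / m00       # centroid, eqs. (1.4)-(1.5)

    # Central moments, eq. (1.6)
    mu20 = ((xs - x_bar) ** 2).sum()
    mu02 = ((ys - y_bar) ** 2).sum()
    mu11 = ((xs - x_bar) * (ys - y_bar)).sum()

    inertia = np.array([[mu20, mu11],
                        [mu11, mu02]])        # eq. (1.7)
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # eq. (1.8)
    return (x_bar, y_bar), inertia, theta
```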
The reference point on the wrist is identified as the intersection of the palm boundary with the direction of the pivotal axis. The first intersection point is taken as the starting point of the contour; all subsequent contour points are then located by following non-zero neighbors in the clockwise direction. Following this approach, the hand contour is extracted.
Hand Extremity
The hand extremities are the tips of the fingers and the valleys between them, and their extraction is a very important step in the feature extraction process. The valley between any two fingers is the lowest point between them, and the fingertips correspond to the points of highest curvature on the hand contour. There is, however, always a possibility of detecting false extremities, since curvature is sensitive to irregularities of the contour such as kinks and fake cavities. This work therefore uses another approach: the radial distance from a reference point marked on the wrist is computed along the contour, the reference point being the initial point of the major axis in the direction of the wrist line. The maxima and minima of these radial distances are robust to contour irregularities and kinks and are used to locate the nine extremities, as shown in Fig. 1.6.
Fig. 1.6. Determination of Extremities
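A possible implementation of this radial-distance idea is sketched below, assuming NumPy and SciPy. The contour is an ordered array of boundary points, and the neighbourhood size a point must dominate to be accepted as a local extremum is an assumed parameter, not a value given in the text.

```python
import numpy as np
from scipy.signal import argrelextrema

def hand_extremities(contour, wrist_point, order=15):
    """Fingertip and valley candidates from radial distances.

    `contour` is an (N, 2) array of boundary points ordered along the hand
    contour and `wrist_point` is the wrist reference point; `order` is the
    neighbourhood a point must dominate to count as an extremum (assumed).
    """
    contour = np.asarray(contour, dtype=float)
    radial = np.linalg.norm(contour - np.asarray(wrist_point, dtype=float), axis=1)
    tips = argrelextrema(radial, np.greater, order=order)[0]    # maxima ~ fingertips
    valleys = argrelextrema(radial, np.less, order=order)[0]    # minima ~ valleys
    return contour[tips], contour[valleys]
```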
The first row of the finger information matrix corresponds to the thumb, and the remaining rows are ordered clockwise for the other fingers. This matrix summarizes the hand geometry and is used to detach a ring cavity from the hand contour. Each finger has two profiles, from the left valley to the tip and from the tip to the right valley; the directed distance of the contour points from the central major axis of the finger is computed along these profiles to detect cavities. A cavity corresponds to positions where this distance falls below 75% of its median; morphological bridging is then applied by fitting a straight line across the gap, which removes the ring cavity.
The distance from the contour to the major axis of each finger is also used to detect the presence of the thumb. A threshold is used to compare the left-hand and right-hand side distances around a finger to identify a cavity, and a morphological opening operation is performed to remove it.
Fig. 1.7. Located Extremities and Valley
Image Normalization
Image normalization is a pixel-based operation that reduces variation among pixel gray values and redistributes the pixel intensities. At this step the binary hand image obtained from segmentation is normalized using adaptive threshold values. Normalization is required because the system is peg free: users are allowed to place the hand in any direction, so situations like the one shown in Fig. 1.8 can occur. Normalization is used for registration of the hand images and mainly comprises a translation followed by a rotation that reorients them in a given direction.
The registration process of images is performed using the following steps:
Alignment of centroid.
Calculation of the largest eigenvector of the inertia matrix, followed by rotation in that direction.
Fig. 1.8. Different hand Contour Positions
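A simplified sketch of these two registration steps is given below, assuming NumPy and SciPy. The orientation is recomputed from the inertia matrix as in equation (1.8); the sign convention of the rotation may need adjusting for a particular image coordinate system.

```python
import numpy as np
from scipy import ndimage

def register_hand(binary):
    """Centroid alignment followed by rotation of the binary hand image."""
    ys, xs = np.nonzero(binary)
    x_bar, y_bar = xs.mean(), ys.mean()                 # centroid
    mu20 = ((xs - x_bar) ** 2).sum()
    mu02 = ((ys - y_bar) ** 2).sum()
    mu11 = ((xs - x_bar) * (ys - y_bar)).sum()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)   # orientation, eq. (1.8)

    rows, cols = binary.shape
    # 1. Bring the centroid to the image centre.
    centred = ndimage.shift(binary.astype(float),
                            (rows / 2 - y_bar, cols / 2 - x_bar), order=0)
    # 2. Rotate about the centre so the major axis has a fixed direction
    #    (angle sign may need flipping depending on the convention used).
    rotated = ndimage.rotate(centred, np.degrees(theta), reshape=False, order=0)
    return (rotated > 0).astype(np.uint8) * 255
```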
Detection of Fingers
Fingers are detected from the extracted contour of the hand image. Starting from the tip of each finger, we move along the finger boundary until we reach the valley points on either side. We then choose the lower of the two valleys and swing it like a pendulum about the tip to mark the corresponding point towards the other valley. This mark is used as a reference point to find the finger length and other features.
Pivots of Finger
The length of each finger is extended towards the palm to locate a point known as the pivot of the finger. Using the pivots of the fingers, we plot the hand pivotal axis as shown in Fig. 1.6 and defined in the next step. Similar steps are used to find the pivot of the thumb. In total, five such pivots are located, one for each finger.
Hand Pivotal Axis
The four finger pivot points are then joined, either by fitting a line through them using least-squares estimation or simply by joining the last two pivot points, to estimate the hand pivotal axis as shown in Fig. 1.9. This estimate of the hand pivotal axis is very important for handling rotation.
Fig. 1.9. (a) Extracted finger (b) Pivotal Axis
Rotation of the finger
To handle the rotation of the hand geometry we calculate the orientation θ_i of each finger with respect to the major axis of the hand using the procedure discussed above. Each finger i is then rotated by an angle δ_i = φ_i − θ_i, where φ_i is the required orientation of the finger. The rotation is carried out by multiplying the coordinates of each finger by the rotation matrix R_δ given as

R_δ = | cos δ   −sin δ |
      | sin δ    cos δ |   (1.9)
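The rotation of equation (1.9) can be applied to the contour points of a finger as in the following NumPy sketch; the pivot point and the angles θ_i and φ_i are assumed to be available from the steps described above.

```python
import numpy as np

def rotate_finger(points, delta, pivot):
    """Rotate finger contour points by delta (radians) about the finger pivot.

    Applies the rotation matrix R_delta of eq. (1.9) to an (N, 2) array of
    (x, y) coordinates; delta = phi_i - theta_i as defined in the text.
    """
    c, s = np.cos(delta), np.sin(delta)
    R = np.array([[c, -s],
                  [s,  c]])                  # R_delta, eq. (1.9)
    pivot = np.asarray(pivot, dtype=float)
    shifted = np.asarray(points, dtype=float) - pivot
    return shifted @ R.T + pivot
```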
Processing for the thumb
Processing the orientation of the thumb is considerably more complex than for the fingers, because the thumb's orientation involves two different joints, the metacarpal-phalanx joint and the trapezium-metacarpal joint. To cope with this, a rotation and a translation are performed sequentially. The difficulty arises because the stretching of the skin between the thumb and the index finger creates uncertainty in the determination of the valley points, which makes detection of the thumb harder; this is why the basic hand anatomy has to be taken into account. A line is drawn along the major axis of the thumb, and the point on this line at 120 percent of the length of the little finger from the top of the thumb marks the pivot of the thumb. The thumb is then translated so that this pivot coincides with the tip of the hand pivot line when swung 90 degrees clockwise. Finally, the thumb is oriented to the final angle of rotation and taken back to its initial position.
Normalized Image
The normalized hand image is translated so that the centroid becomes the reference point on the image plane. The hand images are then rotated so that their pivot line matches the required orientation exactly. The hands are thus aligned with the major axis, and the center is defined with respect to the hand contour (not with respect to the centroid of the pivotal axis).
Fig. 1.10. Normalized Fingers and Hand
Contour of Normalized Hand
After obtaining the normalized hand image shown in Fig. 1.10, its contour is extracted again, as shown in Fig. 1.11.
Fig. 1.11. Contour of Normalized Hand
1.4 Feature Extraction
A set of features is extracted to form the feature vector. The steps described in Section 1.3 are used to preprocess the image for feature extraction. Extraction of reliable features is a very important module of any biometric recognition system: the feature extraction module extracts features from the input image to form the template that is stored in the database. This module produces feature estimates such as the widths and lengths of each finger and the palm width.
A hand-based biometric identification system relies heavily on the invariance of the human hand geometry. Typical features include the lengths and widths of the fingers, the aspect ratio of the fingers with respect to the palm, and the hand thickness. Currently available commercial systems do not take non-geometric attributes, such as the color of the skin, into account.
Several other key points can also be located from the palm geometry, but we extract only those features that are stable and consistent, i.e., robust to variations in hand position. In this work we extract the lengths of all four fingers and the thumb, the widths of all fingers at three different positions, and four distances from a reference point on the palm to the tips of the four fingers. The additional key points, the finger widths at three positions, make the approach more efficient and robust. The resulting feature vector has length 24.
In Section 1.3 we discussed the hand extremities (five tips and four valley points), the finger information matrix, and the contours of the fingers and the hand. The hand extremities are determined from the radial distances to a reference point near the wrist. The finger information matrix is a 5 × 3 matrix representing all the information about the fingers: the second column corresponds to the fingertip position on the contour, while the first and third columns indicate the left and right valleys surrounding that tip.
Finger Length
The normalization of the hand image was discussed in Section 1.3.1.2; the finger lengths are determined at that normalization stage. In the present hand-based biometric system, hand images are normalized by normalizing the individual fingers and the palm region.
The process of determining the finger length starts by cutting each finger from the palm. The cut is defined by the straight line joining the two end points of the finger's contour segment. The information matrix gives the valley points of each finger; a line of zero pixels drawn between two adjacent valley points separates the connected components. The fingers are then cut using a connected-components algorithm, where the larger component is taken to be the palm and the other one the finger being cut. The distance from the fingertip to the midpoint between the starting and ending points gives the length of the finger. The same method is applied to every finger.
Finger length extraction algorithm
Input: 5 × 3 information matrix, binarized hand image and its contour
Output: finger length in pixels
Locate the starting and ending (valley) points of the finger.
Draw a line between these points.
Locate the midpoint of the line.
Compute the distance from the fingertip to the midpoint.
Repeat the above steps for each finger.
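A direct translation of this algorithm into Python might look as follows; the valley and fingertip coordinates are assumed to come from the finger information matrix described earlier.

```python
import numpy as np

def finger_length(fingertip, valley_left, valley_right):
    """Finger length as defined by the algorithm above.

    The two valley points flanking the finger (first and third columns of
    the finger information matrix) are joined by a line, and the length is
    the distance from the fingertip (second column) to its midpoint.
    """
    midpoint = (np.asarray(valley_left, dtype=float) +
                np.asarray(valley_right, dtype=float)) / 2.0
    return float(np.linalg.norm(np.asarray(fingertip, dtype=float) - midpoint))

# lengths = [finger_length(tip, left, right)
#            for left, tip, right in finger_info_matrix]   # one row per finger
```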
Finger Width
As discussed in the normalization section above, finger lengths are measured from a fixed point on the palm, and the accuracy of the individual finger lengths is important in a hand-based biometric system. For finger width extraction we use the finger contour and the fingertip to separate each finger from the others. Using the fingertip point taken from the second column of the information matrix, we locate the fingertip position on the contour. From this point we move 35 pixels down along the contour on both sides of the finger and compute the distance between the two resulting points; this gives the width of the finger at 35 pixels from the fingertip. The step is repeated two more times for every finger. Since the length of a finger is approximately 140 pixels, moving in steps of 35 pixels from the fingertip samples the width at three roughly uniform positions, and these additional features make the proposed method more robust and efficient. Fig. 1.12 illustrates the approach.
Finger width extraction Algorithm
Input: 5 × 3 information matrix, binarized hand image and its contour
Output: width of the finger at three different positions
Fix the fingertip as the initial point.
Move 35 pixels away from the fingertip in both directions along the contour.
Locate the two resulting points, say A and B.
Compute the distance between them.
Compute the next two distances in the same fashion.
Repeat the above steps for every finger.
Fig. 1.12. Calculating finger width
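The width measurement can be sketched in Python as below; the contour is assumed to be an ordered array of boundary points, the fingertip index is assumed to come from the finger information matrix, and the 35-pixel steps follow the description in the text.

```python
import numpy as np

def finger_widths(contour, tip_index, steps=(35, 70, 105)):
    """Widths of one finger at three positions below the fingertip.

    `contour` is an (N, 2) array of ordered boundary points and `tip_index`
    the index of the fingertip on that contour. Moving k contour points away
    from the tip on both sides and measuring the distance between the two
    resulting points gives the width at that level; the offsets 35, 70 and
    105 follow the 35-pixel steps described above.
    """
    contour = np.asarray(contour, dtype=float)
    n = len(contour)
    widths = []
    for k in steps:
        a = contour[(tip_index - k) % n]    # k points before the tip
        b = contour[(tip_index + k) % n]    # k points after the tip
        widths.append(float(np.linalg.norm(a - b)))
    return widths
```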
Estimation of Spatial Distance
The pivots are located by extending the length of each finger towards the palm, as discussed in Section 1.3.1.2. A fixed point on the palm, 50 pixels below the pivot of the middle finger, is taken as the reference point. This point is used as the reference for computing the spatial distance features, namely the distances from the tip of each finger to the reference point.
Algorithm for Spatial distances
Input: 5 × 3 information matrix, binarized hand image and its contour
Output: 4 distances in the spatial domain
Take the middle finger's pivot as the initial point.
Locate a reference point 50 pixels below the middle finger's pivot.
Compute the distance from the fingertip of each finger to this reference point.
The algorithm returns 4 distances, one for each finger.
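A small Python sketch of this computation is given below; it assumes image coordinates in which "below" corresponds to increasing y, which may need adjusting for a particular setup.

```python
import numpy as np

def spatial_distances(fingertips, middle_finger_pivot, offset=50):
    """Distances from the palm reference point to the four fingertips.

    The reference point is taken 50 pixels below the middle finger's pivot
    ('below' meaning increasing y in this sketch) and the distance to each
    of the four non-thumb fingertips is returned.
    """
    ref = np.asarray(middle_finger_pivot, dtype=float) + np.array([0.0, offset])
    return [float(np.linalg.norm(np.asarray(tip, dtype=float) - ref))
            for tip in fingertips]
```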
In all, we obtain a feature vector of size 24, consisting of the lengths of the 5 fingers, 15 widths (3 for each finger), and 4 spatial distances (from the fingertips to the palm pivot position). The feature vector computed in this way is shown in Fig. 1.13.
Fig. 1.13. All 24 features
1.5 Matching and Experimental Results
In this section, user authentication is carried out using the feature vector obtained from the normalized hand, which consists of the 24 distances shown in Fig. 1.13.
In the enrollment phase, features are extracted from the preprocessed hand image and a template is generated and stored in the database. For matching, the feature template extracted from the test image is matched against the templates stored in the database; a binary SVM is used for matching.
Algorithm for matching:
Step 1. Take the input image.
Step 2. Perform the preprocessing steps:
segmentation,
ring effect removal,
hand image normalization,
location of the extremities.
Step 3. Extract the features.
Step 4. Classify using machine learning.
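To show how the pieces fit together, the following sketch wires the illustrative helpers from the earlier sections into Steps 1-3 (Step 4 is covered in the next subsection). trace_fingers is a hypothetical routine standing in for the contour tracing, ring-effect removal and finger-information-matrix steps of the preprocessing stage; none of these names belong to a published API.

```python
import numpy as np

def extract_feature_vector(image_path):
    """Sketch of Steps 1-3 of the matching algorithm above.

    segment_hand, register_hand, finger_length, finger_widths and
    spatial_distances are the illustrative helpers sketched in earlier
    sections; trace_fingers is a hypothetical routine that returns the
    contour, the 5x3 finger information matrix, the fingertip indices on
    the contour, and the finger pivots.
    """
    binary = segment_hand(image_path)                        # Step 2: segmentation
    normalized = register_hand(binary)                       # Step 2: normalization
    contour, info_matrix, tip_indices, pivots = trace_fingers(normalized)

    lengths = [finger_length(tip, left, right)
               for left, tip, right in info_matrix]          # 5 lengths
    widths = [w for idx in tip_indices
              for w in finger_widths(contour, idx)]          # 15 widths
    # pivots[2] is assumed to index the middle finger's pivot.
    distances = spatial_distances([row[1] for row in info_matrix[1:]],
                                  middle_finger_pivot=pivots[2])  # 4 distances
    return np.array(lengths + widths + distances)            # 24-element vector
```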
Training and Testing
A binary SVM classifier is used to match the feature vectors of test and training images. The system is first trained during the enrollment phase by learning from the input data sets using an SVM classifier [15]. Since the SVM is a binary classifier, a multiclass classifier has to be built on top of it for this system. For a given training data set, each data sample belongs to one of two classes, match or non-match; the non-match samples are further classified with additional SVMs until every sample is assigned to a class, as discussed in detail in the next paragraphs. Fig. 1.14 shows the support vectors found by an SVM [16].
Fig. 1.14. Linear SVM; courtesy of [16]
Multiclass SVM
A multiclass SVM assigns input feature vectors to one of the classes using support vectors, where the classes are drawn from a finite set. The multiclass problem is converted into multiple binary classification problems, which can be done in two ways:
One-versus-all
One-versus-one
In the one-versus-all case, new samples are categorized using a winner-takes-all strategy, based on the class whose classifier gives the highest output. In the one-versus-one case, categorization uses a max-wins voting scheme: every binary classifier assigns the sample to one of its two classes, the vote count of that class is incremented, and the class with the most votes at the end is taken as the prediction for that sample.
In the proposed system we use the one-versus-all approach, classifying each class against all the remaining classes.
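As a sketch of how such a one-versus-all SVM could be trained on the 24-dimensional feature vectors, the following code uses scikit-learn. The 70/30 split matches the ratio reported in the conclusions, while the linear kernel and C = 1.0 are assumed values for illustration, not parameters reported for the proposed system.

```python
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def train_and_evaluate(X, y):
    """Train a one-versus-all SVM on the 24-dimensional feature vectors.

    X has shape (n_samples, 24) and y holds the user identity labels.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = OneVsRestClassifier(SVC(kernel="linear", C=1.0))
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)   # classifier and test accuracy
```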
Identification Results
The database used in this study consists of ten right-hand color images per user, captured at the Biometrics research lab at IIT Delhi. Each captured image is a 768 × 576 pixel file in PNG format.
This gives a database of 480 color images of the right hand from 48 users. The population varies in age, gender and skin. The data were collected at different locations and times and with different hand positions. A peg-free environment was used for all data capture: users were free to place the hand at any location and in any direction on the scanner without fear or hesitation. Hand accessories such as rings, bracelets and watches were required to be removed to make the system robust.
The experiments start with enrollment followed by verification, considered as a two-class classification problem. Each sample is assigned exactly one label from the set {p, n} of true and false class labels; the classifier is a mapping that assigns a predicted label to each sample of the test data set.
For a classifier and an instance, only four outcomes are possible. If an instance belongs to the positive class and is classified as positive, it is counted as a true positive; if it is classified as negative, it is counted as a false negative. If a negative instance is classified as negative, it is counted as a true negative; if it is classified as positive, it is counted as a false positive.
The database used to test the proposed system consists of 480 images. For the experiments, images are selected randomly for the training and test data sets. A matching accuracy of 95.84% is obtained with the proposed method.
The FRR obtained with the proposed system is given by
FRR = (number of false negatives) / (number of samples in the test set) × 100 %   (1.10)
     = 6/144 × 100 = 4.16 %
so the efficiency of the system is 95.84% and the FRR is 4.16%.
This chapter has exploited geometrical features only. An issue to be addressed further is the representation of the contour by a polynomial function such as a spline. One problem highlighted here is the variation in the position of neighbouring fingers, which makes fitting a single polynomial to the contour difficult. The simplest way forward is to use spline functions for each finger separately rather than a single polynomial curve to model the geometrical information of the hand.
Conclusions
The demand for intelligent biometric authentication systems has grown at a very high pace in recent years because e-commerce applications require secure and reliable methods of authentication. Recently, hand geometry based authentication has proven its reliability. The proposed work exploits the structure of the hand geometry to extract reliable features by simple means.
The proposed modality is not only user friendly but also gives good results. The only disadvantage of the system is that it is affected by illumination. The peg-free environment gives individuals the freedom to place the hand in any position and direction.
We obtain a feature vector of size 24, consisting of the lengths of the 5 fingers, 15 widths (3 for each finger), and 4 spatial distances (from the fingertips to the palm pivot position).
The proposed authentication system was tested on a database captured at the IIT Delhi Biometrics lab for 48 users of different ages, skin colors and heights. For each user, 10 images were collected for the enrollment and verification tasks. The database is divided into training and test sets in the ratio 7:3. Matching is performed using an SVM, which yields an accuracy of about 95.84% on this database. A future direction for this research is to exploit more features for robustness and to explore other classification algorithms to improve performance.
References
Ma Y, Pollick F, Hewitt WT (2004, August) Using b-spline curves for hand recognition. In Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on (Vol. 3, pp. 274-277). IEEE.
Erçil A, Yıldız VT, Kırmızıtaş H, Büke B (2001, June) Hand recognition using implicit polynomials and geometric features. In Audio-and Video-Based Biometric Person Authentication (pp. 336-341). Springer Berlin Heidelberg.
Lin CL, Fan KC (2004) Biometric verification using thermal images of palm-dorsa vein patterns. Circuits and Systems for Video Technology, IEEE Transactions on, 14(2), 199-213.
Lu G, Zhang D, Wang K (2003) Palmprint recognition using eigenpalms features. Pattern Recognition Letters, 24(9), 1463-1467.
Kumar A, Zhang D (2006) Integrating shape and texture for hand verification. International Journal of Image and Graphics, 6(01), 101-113.
Bulatov Y, Jambawalikar S, Kumar P, Sethia S (2004) Hand recognition using geometric classifiers. In Biometric Authentication (pp. 753-759). Springer Berlin Heidelberg.
Wu Y, Huang TS (1999) Human hand modeling, analysis and animation in the context of HCI. In Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on (Vol. 3, pp. 6-10). IEEE.
Sanchez-Reillo R, Sanchez-Avila C, Gonzalez-Marcos A (2000) Biometric identification through hand geometry measurements. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(10), 1168-1171.
Kumar A, Zhang D (2006, August) Combining fingerprint, palmprint and hand-shape for user authentication. In Pattern Recognition, 2006. ICPR 2006. 18th International Conference on (Vol. 4, pp. 549-552). IEEE.
Chen WS, Chiang YS, Chiu YH (2007, November) Biometric verification by fusing hand geometry and palmprint. In Intelligent Information Hiding and Multimedia Signal Processing, 2007. IIHMSP 2007. Third International Conference on (Vol. 2, pp. 403-406). IEEE.
Ross A, Jain AK, Pankanti S (1999, March) A prototype hand geometry-based verification system. In Proceedings of 2nd Conference on Audio and Video Based Biometric Person Authentication (pp. 166-171).
Kumar A, Hanmandlu M, Gupta HM (2009, July) Online biometric authentication using hand vein patterns. In Computational Intelligence for Security and Defense Applications, 2009. CISDA 2009. IEEE Symposium on (pp. 1-7). IEEE.
Hanmandlu M, Kumar A, Madasu VK, Yarlagadda P (2008, April) Fusion of hand based biometrics using particle swarm optimization. In Information Technology: New Generations, 2008. ITNG 2008. Fifth International Conference on (pp. 783-788). IEEE.
Han CC, Cheng HL, Lin CL, Fan KC (2003) Personal authentication using palm-print features. Pattern recognition, 36(2), 371-381.
AW Moore, Support Vector Machines, Tutorial Slides, 2001, http://www.autonlab.org/tutorials/svm.html
Linear SVM image, http://areshopencv.blogspot.in/2011/07/artificial-intelligencesupport-vector.html
Dey N, Nandi B, Das P, Das A, Chaudhary SS (2013) Retention of electrocardiogram features insignificantly devalorized as an effect of watermarking. In Advances in Biometrics for Secure Human Authentication and Recognition, p. 175.
Nandi S, Roy S, Dansana J, Karaa WBA, Ray R, Chowdhury SR, Chakraborty S, Dey N (2014) Cellular Automata based Encrypted ECG-hash Code Generation: An Application in Inter-human Biometric Authentication System. International Journal of Computer Network and Information Security, 6(11), 1.
Biswas S, Roy AB, Ghosh K, Dey N (2012) A Biometric Authentication Based Secured ATM Banking System. International Journal of Advanced Research in Computer Science and Software Engineering, ISSN, 2277.
Dey N, Nandi B, Dey M, Biswas D, Das A, Chaudhuri SS (2013, February) Biohash code generation from electrocardiogram features. In Advance Computing Conference (IACC), 2013 IEEE 3rd International (pp. 732-735). IEEE.
Dey M, Dey N, Mahata SK, Chakraborty S, Acharjee S, Das A (2014, January) Electrocardiogram Feature based Inter-human Biometric Authentication System. In Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on (pp. 300-304). IEEE.
Acharjee S, Chakraborty S, Karaa WBA, Azar AT, Dey N (2014) Performance evaluation of different cost functions in motion vector estimation. International Journal of Service Science, Management, Engineering, and Technology (IJSSMET), 5(1), 45-65.
Dey N, Das A, Chaudhuri SS (2012) Wavelet based normal and abnormal heart sound identification using spectrogram analysis. arXiv preprint arXiv:1209.1224.
Dey N, Das S, Rakshit P (2011) A novel approach of obtaining features using wavelet based image fusion and Harris corner detection. Int J Mod Eng Res, 1(2), 396-399.
Kaliannan J, Baskaran A, Dey N (2015) Automatic generation control of thermal-thermal-hydro power systems with PID controller using ant colony optimization. International Journal of Service Science, Management, Engineering, and Technology (IJSSMET), 6(2), 18-34.
Bose S, Chowdhury SR, Sen C, Chakraborty S, Redha T, Dey N (2014, November) Multi-thread video watermarking: A biomedical application. In Circuits, Communication, Control and Computing (I4C), 2014 International Conference on (pp. 242-246). IEEE.
Bose S, Chowdhury SR, Chakraborty S, Acharjee S, Dey N (2014, July) Effect of watermarking in vector quantization based image compression. In Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on (pp. 503-508). IEEE
Chakraborty S, Samanta S, Biswas D, Dey N, Chaudhuri SS (2013, December). Particle swarm optimization based parameter optimization technique in medical information hiding. In Computational Intelligence and Computing Research (ICCIC), 2013 IEEE International Conference on (pp. 1-6). IEEE.
Kumar R, Chandra P, Hanmandlu M (2011, November) Fingerprint matching based on orientation feature. In Advanced Materials Research (Vol. 403, pp. 888-894). Trans Tech Publications.
Kumar R, Chandra P, Hanmandlu M (2013) Fingerprint Matching Based on Texture Feature. In Mobile Communication and Power Engineering (pp. 86-91). Springer Berlin Heidelberg.
Kumar R, Chandra P, Hanmandlu M (2013, December) Local directional pattern (LDP) based fingerprint matching using SLFNN. In Image Information Processing (ICIIP), 2013 IEEE Second International Conference on (pp. 493-498). IEEE.
Kumar R, Chandra P, Hanmandlu M (2013, December) Fingerprint matching using rotational invariant image based descriptor and machine learning techniques. In Emerging Trends in Engineering and Technology (ICETET), 2013 6th International Conference on (pp. 13-18). IEEE.
Kumar R, Chandra P, Hanmandlu M (2014) Rotational invariant fingerprint matching using local directional descriptors. International Journal of Computational Intelligence Studies, 3(4), 292-319.
Kumar R, Chandra P, Hanmandlu M (2012) Statistical Descriptors for Fingerprint Matching. International Journal of Computer Applications, 59(16).
Kumar R, Hanmandlu M, Chandra P (2014) An empirical evaluation of rotation invariance of LDP feature for fingerprint matching using neural networks. International Journal of Computational Vision and Robotics, 4(4), 330-348.
