Multi-Modal User Authentication Using Biometrics

William Cheung, Fordham University

Abstract

The connectivity of smart technology is ever increasing as internet availability expands. Emerging applications such as financial transactions, healthcare check-ups, and property access can now be carried out through smart devices. This also introduces new vulnerabilities, as attackers have more opportunities to target users. There is therefore a pressing need for strong authentication. Passwords, PINs, and pattern locks can overwhelm users, while active biometric schemes such as retina scans require explicit user action and cannot be applied continuously. A solution to this is the use of implicit, continuous biometrics such as heart rate, gait, and breathing patterns. In this work, we present two context-dependent, soft-biometric-based wearable authentication strategies utilizing heart rate, gait, and breathing audio signals. From our detailed analysis, we find that in a sedentary state, a binary support vector machine with a radial basis function (RBF) kernel can achieve an average accuracy of $0.94 \pm 0.07$ and an $F_1$ score of $0.93 \pm 0.08$. In a non-sedentary state, $k$-nearest neighbors ($k=2$) can achieve an average accuracy of $0.93 \pm 0.06$ and an $F_1$ score of $0.93 \pm 0.03$, which shows the promise of this work. Considering the case where only a single user's data is available, we also develop unary (one-class) models and obtain an average accuracy of $0.72 \pm 0.10$ and an $F_1$ score of $0.73 \pm 0.06$ when sedentary, and an average accuracy of $0.72 \pm 0.10$ and an $F_1$ score of $0.72 \pm 0.09$ when non-sedentary.
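To make the two strategies concrete, the sketch below (not taken from the dissertation; the feature dimensions, dataset sizes, and synthetic data are illustrative assumptions) shows how a context-dependent binary authenticator could switch between an RBF-kernel SVM in the sedentary case and $k$-nearest neighbors with $k=2$ in the non-sedentary case, using scikit-learn. In practice the placeholder vectors would be replaced by features extracted from heart rate, gait, and breathing audio, and the unary case could be handled analogously with a one-class model such as scikit-learn's OneClassSVM.

# Illustrative sketch only (not the author's code): a context-dependent
# binary authenticator that, as described in the abstract, uses an
# RBF-kernel SVM when the user is sedentary and k-nearest neighbors
# (k = 2) when non-sedentary. Feature extraction from heart rate, gait,
# and breathing audio is assumed to have already produced fixed-length
# vectors; the random data below is a placeholder for those features.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

def make_split(n=400, d=16):
    """Placeholder biometric feature vectors with binary labels
    (1 = genuine user, 0 = impostor)."""
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 2, size=n)
    X += y[:, None] * 0.8  # give the two classes some separation
    return train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

def build_authenticator(context):
    """Pick the classifier by activity context, mirroring the abstract."""
    if context == "sedentary":
        clf = SVC(kernel="rbf", gamma="scale")        # binary SVM, RBF kernel
    else:                                             # non-sedentary state
        clf = KNeighborsClassifier(n_neighbors=2)     # k-NN with k = 2
    return make_pipeline(StandardScaler(), clf)

for context in ("sedentary", "non-sedentary"):
    X_tr, X_te, y_tr, y_te = make_split()
    model = build_authenticator(context).fit(X_tr, y_tr)
    y_hat = model.predict(X_te)
    print(context,
          "accuracy:", round(accuracy_score(y_te, y_hat), 3),
          "F1:", round(f1_score(y_te, y_hat), 3))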

Subject Area

Computer science|Information science|Bioinformatics

Recommended Citation

Cheung, William, "Multi-Modal User Authentication Using Biometrics" (2021). ETD Collection for Fordham University. AAI28315954.
https://research.library.fordham.edu/dissertations/AAI28315954
