Your Fingerprints are precious

Project explanation

The importance of cameras in our society is increasing as the demand for online education, video conferencing, and SNS grows. But recent high-resolution cameras can accidentally expose important biometric information such as fingerprints or irises. Exposed fingerprints can easily be copied and used to bypass electronic security.

So, we propose automatically manipulating the biometric information in images to enhance people's biometric security.

Related paper & news

Paper
Real-Time Flying Object Detection with YOLOv8, 2023, object detection/segmentation/classification. A configurable, fast, and simple architecture for localizing fingerprints and irises at the pixel level. Unlike a hand-keypoint approach, it demands a lot of fingerprint and iris data to train the model.

U-Net: Convolutional Networks for Biomedical Image Segmentation, 2015, autoencoder. Can manipulate an image's detailed information, such as fingerprints, without causing discomfort to human vision. It doesn't consider ...

고해상도로 찍은 이미지에서의 손가락 지문 채취 방지에 관한 연구 (A Study on Preventing Fingerprint Extraction from High-Resolution Images), 2020

News
Chaos Computer Club breaks iris recognition system
Scientists Extract Fingerprints from Photos Taken From up to Three Meters Away



Proposed model

Step 1: Labeling & Annotating iris and fingerprints

Label and annotate fingertips and irises from a selfie dataset in instance segmentation format.
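
For reference, Ultralytics-style instance segmentation labels store one object per line as `<class-id> x1 y1 x2 y2 ... xn yn`, with polygon coordinates normalized to [0, 1]. The class ids (0 = fingertip, 1 = iris) and coordinates below are made-up examples, not actual annotations from our dataset.

```text
0 0.412 0.633 0.428 0.641 0.431 0.668 0.415 0.672
1 0.551 0.302 0.563 0.305 0.566 0.318 0.552 0.321
```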

Step 2: Train YOLOv8n Instance Segmentation

Train a YOLOv8 nano instance segmentation model on the dataset we made in Step 1.
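
A minimal sketch of this step with the Ultralytics API; the dataset config name `fingertip_iris.yaml` and the hyperparameters are assumptions, not the project's actual settings.

```python
# Sketch of Step 2: fine-tune a pretrained YOLOv8 nano segmentation model.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")           # pretrained YOLOv8n segmentation weights
model.train(data="fingertip_iris.yaml",  # assumed dataset config from Step 1 (paths + class names)
            epochs=100, imgsz=640)       # illustrative hyperparameters
metrics = model.val()                    # reports Box/Mask P, R, mAP50, mAP50-95
```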

Step 3: Train Reconstruction Model

Train autoencoder and U-Net architecture models with an identity loss so that the model reconstructs the input image as faithfully as it can. The bottleneck nature of the autoencoder architecture destroys subtle features such as fingerprints.
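
A minimal sketch of the idea, assuming a small PyTorch convolutional autoencoder and an L1 identity (reconstruction) loss; the architecture and hyperparameters are illustrative, not the project's exact model.

```python
# Sketch of Step 3: the network is trained to reproduce its input, and the
# bottleneck naturally discards fine detail such as fingerprint ridges.
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoEncoder()
criterion = nn.L1Loss()                                   # identity loss: output should match input
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

loader = [torch.rand(8, 3, 128, 128) for _ in range(4)]   # placeholder for real image batches
for batch in loader:
    recon = model(batch)
    loss = criterion(recon, batch)                        # reconstruct the input as closely as possible
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```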

Result (= Output image)

We can confirm that the fingerprint is effectively destroyed without causing discomfort to the human eye.



Using Dataset

We used Microsoft's ASL Citizen, the first crowdsourced isolated sign language video dataset, to train the YOLOv8n model.


This annotation task was automated using dlib.get_frontal_face_detector.
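
A rough sketch of how that automation might look; the eye-region crop derived from the face box is an assumption for illustration, not the project's actual annotation script.

```python
# Sketch: detect frontal faces with dlib and take the upper band of each face box
# as a rough eye/iris region to seed the annotation. Crop fractions are assumptions.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

img = cv2.imread("selfie.jpg")                 # assumed input image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector(gray, 1)                      # upsample once to catch small faces

for face in faces:
    x, y, w, h = face.left(), face.top(), face.width(), face.height()
    eye_region = img[y + h // 5 : y + h // 2, x : x + w]   # rough band containing the eyes/irises
```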


Additionally, we used the HaGRID dataset (FHD) and a Roboflow dataset to train the reconstruction models (U-Net, etc.).


Experiment

How we performed experiments

1. YOLOv8n Instance Segmentation

- Performance

Class   Images   Instances   Box(P)   Box(R)   mAP50   mAP50-95
all     217      666         0.831    0.725    0.82    0.456
-       217      422         0.856    0.988    0.988   0.581
-       217      244         0.806    0.461    0.653   0.331

- Speed

Speed: 0.9 ms preprocess, 5.5 ms inference, 0.0 ms loss, 3.0 ms postprocess (per image)


2. Reconstruction

- Speed

Model                   Inference time
AutoEncoder (Vanilla)   0.0014 s
AutoEncoder (Conv)      0.0036 s
U-Net                   0.05 s

- Output

Output comparison (left to right): Original, AutoEncoder (Vanilla), AutoEncoder (Conv), U-Net

Thanks to the skip connections in U-Net, reconstruction quality is very good. However, compared to the other autoencoders, the experiments show a drop in speed, which matters greatly for real-life, real-time applications. We tested the various U-Net models currently available and propose a U-Net-based reconstruction model that achieves the goal of modulation with less damage.
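
For intuition, a deliberately tiny U-Net sketch showing how a skip connection feeds encoder detail back into the decoder (illustrative only, not the project's actual architecture).

```python
# Tiny illustrative U-Net: the full-resolution encoder features are concatenated
# back into the decoder (skip connection), preserving detail at extra compute cost.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1)
        self.dec = nn.Sequential(nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        e1 = self.enc1(x)                 # full-resolution features
        e2 = self.enc2(e1)                # downsampled bottleneck
        d = self.up(e2)                   # upsample back to full resolution
        d = torch.cat([d, e1], dim=1)     # skip connection: reuse e1's detail
        return self.dec(d)

out = TinyUNet()(torch.rand(1, 3, 128, 128))   # -> (1, 3, 128, 128)
```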

To further reduce the number of parameters, we tried pruning, removing network weights based on their normalized magnitude. However, we did not adopt pruning because it shifted the color tone of the output, which could make the picture look unnatural.
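
The pruning experiment can be sketched with PyTorch's pruning utilities; the stand-in model and the 30% pruning ratio below are assumptions.

```python
# Sketch: L1 (magnitude-based) unstructured pruning of every conv layer.
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for the trained reconstruction model (the real one is U-Net_light).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1))

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)  # zero out the 30% smallest weights
        prune.remove(module, "weight")                            # bake the pruning mask into the weights
```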

- Speed (with CUDA)

Model        Inference time (CUDA)
U-Net3p      0.013 s
U-Net        0.0066 s
U-Net_light  0.0031 s

FPS = 1 / 0.0031 s ≈ 322

- Summary

U-Net

U-Net_light (Our suggestion)

- Result

The number of kernels per layer, and therefore the parameter count, has been drastically reduced.

This reliably improves inference speed.

This works because the features a fingerprint itself can express are limited anyway. Also, since reconstruction is not a task with a single correct answer, its performance indicators are less clear-cut than those used for segmentation, so the reduction was acceptable as long as the actual goal (modulation with less damage) was achieved.
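
A back-of-the-envelope sketch of why cutting the kernels per stage shrinks the model so much; the channel widths are assumed for illustration, not the project's actual configuration.

```python
# Illustrative only: effect of reduced channel widths on parameter count.
def conv_params(c_in, c_out, k=3):
    """Weights + biases of one k x k convolution layer."""
    return c_in * c_out * k * k + c_out

unet_channels = [64, 128, 256, 512]        # typical U-Net encoder widths
unet_light_channels = [16, 32, 64, 128]    # reduced widths for a lighter model

for name, chans in [("U-Net", unet_channels), ("U-Net_light", unet_light_channels)]:
    total = sum(conv_params(a, b) for a, b in zip([3] + chans[:-1], chans))
    print(name, total)                     # the lighter widths cut these layers' parameters ~16x
```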


3. Enhancement

When segmentation is incomplete, there is no guarantee that the autoencoder output will exactly match the color of the original image, so we apply a Gaussian filter to the mask so that colors blend naturally across the boundary.
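
A minimal sketch of this blending step with OpenCV, assuming a binary fingertip/iris mask and a reconstructed image of the same size as the original; the file names and kernel size are illustrative.

```python
# Sketch: soften the segmentation mask with a Gaussian filter and alpha-blend the
# reconstructed region into the original so the colors transition naturally.
import cv2
import numpy as np

original = cv2.imread("frame.jpg").astype(np.float32)          # original image
recon = cv2.imread("reconstructed.jpg").astype(np.float32)     # autoencoder/U-Net output, same size
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)            # 0/255 fingertip + iris mask

alpha = cv2.GaussianBlur(mask, (21, 21), 0).astype(np.float32) / 255.0
alpha = alpha[..., None]                                       # (H, W, 1) for broadcasting over BGR

blended = alpha * recon + (1.0 - alpha) * original             # soft transition at mask edges
cv2.imwrite("output.jpg", blended.astype(np.uint8))
```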



Total result

It works well on the original image, and the result is as expected.

Segmentation inference takes 5.5 ms, reconstruction 3.1 ms (0.0031 s), and enhancement 0.1 ms (0.0001 s), so the whole pipeline takes about 8.7 ms per image, i.e. roughly 115 FPS.

Fingerprints (part of the original image)

Before -> After

Original Image







Previous version

Check previous version
