Why this name?
The name of the project is simply an exclamation for steering through monotonous work in today's world of spreading automation technologies.
A free, offline, real-time, open-source (MIT-licensed) web app that helps organisers of any event admit only authorised/invited people, using face-recognition technology or a QR code.
It automates the task of authenticating people, so there is no longer any need to check invitation cards, verify that a required app is installed (as in most college fests), and so on.
The project uses the MaterializeCSS framework, which is based on Google's Material Design guidelines.
The project is truly offline and is built on a pre-trained dlib model that achieves 99.38% accuracy (on adult faces) on the Labeled Faces in the Wild dataset.
The user needs a modern browser (preferably Chrome) and a UNIX-based system (macOS or Linux) to run and interact with the project.
The User Interface of the Web App when opened on Google Chrome.
The user first has to feed images of the invited people into the app, store their information in the database, and train the model on those images before use.
The user can choose to track incoming or outgoing people. First, a person's face is recognised through the camera, and the name is displayed (in red) if his/her information is saved on the server. The person's information is then compared against the matching record in the MongoDB database. If the information matches and the person is allowed in, the name is shown in green.
If this method fails for any reason, there is an additional fall-back method: the QR code. The person can show his/her QR code, which stores a unique ID that is then looked up in the MongoDB database for authentication.
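The QR fall-back boils down to looking up the decoded unique ID and deciding entry. A minimal sketch, using an in-memory list as a stand-in for the MongoDB collection; the field names (`uid`, `name`, `allowed`) are illustrative assumptions, not the project's actual schema:

```python
# Stand-in for the MongoDB collection of invited people; in the real app
# this would be a pymongo find_one() against the guests collection.
guests = [
    {"uid": "a1b2c3", "name": "Barack Obama", "allowed": True},
    {"uid": "d4e5f6", "name": "Shubham Malik", "allowed": False},
]

def authenticate_qr(decoded_uid):
    """Look up the unique ID decoded from the QR code and decide entry.

    Returns (authenticated, message) so the UI can notify the user whether
    the QR code is legit and whether the person is authenticated.
    """
    record = next((g for g in guests if g["uid"] == decoded_uid), None)
    if record is None:
        return False, "QR code is not legit: no such ID"
    if not record["allowed"]:
        return False, record["name"] + " is not authorised"
    return True, record["name"] + " authenticated"

print(authenticate_qr("a1b2c3"))  # (True, 'Barack Obama authenticated')
```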
There are 3 layers of security.
- Face Recognition
- Cross-verification against the MongoDB database
- Physical cross-check of the information
The front end of the app also shows all the information about the person stored in the database. In case of any ambiguity, the person can be asked to confirm confidential details stored in the database.
Barack Obama's face has been recognised and his data cross-checked against the information stored in the MongoDB database, so his name is shown in green.
Shubham Malik (me) has his face verified, so the name is shown, but the information could not be cross-checked with the database because no such record was saved, so it is shown in red.
A small blue QR-code icon is used to read a QR code when a person cannot be identified, or may be misidentified (as can happen with children); an unidentified person's name is shown as Unknown. The user first has to halt the app so that the displayed information stops updating in real time, then click the QR-code button to scan the code. The app notifies the user whether the QR code is legit and whether the person is authenticated. After scanning is done, the user resumes the app so that the data updates in real time once again.
When a person is leaving, check the Out button to log his/her outgoing time. Helpful notifications are shown to the user, for example during authentication or when an error occurs.
Authenticating people this way is free, needs NO INTERNET, and has no API limits, so it should be a breeze.
In the future, a data-insights feature will be added for analysing the data using charts, graphs, etc.
All of this happens on the local machine, so no internet connection is required.
DATA STRUCTURES and ALGORITHMS
Currently, only the face-recognition pipeline uses algorithms: detecting faces, computing their encodings, and matching each encoding against the known encodings using Euclidean distance (the face_recognition Python package, which provides these functions, is used for this). The dlib model is pre-trained, so no additional training algorithms are applied as of now. As far as data structures are concerned, NumPy arrays are used to store the multi-dimensional face encodings.
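The matching step can be sketched with NumPy alone. This mirrors what face_recognition's `face_distance`/`compare_faces` do internally (a Euclidean-distance scan against the known encodings, with a default tolerance of 0.6); the random vectors below are stand-ins for real 128-dimensional dlib encodings:

```python
import numpy as np

# Each face encoding is a 128-dimensional vector produced by the dlib model
# (via face_recognition.face_encodings). Random stand-ins are used here.
rng = np.random.default_rng(0)
known_encodings = rng.normal(size=(3, 128))  # encodings of invited people
known_names = ["alice", "bob", "carol"]

def match_face(encoding, known, names, tolerance=0.6):
    """Return the best-matching name, or None if no known face is close enough.

    A Euclidean-distance linear scan over all known encodings, which is
    essentially what face_recognition.compare_faces does (default tolerance 0.6).
    """
    distances = np.linalg.norm(known - encoding, axis=1)  # one distance per person
    best = int(np.argmin(distances))
    return names[best] if distances[best] <= tolerance else None

# A probe encoding very close to bob's should match "bob".
probe = known_encodings[1] + 0.001
print(match_face(probe, known_encodings, known_names))  # prints: bob
```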
Currently, it uses a linear search to find the matching encoding within the allowed error limit, in O(N) time. In the future, I would like to bring this down to O(log N), but the encodings are high-dimensional (128 dimensions), so ordinary binary search does not apply directly and a spatial index such as a KD-tree would be needed. It would take time.
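One possible speedup is a KD-tree, sketched below with SciPy (this is my suggested alternative, not something the project uses). Caveat: in 128 dimensions KD-trees suffer from the curse of dimensionality and query time often degrades back toward O(N), so the gain only shows up for large guest lists, if at all:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
known = rng.normal(size=(1000, 128))  # stand-in face encodings

# Build the tree once after training; each query then avoids an explicit
# full linear scan (average-case, subject to the dimensionality caveat above).
tree = cKDTree(known)

probe = known[42] + 0.001          # encoding very close to person #42
distance, index = tree.query(probe)  # nearest neighbour and its distance
print(index)  # prints: 42
```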
- Visual Studio Code
- Google Chrome Web Browser
- MongoDB Database
These are some of the use cases I can see, though I have not used the app under real-world conditions myself.
This project would find its use at every place where authentication is required but still done manually: college fests, events, meetups, corporate functions, parties, etc.
Opportunity for Community
This project leverages the power of both Node.js and Python in a single app, and the same pattern can be used for other purposes too. Popular Python libraries like TensorFlow and PyTorch can be used from a Node.js web app to harness the power of machine learning: create a model for your use case and use it without sending data to third-party companies with API limits. The model can be trained online or offline on your own data, improving accuracy over time with complete privacy. Language should no longer be a barrier to including ML in your app; you don't need to know the complete Python development stack to use Python ML libraries in your Node.js web app. Node is here to stay :).
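One common way to wire the two runtimes together is to have Node.js spawn the Python side as a child process (e.g. via `child_process.spawn`) and exchange one JSON message per line over stdin/stdout. A minimal sketch of the Python half of such a bridge; the `action` names and response fields are illustrative assumptions, not this project's actual protocol:

```python
import io
import json

def handle(request):
    """Dispatch one JSON request from the Node.js side.

    In the real app a "recognize" action would run the dlib/face_recognition
    pipeline; here it just returns a canned response.
    """
    if request.get("action") == "recognize":
        return {"status": "ok", "name": "unknown"}
    return {"status": "error", "message": "unknown action"}

def serve(stream):
    """Read one JSON request per line, yield one JSON response per line."""
    for line in stream:
        yield json.dumps(handle(json.loads(line)))

# Simulate one round trip (in production, `stream` would be sys.stdin and
# each response would be printed with flush=True for Node to read).
fake_stdin = io.StringIO(json.dumps({"action": "recognize"}) + "\n")
for response in serve(fake_stdin):
    print(response)  # prints: {"status": "ok", "name": "unknown"}
```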
It can also be used by educational institutions to let students see and experience the power of machine learning and deep learning through an interactive graphical user interface (GUI). It would encourage them to build on this project to help other people.
Experience it by running it on your machine and contact me for hugs or bugs!
WHY IT’S DIFFERENT?
- A one-of-a-kind project, maybe the only one of its kind. There is no backup code on the internet to consult if something goes wrong. An open-ended project.
- The Stack Overflow community closed/downvoted similar questions about “face recognition in Node.js”. This project is a torch-bearer for all of those.
- Before this project: this(possible with 3rd party APIs), this(closed by community).
- After this project: this(relevant solution, my first upvote), this(answer edited to include this project).
I googled a lot. I used Stack Overflow maybe 50+ times and Google Search 100+ times. I read this post in the initial stages to understand the basics; after that, I was on my own, along with Google Search. A lot more is in the pipeline. Hint: speech support; let your imagination fly.
- Received offers from four online publications to publish my post (where I shared my experience of building this project: how it works, why I did it, how I did it, what it means to me, whom it is for…).
- Published in Hacker Noon. Link to article
- 75+ stars, 10+ forks on GitHub
- Built in a few weeks. Open-Sourced on July 23, 2017. MIT Licensed.
- A great learning experience that I couldn’t have got from an already-done project.
I came to know about ProGeek when it was launched. During the summer vacation, an incident happened to me that led to this project, and I thought it might be a good idea to have a working prototype along with the idea.
Disclaimer: This content belongs to geeksforgeeks, source: http://geeksforgeeks.org