Blog: Create a web app that shows age estimates for detected human faces
The IBM Model Asset Exchange (MAX) gives application developers without data science experience easy access to prebuilt machine learning models. This code pattern shows how to create a simple web application that asks the user for permission to access the webcam and then visualizes the output of the Facial Age Estimator MAX model. Specifically, the web app sends streaming video frames from the webcam to the MAX model, receives the estimated ages and bounding boxes, and displays the results in the web UI.
A person's biological age is a useful signal for many applications, such as surveillance or product recommendations. Everyday commercial devices such as mobile phones and webcams already produce the necessary visual data (images and videos). Given human faces in that visual data, the Facial Age Estimator model predicts the age of each detected face. The predicted ages can then feed downstream tasks such as grouping, which segments the observed people into cohorts for different activities.
In this code pattern, we use one of the models from the Model Asset Exchange, an exchange where developers can find and experiment with open source deep learning models. Specifically, we use the Facial Age Estimator to create a web application that detects human faces and overlays each estimated age on the bounding box of the associated face. The web application provides a user-friendly interface backed by a lightweight Python server. The server takes webcam images as input through the UI and sends them to the model's REST endpoint, which is set up using the Docker image provided on MAX. The web UI then displays the estimated age with the associated bounding box for each person.
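The request/response flow described above can be sketched in a few lines of Python. Note that the endpoint URL and port, the multipart field name `image`, and the response fields `age_estimation` and `detection_box` are assumptions based on the typical MAX model REST API shape; check the model's API docs for your deployment:

```python
from typing import List, Tuple

# Hypothetical endpoint for the Facial Age Estimator container;
# host, port, and route may differ in your deployment.
MODEL_ENDPOINT = "http://localhost:5000/model/predict"


def parse_predictions(response_json: dict) -> List[Tuple[int, list]]:
    """Extract (age, bounding_box) pairs from the model's JSON response.

    Assumes each prediction carries an 'age_estimation' value and a
    'detection_box' list of coordinates for the detected face.
    """
    return [
        (p["age_estimation"], p["detection_box"])
        for p in response_json.get("predictions", [])
    ]


def estimate_ages(jpeg_bytes: bytes) -> List[Tuple[int, list]]:
    """Forward one webcam frame to the model's REST endpoint (sketch only)."""
    import requests  # third-party: pip install requests
    resp = requests.post(
        MODEL_ENDPOINT,
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
    )
    resp.raise_for_status()
    return parse_predictions(resp.json())


# Illustrative response shape (made-up values, not real model output):
sample = {
    "status": "ok",
    "predictions": [
        {"age_estimation": 31, "detection_box": [0.21, 0.35, 0.58, 0.62]}
    ],
}
print(parse_predictions(sample))
```

The web UI would call `estimate_ages` once per captured frame and draw each returned box, labeled with its age, on top of the video element.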
You can read the full code pattern on IBM Developer.
Originally published at https://developer.ibm.com.