
Blog: Embrace your new look with Hair Segmentation by Fritz—Now available for Android developers


Today, we’re excited to launch Hair Segmentation by Fritz, giving developers and users the ability to alter their hair with different colors, designs, or images.

Try it out for yourself on Android. You can download our demo app on the Google Play Store to play around with hair coloring.


What is Hair Segmentation?

Hair segmentation, an extension of image segmentation, is a computer vision task that generates a pixel-level mask of a user’s hair within images and videos (live, recorded, or downloaded; from front- or rear-facing cameras).

This feature brings a fun and engaging tool to photo and video editing apps — your users can change up their look for social media, try on a custom hair dye color, trick their friends into thinking they got a purple streak, or support their favorite sports team by stylizing their hair.

With Fritz, any developer can easily add this feature to their apps. In this post, we’ll show you how.


Set up your Fritz account and initialize the SDK

First, you’ll need a free Fritz account if you don’t already have one. Sign up here and follow these instructions to initialize and configure our SDK for your app.
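Once you have an API key, initialization is a single call in `onCreate`. A minimal sketch (the API key below is a placeholder — replace it with the key from your Fritz project page):

```java
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
import ai.fritz.core.Fritz;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Register this app with the Fritz SDK.
        // "<your-api-key>" is a placeholder for the key from your account.
        Fritz.configure(this, "<your-api-key>");
    }
}
```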

If you don’t have an app but want to get started quickly, I’d recommend cloning our camera template app.

git clone https://github.com/fritzlabs/fritz-android-tutorials.git

To follow along, import skeleton-live-video-app as a new project in Android Studio, then add the relevant code from this tutorial to the MainActivity.

Add the Hair Segmentation model via Gradle

In your app/build.gradle file, you can include the dependency with the following:

dependencies {
    implementation 'ai.fritz:core:3.3.+'
    implementation 'ai.fritz:vision:3.3.+'
    implementation 'ai.fritz:vision-hair-segmentation-model:3.3.+'
}

This includes the hair segmentation model in the app. Under the hood, we use TensorFlow Lite as our mobile machine learning framework. To make sure the model isn't compressed when the APK is built, you'll need to add the following to the same build file, inside the android block.

android {
    aaptOptions {
        noCompress "tflite"
    }
}

Note: You may also choose to lazy-load the hair segmentation model to reduce your initial APK size. Follow the directions here under “Get a Segmentation Predictor.”

Create a Segmentation Predictor with a Hair Segmentation model

To run segmentation on an image, we’ve created a Predictor class that simplifies all the pre- and post-processing for running a model in your app. Create a new predictor with the following:
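A sketch of the predictor setup, based on the Fritz SDK's public segmentation API (class and method names follow the Fritz docs and may differ slightly between SDK versions):

```java
// Classes below come from the ai.fritz:vision artifact added via Gradle.
// Load the on-device hair segmentation model bundled with the app.
SegmentationOnDeviceModel onDeviceModel =
        FritzVisionModels.getHairSegmentationOnDeviceModel(ModelVariant.FAST);

// Create a predictor that handles pre- and post-processing for the model.
FritzVisionSegmentationPredictor predictor =
        FritzVision.ImageSegmentation.getPredictor(onDeviceModel);
```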

Choose a hair color for prediction

You’ll use a mask overlay to blend the color with the original image. You can set the color of the overlay to whatever you’d like:

MaskType.HAIR.color = Color.RED;

This tells the predictor to color any detected hair pixels red.

Run prediction on an image to detect hair

Images can come from a camera, a photo roll, or live video.

We’ll use this sample image as the input.

In the code below, we convert an android.media.Image object (YUV_420_888 format) into a FritzVisionImage object to prepare it for prediction. This is usually the case when reading from a live camera capture session.
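A sketch of that conversion, assuming the orientation helper described in the Fritz docs (here, image and cameraId come from your camera capture session):

```java
// Determine the rotation needed so the model sees an upright frame.
ImageRotation rotation =
        FritzVisionOrientation.getImageRotationFromCamera(this, cameraId);

// Wrap the YUV_420_888 media Image for prediction.
FritzVisionImage visionImage = FritzVisionImage.fromMediaImage(image, rotation);
```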

You may also convert a Bitmap to a FritzVisionImage.

FritzVisionImage visionImage = FritzVisionImage.fromBitmap(bitmap);

After you’ve created a FritzVisionImage object, call predictor.predict.
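For example (the result type name follows the Fritz docs):

```java
// Run the model on the image. This call blocks, so invoke it off the
// UI thread, e.g. from the camera session's background handler.
FritzVisionSegmentationResult segmentResult = predictor.predict(visionImage);
```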

This will return a segmentResult that you can use to display the hair mask. For more details on the different access methods, take a look at the official documentation.

Blend the mask onto the original image

Now that we have the result from the model, let’s extract the mask and blend it with the pixels on the original image.

First, pick one of three blend modes:

Soft Light (left); Color (middle); Hue (right)
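To give a feel for what a blend mode actually computes, here's the standard soft-light compositing formula (the W3C definition) for a single normalized color channel, in plain Java. This illustrates the math only — it is not Fritz's internal implementation:

```java
class SoftLight {
    // W3C soft-light blend for one channel, values normalized to 0..1.
    // backdrop = original image channel, source = mask overlay channel.
    static double blend(double backdrop, double source) {
        if (source <= 0.5) {
            // Darker sources darken the backdrop.
            return backdrop - (1 - 2 * source) * backdrop * (1 - backdrop);
        }
        // Brighter sources lighten it, via the piecewise helper D(x).
        double d = backdrop <= 0.25
                ? ((16 * backdrop - 12) * backdrop + 4) * backdrop
                : Math.sqrt(backdrop);
        return backdrop + (2 * source - 1) * (d - backdrop);
    }
}
```

A mid-gray source (0.5) leaves the backdrop unchanged, which is why soft light preserves the original hair texture while shifting its tone.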

Next, let’s extract the mask of the detected hair in the image. The Segmentation Predictor has a method called createMaskOverlayBitmap that returns a colored bitmap of the classified pixels. In this case, red indicates detected hair.
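A sketch of that call:

```java
// Build a colored bitmap of the detected hair pixels
// (red, per the MaskType.HAIR.color set earlier).
Bitmap maskBitmap = segmentResult.createMaskOverlayBitmap();
```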

maskBitmap

Finally, let’s blend maskBitmap with the original image.
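A sketch of the blend, assuming the blend method on FritzVisionImage and the BlendMode constants from the Fritz docs:

```java
// Composite the red hair mask onto the original frame
// using the blend mode chosen above.
Bitmap blendedBitmap = visionImage.blend(maskBitmap, BlendMode.SOFT_LIGHT);
```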

Here’s the final result of the blendedBitmap:

The original image (left) is blended with the masked bitmap (middle; red = hair) to create the blended result (right)

Check out our GitHub repo for the finished implementation.


With Hair Segmentation, developers are able to create new “try on” experiences without any hassle (or hair dye). Simply add a couple of lines of code to create deeply engaging features that help distinguish your Android app from the rest.

Create a free Fritz account to get started. For additional resources, dive into the documentation or see a full demonstration in the open source Heartbeat app.


Editor’s Note: Ready to dive into some code? Check out Fritz on GitHub. You’ll find open source, mobile-friendly implementations of the popular machine and deep learning models along with training scripts, project templates, and tools for building your own ML-powered iOS and Android apps.

Join us on Slack for help with technical problems, to share what you’re working on, or just chat with us about mobile development and machine learning. And follow us on Twitter and LinkedIn for all the latest content, news, and more from the mobile machine learning world.

Source: Artificial Intelligence on Medium
