Embrace your new look with Hair Segmentation by Fritz — Now available for iOS developers

Today, we’re excited to launch Hair Segmentation by Fritz, giving developers and users the ability to alter their hair with different colors, designs, or images.
Try it out for yourself. You can download our demo app on the App Store and play around with hair coloring.
What is Hair Segmentation?
Hair segmentation, an extension of image segmentation, is a computer vision task that generates a pixel-level mask of a user’s hair within images and videos (live, recorded, or downloaded; from front- or rear-facing cameras).
This feature brings a fun and engaging tool to photo and video editing apps — your users can change up their look for social media, try on a custom hair dye color, trick their friends into thinking they got a purple streak, or support their favorite sports team by stylizing their hair.
With Fritz, any developer can easily add this feature to their apps. In this post, we’ll show you how.
Set up your Fritz account and Xcode Project
Create a new project in Xcode. Choose your bundle ID and make sure that the project builds.
Next, sign up for a free Fritz account if you don’t already have one. Follow these instructions to initialize and configure our SDK for your app.
For a completed demo, download the iOS demo Xcode project.
Set Up Camera and Display Live Video
Next, we’re going to add the camera to our app. You can use Hair Segmentation on still images or video from the camera roll or live preview. For this tutorial, we’ll be using a real-time video feed from the camera.
First things first, we need to add camera permissions to the Info.plist file. Add the “Privacy - Camera Usage Description” key (NSCameraUsageDescription in raw form) with a description of why the app needs the camera:
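In source form, the entry looks like this (the description string is just an example):

```xml
<key>NSCameraUsageDescription</key>
<string>We use the camera to segment and restyle your hair in live video.</string>
```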
To display frames from the camera, we need to configure the AVCaptureVideoDataOutput object.
Eventually, we’re going to be passing frames from the video output to the hair segmentation model, but for now we’re going to stream them directly to a UIImageView.
First, we have to set the pixelBuffer format to kCVPixelFormatType_32BGRA for the segmentation model.
For this demo, we’ll be using the front camera so we can segment our own hair. We’ll set the videoOrientation to .portrait so that the image is fed into the captureOutput delegate function in the correct orientation.
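Here’s a minimal sketch of that setup. The class name, the cameraView property, and the queue label are our own choices, not from the demo project:

```swift
import AVFoundation
import UIKit

class ViewController: UIViewController {

    // The image view we'll stream frames into (and later, blended results).
    var cameraView: UIImageView!

    private let captureSession = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let sessionQueue = DispatchQueue(label: "com.example.hair.session")

    override func viewDidLoad() {
        super.viewDidLoad()
        cameraView = UIImageView(frame: view.bounds)
        cameraView.contentMode = .scaleAspectFill
        view.addSubview(cameraView)
        configureCaptureSession()
    }

    private func configureCaptureSession() {
        // Front camera, so we can segment our own hair.
        guard
            let device = AVCaptureDevice.default(
                .builtInWideAngleCamera, for: .video, position: .front),
            let input = try? AVCaptureDeviceInput(device: device),
            captureSession.canAddInput(input)
        else { return }
        captureSession.addInput(input)

        // The segmentation model expects 32BGRA pixel buffers.
        videoOutput.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        // (We add the AVCaptureVideoDataOutputSampleBufferDelegate conformance below.)
        videoOutput.setSampleBufferDelegate(self, queue: sessionQueue)
        guard captureSession.canAddOutput(videoOutput) else { return }
        captureSession.addOutput(videoOutput)

        // Deliver frames to captureOutput(_:didOutput:from:) in portrait orientation.
        videoOutput.connection(with: .video)?.videoOrientation = .portrait

        sessionQueue.async {
            self.captureSession.startRunning()
        }
    }
}
```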
To make sure we have everything hooked up correctly, let’s display the raw video coming from the camera. For reasons that will become apparent in the next step, we’re going to display the camera output via a UIImageView.
When we run the app, we should see normal video displayed. Here we make sure to update the cameraView asynchronously so we don’t block the UI thread:
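A sketch of the delegate method, reusing the cameraView image view from the setup above:

```swift
import AVFoundation
import UIKit

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Convert the raw frame into a UIImage we can display.
        let frame = UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer))

        // UIKit must only be touched on the main thread.
        DispatchQueue.main.async {
            self.cameraView.image = frame
        }
    }
}
```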
Run this and you should see live video in your app!
Create the Hair Predictor
Once you have video displaying in your preview view, it’s time to run the hair segmentation model.
Images can come from a camera, a photo roll, or live video.
Initialize the hair segmentation model as a variable in the view controller and run the model in the captureOutput function:
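A sketch of that wiring. Fritz class names vary slightly between SDK releases; FritzVisionHairSegmentationModel and FritzVisionImage below follow the naming in the Fritz documentation, so check the docs for the exact API in your version:

```swift
import AVFoundation
import Fritz

// In the view controller: create the model once and reuse it for every frame.
// (Assumed class name; see the Fritz docs for your SDK version.)
private lazy var visionModel = FritzVisionHairSegmentationModel()

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Wrapping the frame in a FritzVisionImage lets the SDK handle orientation.
    let fritzImage = FritzVisionImage(sampleBuffer: sampleBuffer, connection: connection)

    // Run hair segmentation on the current frame.
    guard let result = try? visionModel.predict(fritzImage) else { return }

    // result is a FritzVisionSegmentationResult; we build a hair mask from it next.
}
```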
The model’s output is a FritzVisionSegmentationResult object. For more details on the different access methods, take a look at the official documentation. We want a mask containing hair pixels. The easiest way to work with the result is to call its buildSingleClassMask function.
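For example, assuming the hair class constant is exposed as FritzVisionHairClass.hair (check the docs for the exact name):

```swift
// Build a UIImage mask of just the pixels the model classifies as hair.
guard let mask = result.buildSingleClassMask(forClass: FritzVisionHairClass.hair)
else { return }
```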
The output is a UIImage that can be overlaid on the model’s input image. The color of each pixel represents the class the model predicts. For our hair model, red pixels represent hair.
Blend mask with original image
Now that we have our hair mask, we can blend it with our original image.
Blending the image is as easy as calling fritzImage.blend(mask). We can then show the blended image — and voilà, we have a hair segmentation mask.
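Roughly, inside captureOutput (argument labels may differ by SDK version):

```swift
// Blend the hair mask into the original frame and display the result.
let blended = fritzImage.blend(mask)

DispatchQueue.main.async {
    self.cameraView.image = blended
}
```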
Here’s the final result of the blended mask:
Change the blending mode
When combining the mask with the original image, the blending mode determines how pixel values are combined. You can choose any CGBlendMode.
.softLight is the default mode, but .color and .hue also produce interesting results.
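Something like this (we’re assuming the mode is passed as a blendMode parameter; consult the Fritz docs for the exact signature):

```swift
// Tint the hair using the .color blend mode instead of the default .softLight.
let tinted = fritzImage.blend(mask, blendMode: .color)
```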
Tweak mask sensitivity
Finally, we’ll take a look at the segmentation mask itself. With the prediction result, you can produce masks of varying sensitivity. The two important parameters are clippingScoresAbove and zeroingScoresBelow. Confidence scores output by the model fall between 0 and 1. All confidence scores above clippingScoresAbove are set to 1, and all confidence scores below zeroingScoresBelow are set to 0. You can tune these parameters to create a soft edge.
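For example, a mask where anything the model scores above 0.7 counts as definite hair and anything below 0.3 is discarded, with scores in between fading out to a soft edge (the thresholds here are just example values, and we again assume the FritzVisionHairClass.hair constant):

```swift
let softMask = result.buildSingleClassMask(
    forClass: FritzVisionHairClass.hair,
    clippingScoresAbove: 0.7,  // scores above 0.7 become fully opaque
    zeroingScoresBelow: 0.3    // scores below 0.3 are dropped entirely
)
```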
Check out our GitHub repo for the finished implementation.
With Hair Segmentation, developers are able to create new “try on” experiences without any hassle (or hair dye). Simply add a couple of lines of code to create deeply engaging features that help distinguish your iOS app from the rest.
Create a free Fritz account to get started. For additional resources, dive into the documentation or see a full demonstration in the open source Heartbeat app.
Editor’s Note: Ready to dive into some code? Check out Fritz on GitHub. You’ll find open source, mobile-friendly implementations of popular machine and deep learning models, along with training scripts, project templates, and tools for building your own ML-powered iOS and Android apps.
Join us on Slack for help with technical problems, to share what you’re working on, or just chat with us about mobile development and machine learning. And follow us on Twitter and LinkedIn for all the latest content, news, and more from the mobile machine learning world.