Heartbeat Newsletter Vol. 48


Fritz Hair Segmentation, pose estimation for mobile, the power (and limitations) of synthetic data, and more

Add Hair Segmentation to your app

Hair Segmentation by Fritz gives your app’s users the ability to alter hair color, style, and appearance in images and video. Everything runs on-device for both Android and iOS.

Hair segmentation, an extension of image segmentation, is a computer vision task that generates a pixel-level mask of a user’s hair within images and videos (live, recorded, or downloaded; from front- or rear-facing cameras). This brings a fun and engaging feature to photo and video editing apps — your users can change up their look for social media, try on a custom hair dye color, or create new styles for their favorite memes and gifs.
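To give a rough idea of what you can do once you have such a mask, here is a minimal Swift sketch (not the Fritz API; the helper name and the assumption that the mask arrives as a grayscale CIImage are illustrative) that tints the hair region of a photo with Core Image:

```swift
import CoreImage
import UIKit

// Tint the hair region of `photo` using a grayscale segmentation mask
// (white = hair, black = background). Illustrative helper, not the Fritz API.
func recolorHair(photo: UIImage, mask: CIImage, tint: CIColor) -> UIImage? {
    guard let input = CIImage(image: photo) else { return nil }

    // A solid-color layer the size of the photo, used as the new hair color.
    let colorLayer = CIImage(color: tint).cropped(to: input.extent)

    // Where the mask is white, show the tint; elsewhere, keep the original photo.
    let blend = CIFilter(name: "CIBlendWithMask")!
    blend.setValue(colorLayer, forKey: kCIInputImageKey)
    blend.setValue(input, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)

    guard let output = blend.outputImage,
          let cgImage = CIContext().createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

A production app would blend the tint more subtly (for example, preserving luminance), but the mask-driven compositing shown here is the core idea.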

With Fritz, you can now add Hair Segmentation to your iOS and Android apps.

Keep reading to see how.

[iOS] [Android]

CODE / LIBRARIES

christiankellernc / styletransfer (iOS)

Create your own stylized images directly on your mobile device. Lets users take any photograph and turn it into an image that looks like a painting. The app is written in Swift and uses Python with TensorFlow and Keras to train the models. [Link]

Neno0o / TFLite-Android-Helper
This library helps you get started with TensorFlow Lite on Android. Written entirely in Kotlin. [Link]

BohdanNikoletti / SFaceCompare
Simple library for iOS to find and compare faces. Works on top of dlib and OpenCV. [Link]

edvardHua / PoseEstimationForMobile
Real-time single person pose estimation for Android and iOS. [Link]

LEARNING

Synthetic data: A bridge over the data moat

Over the past few years, a new data source has emerged that is radically changing the economics of machine learning: synthetic data. This article provides a high-level overview of its power and its limitations. [Link]

FastAI Sentiment Analysis

Learn how to analyze the sentiment of Tweets using the FastAI deep learning library. [Link]

How to Get a Core ML Model to Produce Images as Output

Apple’s conversion tools have decent support for machine learning models with images as input. However, for models that produce images as output, support is a bit lacking. This tutorial looks to bridge that gap. [Link]
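As a rough illustration of the Swift side of this, here is a minimal sketch, assuming the converted model's output feature has been declared as an image so Core ML hands back a CVPixelBuffer (the helper name is illustrative):

```swift
import CoreImage
import CoreML
import UIKit

// Turn a CVPixelBuffer returned by a Core ML model into a UIImage for display.
// Assumes the model's output feature was declared as an image during conversion,
// so the generated Swift interface exposes it as a CVPixelBuffer.
func uiImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

If the output instead arrives as an MLMultiArray, the tutorial linked above covers how to adjust the model spec so you get an image output in the first place.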


Source: Artificial Intelligence on Medium
