Blog: Building your own Machine Learning API Gateway — Object, Face and Gender detection — Part III

By oZoneDev
Okay, Part I was a baby, Part II a kid, and Part III is a grownup. I’m trying to look smart here. Glasses…

The last article in the trifecta.

So what did we learn so far?

  • Part I — explained the thinking process and the selection of tools/libraries
  • Part II — was all about the core plumbing for any generic API gateway: authentication/authorization, DB, API routes

So far, we walked through:

  • Creating a user record
  • How to invoke /api/v1/login to get a token
  • How to send the token to api/v1/detect/object with different parameters to request analysis
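The recap above can be sketched as a small client. This is a hypothetical sketch, not code from the article: it assumes the gateway from Parts I/II runs on localhost:5000 and that the login route returns a JSON field named access_token (an assumption; adjust to your deployment).

```python
# Hypothetical client for the gateway from Parts I/II. The base URL and the
# "access_token" field name are assumptions; adjust to your deployment.
BASE = "http://localhost:5000/api/v1"

def auth_header(token):
    # Build the Bearer header the gateway's protected routes expect
    return {"Authorization": f"Bearer {token}"}

def login(username, password):
    # Step 1: trade credentials for a token via /api/v1/login
    import requests  # network dependency kept local to this function
    r = requests.post(f"{BASE}/login",
                      json={"username": username, "password": password})
    r.raise_for_status()
    return r.json()["access_token"]

def detect_object(token, image_path, delete=True):
    # Step 2: send the image to /api/v1/detect/object as multipart form data,
    # with query parameters and the Bearer token
    import requests
    with open(image_path, "rb") as f:
        r = requests.post(f"{BASE}/detect/object",
                          params={"delete": "true" if delete else "false"},
                          files={"file": f},
                          headers=auth_header(token))
    r.raise_for_status()
    return r.json()  # list of {"type", "confidence", "box"} dicts
```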

Object Detection: The Object class implementation

Recall that in Part II, in api.py, we set up a route that says: if we get an api/v1/detect/object request, we pass it on to a Detect class that basically:

  • stored the uploaded file in question
  • invoked a detect method in an ObjectDetect class

Well, this is the class:

That’s it. Not a fragment. The full code.
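In case the embedded gist doesn’t render here, this is a minimal sketch of what such an ObjectDetect class looks like, assuming cvlib’s detect_common_objects and draw_bbox APIs. The class and method names follow the article’s description, not the original gist, and the line numbers in the walkthrough below refer to the original gist, not this sketch.

```python
# Hypothetical sketch of the ObjectDetect class described in the article.
# Assumes cvlib's detect_common_objects/draw_bbox API; names are illustrative.
import os

class ObjectDetect:
    def detect(self, file_path, delete=True):
        import cv2                                    # heavy deps imported lazily;
        import cvlib as cv                            # cvlib fetches YOLOv3 weights
        from cvlib.object_detection import draw_bbox  # on first use

        # read the file that was received from the client app
        img = cv2.imread(file_path)

        # the one line of machine learning: labels, boxes and confidences
        bbox, label, conf = cv.detect_common_objects(img)

        if not delete:
            # keep a copy with the bounding boxes and labels drawn on it,
            # e.g. <uuid>-object.jpg next to the original download
            base, ext = os.path.splitext(file_path)
            cv2.imwrite(f"{base}-object{ext}", draw_bbox(img, bbox, label, conf))
        else:
            os.remove(file_path)

        return format_results(label, conf, bbox)

def format_results(labels, confs, boxes):
    # shape the results like the JSON responses the article shows
    return [{"type": l, "confidence": f"{c * 100:.2f}%", "box": list(b)}
            for l, c, b in zip(labels, confs, boxes)]
```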

  • Lines 1–4: Import Arun Ponnusamy’s cvlib library and a convenience function to draw boxes
  • Lines 15–18: Simply read the file that was received from the client app
  • Line 19: Umm, that’s the machine learning code. One line. cvlib wraps it all into a line. That’s why I said it was really easy. It returns what it found (labels), where it found it (rectangular coordinates) and a confidence score (confidence). Internally, cvlib uses YOLOv3 for object detection, which is a solid model. If you want to dive in deeper, read his cvlib code
  • Lines 21–24: If you did not pass a delete=true query parameter, we write the image with the bounding boxes and labels. So if delete=false, the server will store both the downloaded image, say 18f38d42-afcf-4af0-b3b6-dfc9f8e2e8f8.jpg, and 18f38d42-afcf-4af0-b3b6-dfc9f8e2e8f8-object.jpg, like so:
curl -XPOST "http://localhost:5000/api/v1/detect/object?delete=false" -F "file=@1.jpg" -H "Authorization: Bearer ${ACCESS_TOKEN}"

[{"type": "car", "confidence": "99.95%", "box": [30, 242, 520, 456]}, {"type": "person", "confidence": "99.72%", "box": [542, 278, 606, 432]}, {"type": "person", "confidence": "99.05%", "box": [603, 344, 633, 432]}]

[Image: original image]
[Image: processed image after object detection]

Or for Face / Gender Detection:

curl -XPOST "http://localhost:5000/api/v1/detect/object?type=face&gender=true&delete=false" -F "file=@2.jpg" -H "Authorization: Bearer ${ACCESS_TOKEN}"

[{"type": "face", "confidence": "99.92%", "box": [1356, 228, 1905, 959], "gender": "woman", "gender_confidence": "99.85%"}, {"type": "face", "confidence": "97.49%", "box": [2136, 364, 2929, 1361], "gender": "man", "gender_confidence": "99.64%"}, {"type": "face", "confidence": "97.33%", "box": [3044, 770, 3890, 1822], "gender": "woman", "gender_confidence": "100.00%"}]

[Image: original image]
[Image: processed image after face + gender detection]

Right, Face Detection

  • Lines 13–16: Like object detection, read the image
  • Line 17: Face detection. Really.
  • Lines 35–45: If gender=true in the URL, then for each face we pass just the face crop of the image to gender detection. cvlib then applies gender detection on those fragments
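As with the object class, here is a minimal hypothetical sketch of such a face class, in case the embedded gist doesn’t render. It assumes cvlib’s detect_face and detect_gender APIs; the class and helper names are illustrative, and the line numbers above refer to the original gist.

```python
# Hypothetical FaceDetect sketch, assuming cvlib's detect_face/detect_gender
# APIs; class and helper names are illustrative, not the original gist's.
def pct(c):
    # format a 0..1 confidence the way the article's JSON output shows it
    return f"{c * 100:.2f}%"

class FaceDetect:
    def detect(self, file_path, gender=False):
        import cv2          # heavy deps imported lazily;
        import cvlib as cv  # model weights download on first use
        import numpy as np

        img = cv2.imread(file_path)

        # face detection in one call: each box is [startX, startY, endX, endY]
        faces, confidences = cv.detect_face(img)

        results = []
        for (sx, sy, ex, ey), conf in zip(faces, confidences):
            entry = {"type": "face", "confidence": pct(conf),
                     "box": [sx, sy, ex, ey]}
            if gender:
                # pass just the face crop to gender detection, as described above
                labels, gconf = cv.detect_gender(np.copy(img[sy:ey, sx:ex]))
                idx = int(np.argmax(gconf))
                entry["gender"] = labels[idx]
                entry["gender_confidence"] = pct(gconf[idx])
            results.append(entry)
        return results
```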

That’s it. I told you the Machine Learning part would be anti-climactic. It’s a good thing: there are many good libraries out there just waiting for you to pick up and use. Arun’s library is one of the easiest. Once you get the hang of things, explore more powerful models, like dlib’s CNN face recognition model (not just detection) via Adam Geitgey’s face_recognition library, which I use extensively in my own ES hooks.

Conclusion

My goal was to create my own API gateway so I could use it from projects that needed it. You’d think performance wouldn’t be great, but I’m very satisfied. As long as you keep image sizes reasonable (I resize to 800), I get good performance. Most of the time goes into the actual detection (I’m not using a GPU); the HTTP overhead is small compared to it. Plus I now get to run my ML code on a different machine, which is very useful for embedded projects, like when I use Raspberry Pis.
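The resize-to-800 trick above can be sketched like so. The helper names and the never-upscale choice are my own; only the 800-pixel cap comes from the article.

```python
# Illustrative resize helper: cap the longest side at 800 pixels (the size
# mentioned above) while preserving aspect ratio. Names are my own.
def target_size(w, h, max_dim=800):
    # compute output dimensions; never upscale a smaller image
    scale = max_dim / max(w, h)
    if scale >= 1.0:
        return w, h
    return int(w * scale), int(h * scale)

def resize_max(img, max_dim=800):
    import cv2  # lazy import so the pure math above needs no dependencies
    h, w = img.shape[:2]
    tw, th = target_size(w, h, max_dim)
    return img if (tw, th) == (w, h) else cv2.resize(img, (tw, th))
```

Smaller inputs mean the detector chews through fewer pixels, which is where most of the latency lives on a CPU.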

Oh yes, I mentioned I’d show you a programmatic example of how to write a program to call this API gateway with live feed. Well, I got lazy. Here’s a link to my example.

I ran it on a video I downloaded (you can point it at a video, a webcam, or whatever), and here is an output example:
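If the linked example isn’t reachable from here, a hedged sketch of such a live-feed client follows. It assumes OpenCV for frame capture and the detect endpoint from the curl examples above; the frame-skip value and function names are arbitrary choices for this sketch.

```python
# Hedged sketch of a live-feed client; the linked example is the real thing.
# Assumes OpenCV for capture and the gateway endpoint shown earlier.
def keep_frame(n, every_nth=10):
    # process every Nth frame so detection latency doesn't stall the stream
    return n % every_nth == 0

def stream_to_gateway(source, token, every_nth=10,
                      url="http://localhost:5000/api/v1/detect/object"):
    import cv2
    import requests
    cap = cv2.VideoCapture(source)  # source: a video file, a stream URL, or 0 for a webcam
    n = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # end of file or dropped stream
        n += 1
        if not keep_frame(n, every_nth):
            continue
        # JPEG-encode the frame in memory and send it as multipart form data
        ok, buf = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        r = requests.post(url,
                          files={"file": ("frame.jpg", buf.tobytes(), "image/jpeg")},
                          headers={"Authorization": f"Bearer {token}"})
        print(r.json())  # the same label/confidence/box list shown earlier
    cap.release()
```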

Read Part I, Part II

Source: Artificial Intelligence on Medium
