
DeepPiCar: Part 3 (Intro to TensorFlow and OpenCV)


Executive Summary

Welcome back! If you have been following my previous two posts on DeepPiCar, you should have a running robotic car that can be controlled via Python 3 code. In this article, we will give your car the superpowers of Computer Vision and Deep Learning. By the end of the article, it will be transformed into a true DeepPiCar that can detect and identify objects in your room.

OpenCV for Computer Vision

Note that the only input to our PiCar’s logic is a USB video camera. The camera gives us a live video feed, which is essentially a sequence of pictures. We will use OpenCV, a powerful open-source computer vision library, to capture and transform these pictures so that we can make sense of what the camera is seeing. Run the following commands to install it on your Pi.

Install OpenCV

# install all dependent libraries of OpenCV (yes, this is one long command)
pi@raspberrypi:~ $ sudo apt-get install libcblas-dev -y && sudo apt-get install libhdf5-dev -y && sudo apt-get install libhdf5-serial-dev -y && sudo apt-get install libatlas-base-dev -y && sudo apt-get install libjasper-dev -y && sudo apt-get install libqtgui4 -y && sudo apt-get install libqt4-test -y
# install OpenCV
pi@raspberrypi:~ $ pip3 install opencv-python
Collecting opencv-python
Downloading https://www.piwheels.org/simple/opencv-python/opencv_python-3.4.4.19-cp35-cp35m-linux_armv7l.whl (7.4MB)
100% |████████████████████████████████| 7.4MB 43kB/s
Collecting numpy>=1.12.1 (from opencv-python)
Downloading https://files.pythonhosted.org/packages/cf/8d/6345b4f32b37945fedc1e027e83970005fc9c699068d2f566b82826515f2/numpy-1.16.2.zip (5.1MB)
100% |████████████████████████████████| 5.1MB 60kB/s
Building wheels for collected packages: numpy
Running setup.py bdist_wheel for numpy ... done
Stored in directory: /home/pi/.cache/pip/wheels/8c/a8/49/e458f0fdbc4fe3759be6b9371e172b3f0e82c09f5a750e977e
Successfully built numpy
Installing collected packages: numpy, opencv-python
Successfully installed numpy-1.16.2 opencv-python-3.4.4.19

Test OpenCV Installation

Here is the most basic test to see if OpenCV is installed. The Python module name for OpenCV is cv2. If you don’t see any error when running this command, then your OpenCV module should be installed correctly.

pi@raspberrypi:~ $ python3 -c "import cv2"
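For a bit more confirmation, you can also print the installed version. Given the wheel installed above (opencv_python-3.4.4.19), it should report a version string along the lines of 3.4.4.

pi@raspberrypi:~ $ python3 -c "import cv2; print(cv2.__version__)"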

Let’s try to capture some videos from our USB camera.

pi@raspberrypi:~ $ cd
pi@raspberrypi:~ $ git clone https://github.com/dctian/DeepPiCar.git
Cloning into 'DeepPiCar'...
remote: Enumerating objects: 482, done.
remote: Counting objects: 100% (482/482), done.
remote: Compressing objects: 100% (348/348), done.
remote: Total 482 (delta 185), reused 415 (delta 129), pack-reused 0
Receiving objects: 100% (482/482), 76.33 MiB | 580.00 KiB/s, done.
Resolving deltas: 100% (185/185), done.
pi@raspberrypi:~ $ cd DeepPiCar/driver/code
pi@raspberrypi:~/DeepPiCar/driver/code $ python3 opencv_test.py

If you see two live video screens, one colored and one black/white, then your OpenCV installation is working! Press q to quit the test. Essentially, the program takes the images captured from the camera and displays them as is (the Original window), and then converts each image to black and white (the B/W window). This matters for our lane line detection project, where we will bring up as many as 9-10 screens, because the original video images must be processed through many stages.
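For reference, here is a minimal sketch of what a test script like this does. It illustrates the idea behind opencv_test.py rather than its exact contents, and the camera index 0 is an assumption that may need to change for your setup.

import cv2

cap = cv2.VideoCapture(0)                            # 0 = first USB camera (assumption)
while cap.isOpened():
    ret, frame = cap.read()                          # grab one frame from the camera
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # convert to black and white
    cv2.imshow('Original', frame)                    # colored window
    cv2.imshow('B/W', gray)                          # black and white window
    if cv2.waitKey(1) & 0xFF == ord('q'):            # press q to quit
        break
cap.release()
cv2.destroyAllWindows()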

TensorFlow for EdgeTPU

Google’s TensorFlow is currently the most popular Python library for Deep Learning. It can be used for image recognition, face detection, natural language processing, and many other applications. There are two methods to install TensorFlow on a Raspberry Pi:

  • TensorFlow for CPU
  • TensorFlow for Edge TPU Co-Processor (the $75 Coral-branded USB stick)

The first method is a more involved process that can take a few hours. Moreover, since the TensorFlow models will run on the CPU, performance will be sluggish.

The second method is preferred, as the installation is simple and we get a big performance gain, since TensorFlow models will run on the Edge TPU coprocessor instead of the CPU. However, at the time of writing, the Edge TPU cannot run every model that runs on the CPU, so we have to choose our model architecture carefully and make sure it will work on the TPU. For more details on which models can run on the Edge TPU, please read this article by Google.

Install TensorFlow for Edge TPU

When asked if you want to enable the maximum operating frequency, answer y. The models we run are relatively lightweight for the TPU, and I have never seen it run very hot.

pi@raspberrypi:~ $ cd 
pi@raspberrypi:~ $ wget https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz -O edgetpu_api.tar.gz --trust-server-names
--2019-04-20 11:55:39-- https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz
Resolving dl.google.com (dl.google.com)... 172.217.10.78
[omitted]
edgetpu_api.tar.gz 100%[===================>] 7.88M 874KB/s in 9.3s
2019-04-20 11:55:49 (867 KB/s) - ‘edgetpu_api.tar.gz’ saved [8268747/8268747]
pi@raspberrypi:~ $ tar xzf edgetpu_api.tar.gz
pi@raspberrypi:~ $ cd edgetpu_api/
pi@raspberrypi:~/edgetpu_api $ bash ./install.sh
Would you like to enable the maximum operating frequency? Y/N
y
Using maximum operating frequency.
Installing library dependencies...
[omitted]
Installing Edge TPU Python API...
Processing ./edgetpu-1.9.2-py3-none-any.whl
Installing collected packages: edgetpu
Successfully installed edgetpu-1.9.2
# restart the pi just to complete the installation
pi@raspberrypi:~/edgetpu_api $ sudo reboot now

After reboot, let’s try to test it by running a live object detection program.

pi@raspberrypi:~ $ cd
# skip this clone if you already cloned DeepPiCar in the OpenCV step above
pi@raspberrypi:~ $ git clone https://github.com/dctian/DeepPiCar.git
pi@raspberrypi:~ $ cd DeepPiCar/models/object_detection/
pi@raspberrypi:~/DeepPiCar/models/object_detection $ python3 code/coco_object_detection.py 
W0420 12:36:55.728087 7001 package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10).
couch, 93% [[ 4.81752396 167.15803146]
[381.77787781 475.49484253]] 113.52ms
book, 66% [[456.68899536 145.12086868]
[468.8772583 212.99516678]] 113.52ms
book, 58% [[510.65818787 229.35571671]
[534.6181488 296.00133896]] 113.52ms
book, 58% [[444.65190887 222.51708984]
[467.33409882 290.39138794]] 113.52ms
book, 58% [[523.65917206 142.07738876]
[535.19741058 213.77527237]] 113.52ms
------
2019-04-20 12:36:57.025142: 7.97 FPS, 125.46ms total, 113.52ms in tf

You should see a live video screen come up, and the program will try to identify objects on the screen. Note that the COCO (Common Objects in Context) object detection model can detect about 100 common objects, such as a person, chair, TV, couch, book, laptop, or cell phone.
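For those curious how the Edge TPU is driven under the hood, below is a minimal sketch of single-image detection using the edgetpu 1.9.2 Python API installed earlier. The model file, label file, and test image names are assumptions borrowed from Google’s Coral examples; the repo’s coco_object_detection.py builds the live camera loop and on-screen drawing on top of this idea.

from PIL import Image
from edgetpu.detection.engine import DetectionEngine

MODEL = 'mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite'   # assumed file name
LABELS = 'coco_labels.txt'                                         # assumed file name

# build the label map; each line in the file looks like "<id> <label>"
labels = {}
with open(LABELS) as f:
    for line in f:
        pair = line.strip().split(maxsplit=1)
        labels[int(pair[0])] = pair[1]

engine = DetectionEngine(MODEL)                  # inference runs on the Edge TPU
img = Image.open('living_room.jpg')              # any test image (assumption)

for obj in engine.DetectWithImage(img, threshold=0.5, top_k=10):
    box = obj.bounding_box.flatten().tolist()    # [x1, y1, x2, y2] in relative coords
    print('%s, %d%% %s' % (labels[obj.label_id], obj.score * 100, box))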
