Creating GIF animations using OpenCV

This tutorial will show you how to create animated GIF files using OpenCV, Python, and ImageMagick. Then we will combine these methods to build a meme generator with OpenCV!

We all need to laugh from time to time, and perhaps the best way to find lulz is memes. Some of my favorites:

- Kermit the Frog: "But that's none of my business"
- Grumpy Cat
- Epic Fail
- Good Guy Greg

But personally, none of these memes compare to the "Deal With It" meme, an example of which is given at the beginning of the article.

It is typically used in one of two ways:

- As a response or objection to someone who disapproves of something you did or said ("Deal with it")
- Putting on sunglasses as you walk away, leaving the person to deal with the problem on their own

A few years ago I read a fun blog post on how to generate such memes with computer vision, but last week I could not find the guide anywhere. So, as a blogger, computer vision expert, and meme connoisseur, I decided to write a tutorial myself! (By the way, if you happen to know the original source, please let me know so I can credit the author. UPD: I just found the original article on Kirk Kaiser's blog, MakeArtWithPython.)

Developing a meme generator with OpenCV will teach us a number of valuable practical skills, including:

- How to detect faces with deep learning techniques
- How to use the dlib library to locate facial landmarks and extract the eye regions
- How to compute the rotation angle between the eyes from that information
- And finally, how to generate animated GIFs with OpenCV (with a little help from ImageMagick)

This guide should be fun and entertaining, while also teaching you valuable computer vision skills that are useful in the real world.

Creating GIFs with OpenCV

In the first part of the guide, we will discuss the prerequisites and dependencies for this project, including how to properly configure your development environment.

Then we will review the project/directory structure for our OpenCV GIF generator.

Once we understand the project structure, we will review: 1) our configuration file; and 2) the Python script responsible for creating GIFs with OpenCV.

Finally, we will evaluate the results of the program on the popular "Deal With It" meme.

Prerequisites and dependencies

Fig. 1. To create GIFs we will use OpenCV, dlib, and ImageMagick

OpenCV and dlib

OpenCV is needed for face detection in the frame and basic image processing. If OpenCV is not installed on your system, follow one of my OpenCV installation guides.

dlib is used to detect facial landmarks, which will allow us to find the two eyes in a face and lower the sunglasses on top of them. You can install dlib using this instruction.

ImageMagick

If you are not familiar with ImageMagick, you should be. It is a cross-platform command line tool with a large set of image processing functions.

Want to convert a PNG/JPG to PDF with a single command? No problem.

Have several images you need to turn into a multi-page PDF? Easy.

Need to draw polygons, lines, and other shapes? Also possible.

What about batch color adjustment or resizing all images with a single command? No need to write several lines of Python with OpenCV for that.

ImageMagick can also generate GIF animations from any set of images.

To install ImageMagick on Ubuntu (or Raspbian), just use apt:

$ sudo apt-get install imagemagick

On macOS, you can use Homebrew:

$ brew install imagemagick

imutils

In most of my articles, courses, and books, I use my handy package of image processing convenience functions, imutils. It can be installed in your system or virtual environment using pip:

$ pip install imutils

Project structure

Fig. 2. The project structure includes two directories, a configuration file, and a Python script

There are two directories in our project:

- images/: examples of input images from which we want to make an animated GIF. I have included a few images of myself, but feel free to add your own.
- assets/: this folder contains our face detector, facial landmark predictor, and all images + associated masks. With these assets we will overlay the sunglasses and text onto the input images from the first folder.

Due to the large number of tunable parameters, I decided to create a JSON configuration file that: 1) makes editing parameters easier; and 2) requires fewer command line arguments. All configuration parameters required for this project are contained in config.json.

We will review the contents of config.json and create_gif.py (see the source code and the manual).

GIF generation with OpenCV

So let's go ahead and start building our OpenCV GIF generator!

The contents of the JSON configuration file

Let's start with the JSON configuration file, and then move on to the Python script.

Open a new file, name it config.json, and insert the following key/value pairs:

{
    "face_detector_prototxt": "assets/deploy.prototxt",
    "face_detector_weights": "assets/res10_300x300_ssd_iter_140000.caffemodel",
    "landmark_predictor": "assets/shape_predictor_68_face_landmarks.dat",

The first two lines are the model files for OpenCV's deep learning face detector.

The last line is the path to dlib's facial landmark predictor.

And now we have some image file paths:

    "sunglasses": "assets/sunglasses.png",
    "sunglasses_mask": "assets/sunglasses_mask.png",
    "deal_with_it": "assets/deal_with_it.png",
    "deal_with_it_mask": "assets/deal_with_it_mask.png",

These are the paths to our sunglasses, the text, and the corresponding masks for each, all of which are shown below.

First, the fancy sunglasses and their mask:

Fig. 3. Don't like pixelated sunglasses? Deal with it.

Fig. 4. Wondering why we need a mask for the sunglasses? Deal with it, or read the rest of the article to find out.

And now our "DEAL WITH IT" text and its mask:

Fig. 5. Hate Helvetica Neue Condensed? Deal with it.

Fig. 6. This mask lets us draw a border around the text. Oh, you don't want a border? Well, deal with it.

The masks are needed to overlay the corresponding images onto the photo; we will get to that later.

Now let's set some parameters for the meme generator:

    "min_confidence": 0.5,
    "steps": 20,
    "delay": 5,
    "final_delay": 250,
    "loop": 0,
    "temp_dir": "temp"
}

Here is what each parameter means:

- min_confidence: minimum required probability for a positive face detection.
- steps: number of frames in the final animation. Each "step" moves the sunglasses from the top of the frame down toward the target (i.e., the eyes).
- delay: delay between frames, in hundredths of a second.
- final_delay: delay for the last frame, in hundredths of a second (useful in this context, since we want the "DEAL WITH IT" text to stay visible longer than the other frames).
- loop: a value of zero means the GIF will loop forever; otherwise supply a positive integer for the number of times the animation should repeat.
- temp_dir: temporary directory where each frame will be stored before the final GIF is assembled.

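For reference, here is the full config.json assembled from the fragments in this section, validated by parsing it with Python's json module (the numeric values are the ones used in this tutorial; tune them to taste):

```python
import json

# the complete configuration assembled from the fragments above
CONFIG = """{
    "face_detector_prototxt": "assets/deploy.prototxt",
    "face_detector_weights": "assets/res10_300x300_ssd_iter_140000.caffemodel",
    "landmark_predictor": "assets/shape_predictor_68_face_landmarks.dat",
    "sunglasses": "assets/sunglasses.png",
    "sunglasses_mask": "assets/sunglasses_mask.png",
    "deal_with_it": "assets/deal_with_it.png",
    "deal_with_it_mask": "assets/deal_with_it_mask.png",
    "min_confidence": 0.5,
    "steps": 20,
    "delay": 5,
    "final_delay": 250,
    "loop": 0,
    "temp_dir": "temp"
}"""

# parsing proves the file is valid JSON before the script ever runs
config = json.loads(CONFIG)
print(len(config))
```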
Memes, GIFs, and OpenCV

We have created the JSON configuration file, so let's move on to the actual code.

Open a new file, name it create_gif.py, and paste the following code:

# import the necessary packages
from imutils import face_utils
from imutils import paths
import numpy as np
import argparse
import imutils
import shutil
import json
import dlib
import cv2
import sys
import os

Here we import our required packages. In particular, we will be using imutils, dlib, and OpenCV. To install these dependencies, see the "Prerequisites and dependencies" section above.

Now that the script has the required packages, let's define the overlay_image function:

def overlay_image(bg, fg, fgMask, coords):
    # grab the foreground spatial dimensions (width and height),
    # then unpack the coordinates tuple (i.e., where in the image
    # the foreground will be placed)
    (sH, sW) = fg.shape[:2]
    (x, y) = coords
    # the overlay should be the same width and height as the input
    # image, but *empty* everywhere except where we add the
    # foreground
    overlay = np.zeros(bg.shape, dtype="uint8")
    overlay[y:y + sH, x:x + sW] = fg
    # the alpha channel controls *where* and *how much*
    # transparency is applied; it has the same dimensions as the
    # input image, but contains only the foreground mask
    alpha = np.zeros(bg.shape[:2], dtype="uint8")
    alpha[y:y + sH, x:x + sW] = fgMask
    alpha = np.dstack([alpha] * 3)
    # perform alpha blending of the foreground, background, and
    # alpha channel
    output = alpha_blend(overlay, bg, alpha)
    # return the output image
    return output

The overlay_image function overlays the foreground image (fg) on top of the background image (bg) at the (x, y) coordinates given by coords, applying alpha transparency via the foreground mask fgMask.

To brush up on OpenCV basics such as working with masks, be sure to read this manual.

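To make the bookkeeping in overlay_image concrete, here is a toy NumPy sketch of the overlay/alpha construction step, using a 4x4 "background" and a 2x2 fully opaque "foreground" placed at (1, 1) (toy arrays, not real images):

```python
import numpy as np

bg = np.zeros((4, 4, 3), dtype="uint8")        # toy 4x4 background
fg = np.full((2, 2, 3), 255, dtype="uint8")    # toy 2x2 white foreground
fgMask = np.full((2, 2), 255, dtype="uint8")   # fully opaque mask
(x, y) = (1, 1)
(sH, sW) = fg.shape[:2]

# the overlay is empty everywhere except where the foreground goes
overlay = np.zeros(bg.shape, dtype="uint8")
overlay[y:y + sH, x:x + sW] = fg

# the alpha channel is built the same way from the mask, then
# stacked into 3 channels so it can weight each BGR channel
alpha = np.zeros(bg.shape[:2], dtype="uint8")
alpha[y:y + sH, x:x + sW] = fgMask
alpha = np.dstack([alpha] * 3)

print(overlay[1, 1].tolist(), alpha[0, 0].tolist())
```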
To finish the overlay, we perform alpha blending:

def alpha_blend(fg, bg, alpha):
    # convert the foreground, background, and alpha channel to
    # floating point values in the range [0, 1]
    fg = fg.astype("float")
    bg = bg.astype("float")
    alpha = alpha.astype("float") / 255
    # perform alpha blending
    fg = cv2.multiply(alpha, fg)
    bg = cv2.multiply(1 - alpha, bg)
    # add the foreground and background to obtain the final output
    output = cv2.add(fg, bg)
    # return the output image
    return output.astype("uint8")

This implementation of alpha blending is also covered on the LearnOpenCV blog.

In essence, we convert the foreground, background, and alpha channel to floating point numbers in the range [0, 1]. Then we weight the foreground and background by the alpha channel, add them to obtain the output, and return the result to the calling function.

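The blend itself is just a per-pixel weighted average. A pure-NumPy stand-in (for float arrays, cv2.multiply and cv2.add reduce to element-wise multiply and add) makes the endpoints easy to check:

```python
import numpy as np

def alpha_blend_np(fg, bg, alpha):
    # NumPy stand-in for the cv2-based alpha_blend above: on float
    # arrays, cv2.multiply/cv2.add are element-wise multiply/add
    fg = fg.astype("float")
    bg = bg.astype("float")
    alpha = alpha.astype("float") / 255
    return (alpha * fg + (1 - alpha) * bg).astype("uint8")

fg = np.full((1, 1, 3), 200, dtype="uint8")
bg = np.full((1, 1, 3), 100, dtype="uint8")

# alpha 255 keeps the pure foreground, alpha 0 the pure background
full = alpha_blend_np(fg, bg, np.full((1, 1, 3), 255, dtype="uint8"))
none = alpha_blend_np(fg, bg, np.zeros((1, 1, 3), dtype="uint8"))
print(int(full[0, 0, 0]), int(none[0, 0, 0]))
```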
We will also create a helper function that generates a GIF from a set of image paths using ImageMagick and the convert command:

def create_gif(inputPath, outputPath, delay, finalDelay, loop):
    # grab all image paths in the input directory
    imagePaths = sorted(list(paths.list_images(inputPath)))
    # remove the last image path in the list
    lastPath = imagePaths[-1]
    imagePaths = imagePaths[:-1]
    # construct the ImageMagick 'convert' command that will be used
    # to generate our output GIF, giving a larger delay to the final
    # frame (if so desired)
    cmd = "convert -delay {} {} -delay {} {} -loop {} {}".format(
        delay, " ".join(imagePaths), finalDelay, lastPath, loop,
        outputPath)
    os.system(cmd)

The create_gif function takes a set of images and assembles them into a GIF animation with the specified delay between frames and loop count. All of the heavy lifting is handled by ImageMagick; we simply wrap the convert command in a function that dynamically fills in the various parameters.

To see the available arguments of convert, refer to the documentation. You will see just how many features this command has!

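For example, with three hypothetical frame files in temp/ and the delay values from our configuration, create_gif would assemble this command string (the filenames here are made up for illustration; the real ones come from paths.list_images):

```python
# sketch of the command string create_gif builds; the frame
# filenames are hypothetical stand-ins for the temp/ directory
imagePaths = ["temp/00000000.jpg", "temp/00000001.jpg", "temp/00000002.jpg"]
lastPath = imagePaths[-1]
imagePaths = imagePaths[:-1]

cmd = "convert -delay {} {} -delay {} {} -loop {} {}".format(
    5, " ".join(imagePaths), 250, lastPath, 0, "out.gif")
print(cmd)
```

Note how the last frame gets its own -delay flag, which is exactly why create_gif splits it off from the rest of the list.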
Specifically, in this function we:

- Grab the imagePaths.
- Take the last image path, which will have its own separate delay.
- Reassign imagePaths to exclude that last path.
- Assemble the command string with command line arguments, and then instruct the operating system to execute convert to create the GIF.

Next, let's give the script its own command line arguments:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--config", required=True,
    help="path to configuration file")
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-o", "--output", required=True,
    help="path to output GIF")
args = vars(ap.parse_args())

We have three command line arguments, which are processed at run time:

- --config: the path to the JSON configuration file. We covered the configuration file in the previous section.
- --image: the path to the input image on which the animation is built (i.e., where face detection, sunglasses, and then text are applied).
- --output: the path to the output GIF.

Each of these arguments is required when running the script in a command line/terminal.

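You can exercise the parser without touching the real command line by passing argparse a sample argv list; this shows the dictionary the script will work with:

```python
import argparse

# same parser as in create_gif.py; here we feed it a sample argv
# list instead of reading the real command line
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--config", required=True,
    help="path to configuration file")
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-o", "--output", required=True,
    help="path to output GIF")

args = vars(ap.parse_args(
    ["--config", "config.json", "--image", "images/adrian.jpg",
     "--output", "adrian_out.gif"]))
print(args)
```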
Let's load the configuration file, as well as the sunglasses and their mask:

# load the JSON configuration file, then load the sunglasses image
# and the corresponding mask
config = json.loads(open(args["config"]).read())
sg = cv2.imread(config["sunglasses"])
sgMask = cv2.imread(config["sunglasses_mask"])
# delete any existing temporary directory (if it exists) and then
# create a new, empty directory where we will store each individual
# frame of the GIF animation
shutil.rmtree(config["temp_dir"], ignore_errors=True)
os.makedirs(config["temp_dir"])

Here we load the configuration file (which from now on can be accessed like a Python dictionary). Then we load the sunglasses image and its mask.

In case anything is left over from a previous run of the script, we remove the temporary directory and then recreate it empty. The temporary folder will hold each individual frame of the GIF animation.

Now let's load OpenCV's deep learning face detector into memory:

# load our OpenCV face detector and dlib facial landmark predictor
print("[INFO] loading models")
detector = cv2.dnn.readNetFromCaffe(config["face_detector_prototxt"],
    config["face_detector_weights"])
predictor = dlib.shape_predictor(config["landmark_predictor"])

To load the face detector, we call cv2.dnn.readNetFromCaffe. The dnn module is only available in OpenCV 3.3 or later. The face detector will find the presence of faces in the image:

Fig. 7. The OpenCV DNN face detector at work

Then we load dlib's facial landmark predictor. It allows us to localize individual facial structures: the eyes, eyebrows, nose, mouth, and jawline:

Fig. 8. Facial landmarks on my face, found by dlib

Later in this script, we will extract only the eye regions.

Moving on, let's detect the face:

# load the input image and construct an input blob from it
image = cv2.imread(args["image"])
(H, W) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
    (300, 300), (104.0, 177.0, 123.0))
# pass the blob through the neural network and obtain the detections
print("[INFO] computing object detections")
detector.setInput(blob)
detections = detector.forward()
# we only need one face to apply the sunglasses, so find the
# detection with the largest probability
i = np.argmax(detections[0, 0, :, 2])
confidence = detections[0, 0, i, 2]
# filter out weak detections
if confidence < config["min_confidence"]:
    print("[INFO] no reliable faces found")
    sys.exit(0)

In this block we:

- Load the input image.
- Construct a blob to send through the face detector neural network. You can learn how OpenCV's blobFromImage works in this article.
- Perform face detection.
- Find the face with the largest probability and check it against the minimum confidence threshold. If the criteria are not met, we simply exit the script. Otherwise, we continue.

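The largest-probability selection operates on the raw detections array, where the confidence of each candidate lives at index 2 of the last axis. A toy sketch with a fake detections array (the shape mimics the detector's output; the values are made up):

```python
import numpy as np

# fake detector output with shape 1 x 1 x N x 7; column 2 holds the
# confidence of each candidate detection (values are made up)
detections = np.zeros((1, 1, 3, 7))
detections[0, 0, :, 2] = [0.1, 0.92, 0.4]

i = np.argmax(detections[0, 0, :, 2])   # index of the most confident face
confidence = detections[0, 0, i, 2]
print(int(i), float(confidence))
```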
Now let's extract the face and compute the facial landmarks:

# compute the (x, y)-coordinates of the bounding box for the face
box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
(startX, startY, endX, endY) = box.astype("int")
# construct a dlib rectangle object from our bounding box coordinates
# and then determine the facial landmarks inside it
rect = dlib.rectangle(int(startX), int(startY), int(endX), int(endY))
shape = predictor(image, rect)
shape = face_utils.shape_to_np(shape)
# grab the indexes of the facial landmarks for the left and right
# eye, respectively, then extract the coordinates of each eye
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
leftEyePts = shape[lStart:lEnd]
rightEyePts = shape[rStart:rEnd]

To extract the face and find the facial landmarks, we:

- Retrieve the coordinates of the bounding box around the face.
- Construct a dlib rectangle object and apply facial landmark localization.
- Extract the (x, y)-coordinates of leftEyePts and rightEyePts, respectively.

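In the 68-point model, imutils.face_utils.FACIAL_LANDMARKS_IDXS maps "left_eye" to the index range (42, 48) and "right_eye" to (36, 42), so each slice yields six (x, y) points. A sketch with a dummy landmark array:

```python
import numpy as np

# eye index ranges of the 68-point model, as defined in
# imutils.face_utils.FACIAL_LANDMARKS_IDXS
(lStart, lEnd) = (42, 48)   # "left_eye"
(rStart, rEnd) = (36, 42)   # "right_eye"

shape = np.arange(68 * 2).reshape(68, 2)   # dummy 68 (x, y) landmarks
leftEyePts = shape[lStart:lEnd]
rightEyePts = shape[rStart:rEnd]
print(leftEyePts.shape, rightEyePts.shape)
```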
Given the coordinates of the eyes, we can calculate where and how to place the sunglasses:

# compute the center of mass for each eye
leftEyeCenter = leftEyePts.mean(axis=0).astype("int")
rightEyeCenter = rightEyePts.mean(axis=0).astype("int")
# compute the angle between the eye centroids
dY = rightEyeCenter[1] - leftEyeCenter[1]
dX = rightEyeCenter[0] - leftEyeCenter[0]
angle = np.degrees(np.arctan2(dY, dX)) - 180
# rotate the sunglasses image by our computed angle, ensuring the
# sunglasses match the tilt of the head
sg = imutils.rotate_bound(sg, angle)
# the sunglasses should not cover the *entire* width of the face,
# and ideally just the eyes -- here we make a rough estimate and use
# 90% of the face width for the sunglasses width
sgW = int((endX - startX) * 0.9)
sg = imutils.resize(sg, width=sgW)
# our sunglasses contain transparent parts (the bottom, under the
# lenses and nose), so to achieve a nice result we need to apply a
# mask with alpha blending -- here we binarize the mask and give it
# the same treatment as the sunglasses above
sgMask = cv2.cvtColor(sgMask, cv2.COLOR_BGR2GRAY)
sgMask = cv2.threshold(sgMask, 0, 255, cv2.THRESH_BINARY)[1]
sgMask = imutils.rotate_bound(sgMask, angle)
sgMask = imutils.resize(sgMask, width=sgW, inter=cv2.INTER_NEAREST)

First we compute the center of each eye, and then the angle between the centroids, just as if we were performing horizontal face alignment.

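The angle is simply arctan2 applied to the vector between the eye centers (the -180 offset accounts for the orientation of the sunglasses image). A quick numeric check with made-up eye centers:

```python
import numpy as np

# made-up eye centers: the right eye sits 10 px lower than the left
leftEyeCenter = np.array([150, 100])
rightEyeCenter = np.array([100, 110])

dY = rightEyeCenter[1] - leftEyeCenter[1]
dX = rightEyeCenter[0] - leftEyeCenter[0]
angle = np.degrees(np.arctan2(dY, dX)) - 180
print(round(float(angle), 1))
```

A tilt of about -11 degrees here means the sunglasses (and the mask) get rotated by the same amount before being overlaid.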
Next we rotate and resize the sunglasses. Note that we use the rotate_bound function, not just rotate, so that OpenCV does not clip off parts that fall out of view after the affine transformation.

The same operations that we applied to the sunglasses are applied to the mask. But first the mask must be converted to grayscale and binarized, since masks are always binary. Then we rotate and resize the mask exactly as we did the sunglasses.

Note: When resizing the mask we use nearest-neighbor interpolation, because the mask should only ever contain two values (0 and 255). Other interpolation methods may look more aesthetically pleasing, but they introduce intermediate values and are therefore unsuitable for masks. Here you can get more information about nearest-neighbor interpolation.

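To see why nearest-neighbor matters, here is a minimal NumPy implementation of nearest-neighbor resizing (the idea behind INTER_NEAREST, written out purely for illustration) applied to a tiny binary mask; the upscaled mask still contains only the two original values:

```python
import numpy as np

def resize_nearest(mask, new_w, new_h):
    # minimal nearest-neighbor resize: each output pixel copies the
    # closest input pixel, so no new values are ever created
    (h, w) = mask.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return mask[rows][:, cols]

mask = np.array([[0, 255], [255, 0]], dtype="uint8")
big = resize_nearest(mask, 4, 4)

# the upscaled mask is still strictly binary
print(big.shape, np.unique(big).tolist())
```

A bilinear resize of the same mask would produce values between 0 and 255 along the edges, which would bleed the background into the overlay.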
The remaining three code blocks create the frames of the GIF animation:

# the sunglasses will drop down from the top of the frame, so
# compute N equally spaced steps between the top of the frame and
# the final resting position
steps = np.linspace(0, rightEyeCenter[1], config["steps"],
    dtype="int")
# start looping over the steps
for (i, y) in enumerate(steps):
    # compute a small shift to the left and up; the sunglasses
    # should *not* start exactly at the eye center, and this offset
    # lets them cover the entire required area
    shiftX = int(sg.shape[1] * 0.25)
    shiftY = int(sg.shape[0] * 0.35)
    y = max(0, y - shiftY)
    # add the sunglasses to the image
    output = overlay_image(image, sg, sgMask,
        (rightEyeCenter[0] - shiftX, y))

The sunglasses drop down from the top of the image. In each frame, they appear closer and closer to the face until they cover the eyes. Using the "steps" variable from the JSON configuration file, we generate the y-coordinate for each frame. NumPy's linspace function makes this effortless.

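For example, with "steps" set to 5 and an eye center at y = 100 (toy numbers; the tutorial uses 20 steps), linspace yields the per-frame y-coordinates:

```python
import numpy as np

# 5 equally spaced y-coordinates from the top of the frame (y = 0)
# down to the eye center (y = 100)
steps = np.linspace(0, 100, 5, dtype="int")
print(steps.tolist())
```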
The lines computing a slight shift to the left and up may seem a bit strange, but they ensure that the sunglasses cover the eyes entirely rather than just reaching the point where the eye center is. I determined the percentage values for the shift along each axis empirically. The max call ensures the y-value never goes negative.

Using the overlay_image function, we generate the final frame, output.

Now let's apply the "DEAL WITH IT" text using another mask:

    # if this is the final step, then we need to add the
    # "DEAL WITH IT" text to the bottom of the frame
    if i == len(steps) - 1:
        # load both the "DEAL WITH IT" image and its mask, then
        # ensure the mask is binarized
        dwi = cv2.imread(config["deal_with_it"])
        dwiMask = cv2.imread(config["deal_with_it_mask"])
        dwiMask = cv2.cvtColor(dwiMask, cv2.COLOR_BGR2GRAY)
        dwiMask = cv2.threshold(dwiMask, 0, 255,
            cv2.THRESH_BINARY)[1]
        # resize the text and mask to 80% of the width of the
        # output image
        oW = int(W * 0.8)
        dwi = imutils.resize(dwi, width=oW)
        dwiMask = imutils.resize(dwiMask, width=oW,
            inter=cv2.INTER_NEAREST)
        # compute the coordinates of where the text will go on the
        # output image and then add it
        oX = int(W * 0.1)
        oY = int(H * 0.8)
        output = overlay_image(output, dwi, dwiMask, (oX, oY))

On the final step, we overlay the text, which is in fact another image.

I decided to use an image because OpenCV's font rendering capabilities are quite limited. Besides, I wanted to add a shadow and a border around the text, which, again, OpenCV cannot do.

In the rest of the code block above, we load both the image and the mask, and then perform alpha blending to generate the final output frame.

All that is left is to save each frame to disk and then create the GIF animation:

    # write the output frame to our temporary directory
    p = os.path.sep.join([config["temp_dir"], "{}.jpg".format(
        str(i).zfill(8))])
    cv2.imwrite(p, output)
# now that all of our frames reside on disk, we can create our
# GIF animation
print("[INFO] creating GIF")
create_gif(config["temp_dir"], args["output"], config["delay"],
    config["final_delay"], config["loop"])
# cleanup -- delete our temporary directory
print("[INFO] cleaning up")
shutil.rmtree(config["temp_dir"], ignore_errors=True)

We write each frame to disk. Once all frames have been generated, we call the create_gif function to build the GIF animation file. Remember, this is a wrapper that passes parameters to ImageMagick's convert command line tool.

Finally, we delete the temporary output directory and the individual image files.

Results

Now for the fun part: let's see what our meme generator has created!

Make sure you have downloaded the source code, sample images, and deep learning models. Then open a terminal and execute the following command:

$ python create_gif.py --config config.json --image images/adrian.jpg --output adrian_out.gif
[INFO] loading models
[INFO] computing object detections
[INFO] creating GIF
[INFO] cleaning up

Fig. 9. GIF animation generated with OpenCV and ImageMagick by our Python script

Here you can see the GIF created with OpenCV and ImageMagick. It shows that:

- My face was correctly detected.
- The eyes were localized and their centers computed.
- The sunglasses correctly drop down onto my face.

Readers of my blog know that I am a big Jurassic Park nerd and often mention the film in my books, courses, and tutorials.

Don't like Jurassic Park?

Well, here's my answer:

$ python create_gif.py --config config.json --image images/adrian_jp.jpg --output adrian_jp_out.gif
[INFO] loading models
[INFO] computing object detections
[INFO] creating GIF
[INFO] cleaning up

Fig. 10. An OpenCV GIF animation made from a photo taken at a recent screening of Jurassic World 2

Here I am at a screening of Jurassic World 2, in a themed shirt, with a glass and a collector's book.

A funny story:

Five or six years ago, my wife and I visited the Epcot Center theme park at Walt Disney World in Florida.

We took the trip to get away from the harsh Connecticut winter; we desperately needed sunlight.

Unfortunately, it rained the entire time we were in Florida, and the temperature barely exceeded 10°C.

Near the Canada Gardens, Trisha took a picture of me: she says I look like a vampire, with my pale skin, dark clothes, and hood up, set against the lush gardens behind me:

$ python create_gif.py --config config.json --image images/vampire.jpg --output vampire_out.gif
[INFO] loading models
[INFO] computing object detections
[INFO] creating GIF
[INFO] cleaning up

Fig. 11. With OpenCV and Python, you can make this meme or any other animated GIF

Trisha posted the photo on social media that evening; I had to deal with it.

Those of you who attended PyImageConf 2018 (read the review) know that I am always up for a joke. Here's one:

Q: Why did the rooster cross the road?

$ python create_gif.py --config config.json --image images/rooster.jpg --output rooster_out.gif
[INFO] loading models
[INFO] computing object detections
[INFO] creating GIF
[INFO] cleaning up

Fig. 12. The face is recognized even at low contrast, and OpenCV correctly processes the photo and lowers the sunglasses

A: I'm not telling you the answer. Deal with it.

Finally, let's wrap up today's guide with a good-hearted meme.

About six years ago, my dad and I adopted a little beagle named Gemma.

Here you can see little Gemma on my shoulder:

$ python create_gif.py --config config.json --image images/pupper.jpg --output pupper_out.gif
[INFO] loading models
[INFO] computing object detections
[INFO] creating GIF
[INFO] cleaning up

Fig. 13. Gemma is adorable. Don't you think so? Then "deal with it"!

Don't agree that she's cute? Deal with it.

Getting an AttributeError?

Don't worry!

If you see this error:

$ python create_gif.py --config config.json --image images/adrian.jpg --output adrian_out.gif
Traceback (most recent call last):
  File "create_gif.py", line 14?, in <module>
    (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
AttributeError: module 'imutils.face_utils' has no attribute 'FACIAL_LANDMARKS_IDXS'

then you simply need to upgrade the imutils package:

$ pip install --upgrade imutils
Collecting imutils
Successfully installed imutils-???

Why does this happen?

By default, imutils.face_utils uses the 68-point landmark detector built into dlib (as does this article). There is also a faster 5-point detector that now works with imutils too. I recently updated imutils to support both detectors, which is why you may see this error.


Summary

In today's tutorial, you learned how to create GIFs using OpenCV.

To keep the lesson fun, we used OpenCV to generate the "Deal With It" GIF animation, a popular meme (and my personal favorite) that is found in some form on almost every social media site.

In the process, we used computer vision and deep learning to solve several practical problems:

- Face detection
- Facial landmark prediction
- Extracting regions of the face (in this case, the eyes)
- Computing the angle between the eyes for face alignment
- Creating transparent overlays via alpha blending

Finally, we took our set of generated images and created an animated GIF using OpenCV and ImageMagick.

I hope you enjoyed today's lesson!

If you liked it, please leave a comment and let me know.

And if you didn't like it, no matter, just deal with it. ;)