Three Ways to Do Face Detection

Face detection is one of the most popular fields in computer vision. Recently I made some demos for face detection. Three approaches have been tested: opencv-python, the Face++ API, and MTCNN.

What is face detection?

Face detection is no longer a hard problem these days, thanks to the rapid development of deep learning. Many companies like Face++, Apple, Google, and Baidu have powerful face detection algorithms. Face detection is familiar not only in academia but also to ordinary people: if you have an iPhone X, you use face detection every day.
With face detection, many higher-level tasks like gender/age analysis and smile intensity detection become accessible. Here is a video from AdMobilize.

Use opencv-python

I assume you know how to use Python in this chapter. If not, see the next chapter.
Opencv-python is a powerful image processing library.

pip install opencv-python

After installing it, you can use the cv2 library to build many demos. OpenCV ships with a built-in face detector based on Haar features.
Download haarcascade_frontalface_alt.xml first.

import cv2

# load the built-in Haar cascade face detector
face_detector = cv2.CascadeClassifier('./haarcascade_frontalface_alt.xml')

img_test = cv2.imread('./face_test.jpg', cv2.IMREAD_GRAYSCALE)
faces = face_detector.detectMultiScale(img_test, minSize=(64, 64))

# show
color = (0,255,0)  
img_show = cv2.imread('./face_test.jpg',cv2.IMREAD_COLOR)
if len(faces)>0:
    for faceRect in faces: 
        x, y, w, h = faceRect
        cv2.rectangle(img_show, (x, y), (x+w, y+h), color) 
cv2.imshow('face test image', img_show)
key = cv2.waitKey(0)
if key == 27:  # Esc key
    cv2.destroyAllWindows()

You will get a result like this. face1.png
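If you want to compare the detections from the three methods in this post, a small intersection-over-union (IoU) helper is handy. This is just a sketch of my own (the function name is mine); it uses the (x, y, w, h) box format that detectMultiScale returns.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # corners of the intersection rectangle
    x1 = max(ax, bx)
    y1 = max(ay, by)
    x2 = min(ax + aw, bx + bw)
    y2 = min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 100, 100), (50, 0, 100, 100)))  # half-overlapping boxes -> 1/3
```

Two identical boxes give an IoU of 1.0, and disjoint boxes give 0.0, so a threshold around 0.5 is a common way to decide that two detectors found the same face.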

Use Face++ API

Face++ is a Chinese company that is really strong in this field, and fortunately they offer a free face detection API. Before using it, you should register on Face++.
Here is a demo using Python 3 (the official demo is in Python 2).

import requests
from json import JSONDecoder
import datetime
import cv2

# read and show
img = cv2.imread('./face_test4.jpg', cv2.IMREAD_COLOR)
cv2.namedWindow("original image")
cv2.imshow("original image", img)

http_url = ''  # fill in the Face++ detect API endpoint here

# user info
key = "xxx"     # put your key here
secret = "xxx"  # put your secret here

# path
filepath = r"E:/Github/GAF/face_test4.jpg"
data = {"api_key": key, "api_secret": secret, "return_gesture": "1"}

files = {"image_file": open(filepath, "rb")}

starttime = datetime.datetime.now()
response = requests.post(http_url, data=data, files=files)
endtime = datetime.datetime.now()
print((endtime - starttime).seconds)

req_con = response.content.decode('utf-8')
req_dict = JSONDecoder().decode(req_con)

faces = req_dict["faces"]
faceNum = len(faces)
print("total %d faces" %(faceNum))

for i in range(faceNum):
    face_rectangle = faces[i]['face_rectangle']
    width = face_rectangle['width']
    top = face_rectangle['top']
    left = face_rectangle['left']
    height = face_rectangle['height']
    start = (left, top)
    end = (left+width, top+height)
    color = (55, 255, 155)
    thickness = 3
    cv2.rectangle(img, start, end, color, thickness)

cv2.namedWindow("Face Detection")
cv2.imshow("Face Detection", img)
cv2.waitKey(0)

You will get a result like this. FACE2.png
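The JSON handling above can be tried offline without an API key. Here is a sketch using the standard-library json module (json.loads does the same job as JSONDecoder().decode) on a hand-made response in the same shape as a real one; the sample values are made up.

```python
import json

# a fabricated response with the same structure as the Face++ detect result
sample = '{"faces": [{"face_rectangle": {"width": 80, "top": 30, "left": 40, "height": 90}}]}'
req_dict = json.loads(sample)

faces = req_dict["faces"]
print("total %d faces" % len(faces))

for face in faces:
    rect = face["face_rectangle"]
    start = (rect["left"], rect["top"])                              # top-left corner
    end = (rect["left"] + rect["width"], rect["top"] + rect["height"])  # bottom-right corner
    print(start, end)
```

The (start, end) pair is exactly what cv2.rectangle expects, which is why the demo converts the width/height fields into a second corner.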


Use MTCNN

If you are a deep learning learner, maybe you prefer CNNs. MTCNN is a very good paper for face detection and alignment. I only tested face detection here.

# coding: utf-8

import tensorflow as tf
import numpy as np
import cv2
import detect_face

# face detection parameters
minsize = 20  # minimum size of face
threshold = [0.6, 0.7, 0.7]  # thresholds for the three stages (P-Net, R-Net, O-Net)
factor = 0.709 # scale factor

def to_rgb(img):
  w, h = img.shape
  ret = np.empty((w, h, 3), dtype=np.uint8)
  ret[:, :, 0] = ret[:, :, 1] = ret[:, :, 2] = img
  return ret

print('Creating networks and loading parameters')
gpu_memory_fraction = 1.0
with tf.Graph().as_default():
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False))
    with sess.as_default():
        pnet, rnet, onet = detect_face.create_mtcnn(sess, './model_check_point/')

## face detect

frame = cv2.imread('../face_test_img/face_test3.jpg')
find_results = []
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

if gray.ndim == 2:
    img = to_rgb(gray)

bounding_boxes, _ = detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor)

nrof_faces = bounding_boxes.shape[0]  # number of faces
print('Total faces:{}'.format(nrof_faces))

for face_position in bounding_boxes:
    face_position = face_position.astype(int)

    cv2.rectangle(frame, (face_position[0], face_position[1]),
                  (face_position[2], face_position[3]),  (0, 255, 0), 2)

    # crop = img[face_position[1]:face_position[3],face_position[0]:face_position[2],]

cv2.imshow('face_test', frame)
key = cv2.waitKey(0)
if key == 27:  # Esc key
    cv2.destroyAllWindows()

You will get a result like this. face4.png
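In this implementation, each row of bounding_boxes holds corner coordinates plus a confidence score, [x1, y1, x2, y2, score], while the OpenCV demo earlier used (x, y, w, h). A tiny conversion helper (the function name is mine) makes the two formats interchangeable, using the same int truncation as astype(int) above:

```python
def corners_to_xywh(box):
    """Convert an MTCNN-style [x1, y1, x2, y2, ...] box to (x, y, w, h)."""
    x1, y1, x2, y2 = [int(v) for v in box[:4]]
    return (x1, y1, x2 - x1, y2 - y1)

print(corners_to_xywh([40.2, 30.7, 120.9, 121.3, 0.99]))  # -> (40, 30, 80, 91)
```

With this, the IoU helper from the OpenCV section can score the agreement between the Haar and MTCNN detections directly.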

That's all, I hope this helps you. :smiling_imp: :smiling_imp: :smiling_imp:

