Face Recognition Demo in Mask-Wearing Scenarios (Part 2)


(2) The code to build R-Net is as follows:
# Imports used by the network builders below (Keras)
from keras.layers import Conv2D, Dense, Flatten, Input, MaxPool2D, Permute, PReLU
from keras.models import Model

# -----------------------------#
#   Stage 2 of MTCNN: R-Net
#   refines the candidate boxes
# -----------------------------#
def create_Rnet(weight_path):
    input = Input(shape=[24, 24, 3])
    # 24,24,3 -> 11,11,28
    x = Conv2D(28, (3, 3), strides=1, padding='valid', name='conv1')(input)
    x = PReLU(shared_axes=[1, 2], name='prelu1')(x)
    x = MaxPool2D(pool_size=3, strides=2, padding='same')(x)
    # 11,11,28 -> 4,4,48
    x = Conv2D(48, (3, 3), strides=1, padding='valid', name='conv2')(x)
    x = PReLU(shared_axes=[1, 2], name='prelu2')(x)
    x = MaxPool2D(pool_size=3, strides=2)(x)
    # 4,4,48 -> 3,3,64
    x = Conv2D(64, (2, 2), strides=1, padding='valid', name='conv3')(x)
    x = PReLU(shared_axes=[1, 2], name='prelu3')(x)
    # 3,3,64 -> 64,3,3
    x = Permute((3, 2, 1))(x)
    x = Flatten()(x)
    # 576 -> 128
    x = Dense(128, name='conv4')(x)
    x = PReLU(name='prelu4')(x)
    # classification: 128 -> 2, box regression: 128 -> 4
    classifier = Dense(2, activation='softmax', name='conv5-1')(x)
    bbox_regress = Dense(4, name='conv5-2')(x)
    model = Model([input], [classifier, bbox_regress])
    model.load_weights(weight_path, by_name=True)
    return model
(3) The code to build O-Net is as follows:
# -----------------------------#
#   Stage 3 of MTCNN: O-Net
#   refines the boxes and outputs the five facial landmarks
# -----------------------------#
def create_Onet(weight_path):
    input = Input(shape=[48, 48, 3])
    # 48,48,3 -> 23,23,32
    x = Conv2D(32, (3, 3), strides=1, padding='valid', name='conv1')(input)
    x = PReLU(shared_axes=[1, 2], name='prelu1')(x)
    x = MaxPool2D(pool_size=3, strides=2, padding='same')(x)
    # 23,23,32 -> 10,10,64
    x = Conv2D(64, (3, 3), strides=1, padding='valid', name='conv2')(x)
    x = PReLU(shared_axes=[1, 2], name='prelu2')(x)
    x = MaxPool2D(pool_size=3, strides=2)(x)
    # 10,10,64 -> 4,4,64
    x = Conv2D(64, (3, 3), strides=1, padding='valid', name='conv3')(x)
    x = PReLU(shared_axes=[1, 2], name='prelu3')(x)
    x = MaxPool2D(pool_size=2)(x)
    # 4,4,64 -> 3,3,128
    x = Conv2D(128, (2, 2), strides=1, padding='valid', name='conv4')(x)
    x = PReLU(shared_axes=[1, 2], name='prelu4')(x)
    # 3,3,128 -> 128,3,3
    x = Permute((3, 2, 1))(x)
    # 1152 -> 256
    x = Flatten()(x)
    x = Dense(256, name='conv5')(x)
    x = PReLU(name='prelu5')(x)
    # classification: 256 -> 2, box regression: 256 -> 4, landmark regression: 256 -> 10
    classifier = Dense(2, activation='softmax', name='conv6-1')(x)
    bbox_regress = Dense(4, name='conv6-2')(x)
    landmark_regress = Dense(10, name='conv6-3')(x)
    model = Model([input], [classifier, bbox_regress, landmark_regress])
    model.load_weights(weight_path, by_name=True)
    return model
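The two builders above only define the network graphs. In the full MTCNN cascade they run after P-Net: candidate boxes are cropped, resized and scored by R-Net, and the survivors are refined by O-Net, which also predicts the five landmarks. Below is a minimal sketch of that hand-off, not the demo's actual pipeline; the weight paths, the 0.6 score threshold, the /255.0 normalization, and the omission of NMS and box regression are all assumptions made for illustration.

import cv2
import numpy as np

# Hypothetical weight paths; the real paths depend on how the demo ships its model files.
rnet = create_Rnet('model_data/rnet.h5')
onet = create_Onet('model_data/onet.h5')

def refine_candidates(img, boxes, threshold=0.6):
    """Sketch: score P-Net candidates with R-Net, then refine survivors with O-Net."""
    # Assumed normalization; the real pipeline may scale pixels differently.
    crops24 = np.array([cv2.resize(img[y1:y2, x1:x2], (24, 24)) / 255.0
                        for x1, y1, x2, y2 in boxes])
    cls, _reg = rnet.predict(crops24)
    keep = [b for b, p in zip(boxes, cls[:, 1]) if p > threshold]
    if not keep:
        return [], None
    crops48 = np.array([cv2.resize(img[y1:y2, x1:x2], (48, 48)) / 255.0
                        for x1, y1, x2, y2 in keep])
    cls, _reg, landmarks = onet.predict(crops48)
    return keep, landmarks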
2. Detecting Facial Landmarks
Details:
This module mainly calls the dlib open-source library to extract 128 facial feature points. For an input masked-face image, where the nose and mouth are occluded by the mask, the model automatically fills in landmarks for those two regions during facial landmark extraction.
The code is as follows:
self.sp = dlib.shape_predictor('data/shape_predictor_128_face_landmarks.dat')
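As a hedged illustration of how this predictor is typically driven (the detector choice and the image path are assumptions; only the model file name comes from the demo), a face rectangle is passed to the predictor and the landmark coordinates are read out:

import cv2
import dlib

sp = dlib.shape_predictor('data/shape_predictor_128_face_landmarks.dat')
detector = dlib.get_frontal_face_detector()   # any face-box source would do; used here only for the sketch

img = cv2.cvtColor(cv2.imread('masked_face.jpg'), cv2.COLOR_BGR2RGB)   # hypothetical input image
rects = detector(img, 1)
if rects:
    shape = sp(img, rects[0])                 # landmarks of the first detected face
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print('extracted %d landmark points' % len(points))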
3. Face Encoding
Details: obtain the 128-dimensional feature encoding of the face, then, based on the matrix of face encodings, compute the distance between different faces.
The code is as follows:

def _get_img_face_encoding(self, fpath):
    """
    Get the face feature encoding for the image at the given path
    :param fpath: image path
    :return: 128-dimensional vector
    """
    img_x = cv2.imread(fpath)
    img_x = cv2.cvtColor(img_x, cv2.COLOR_BGR2RGB)
    item = self.face_detector.detect(img_x)
    assert item is not None, 'can not find the face box, please check %s' % fpath
    box, _ = item
    return self._get_face_feat(img_x, box)
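The helper self._get_face_feat called above is not part of this excerpt. A minimal sketch of what it might look like, assuming the encoding comes from dlib's ResNet face-recognition model (the attribute name self.facerec, the model file name, and the box-to-rectangle conversion are all assumptions):

def _get_face_feat(self, img, box):
    """Hypothetical sketch: detected box -> landmarks -> 128-d face descriptor."""
    rect = dlib.rectangle(int(box[0]), int(box[1]), int(box[2]), int(box[3]))
    shape = self.sp(img, rect)   # landmark shape from the predictor loaded in step 2
    # self.facerec is assumed to be dlib.face_recognition_model_v1('data/dlib_face_recognition_resnet_model_v1.dat')
    feat = self.facerec.compute_face_descriptor(img, shape)
    return np.array(feat)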
def _face_distance(self, face_encodings, face_to_compare):
    """Compute the distances between faces based on the matrix of face encodings"""
    if len(face_encodings) == 0:
        return np.empty((0))
    face_encodings = np.asarray(face_encodings)
    return np.linalg.norm(face_encodings - face_to_compare, axis=1)
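As a usage sketch (the variable names and the 0.6 threshold are assumptions, not values taken from the demo), the returned distances can be turned into a match decision like this:

known_encodings = [encoding_a, encoding_b]   # hypothetical 128-d encodings of registered faces
distances = self._face_distance(known_encodings, query_encoding)
best = int(np.argmin(distances))
if distances[best] < 0.6:                    # 0.6 is a commonly used dlib threshold, assumed here
    print('matched registered face %d (distance %.3f)' % (best, distances[best]))
else:
    print('no match found')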