How to Build a Face Recognition System with DL4J

This article walks through building a face recognition system with DeepLearning4J (DL4J); hopefully you will pick up something useful along the way.


1. Overview

Face recognition is essentially a similarity problem: faces of the same person map to nearby points in an embedding space, where "nearby" can be measured with cosine distance, Euclidean distance, or some other metric. Below are three portraits.

(Three portrait photos, labeled A, B, and C.)

Clearly A and C are the same face while A and B are not. How do we state that mathematically? Given a distance function d(x1, x2), we expect d(A, B) > d(A, C). In a real face recognition application, how small does d(x1, x2) have to be before two images count as the same face? That threshold depends on the parameters used when training the model and is given below. Note that when d is cosine similarity, a larger value means more similar. A general face recognition model therefore consists of two units: feature extraction (the embedding) and distance computation.
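
For intuition, ND4J's Transforms helper can compute both metrics directly. A minimal sketch, using random placeholder vectors rather than real face embeddings:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

INDArray embA = Nd4j.rand(1, 128);  // placeholder 128-d embeddings
INDArray embC = Nd4j.rand(1, 128);

double cosine    = Transforms.cosineSim(embA, embC);         // higher = more similar
double euclidean = Transforms.euclideanDistance(embA, embC); // lower  = more similar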

2. Building the Model

So how do we build the feature mapping? For image processing, a convolutional neural network is currently the best tool, and DeepLearning4J ships a pretrained VGG-Face model based on VGG-16. It can be downloaded from https://dl4jdata.blob.core.windows.net/models/vgg16_dl4j_vggface_inference.v1.zip. How was that URL found? Step into the VGG16 source: the DL4JResources.getURLString call inside the pretrainedUrl method holds the download address, and the URLs of other pretrained models (VGG19, ResNet50, and so on) can be found the same way. The source looks like this:

public class VGG16 extends ZooModel {

    @Builder.Default private long seed = 1234;
    @Builder.Default private int[] inputShape = new int[] {3, 224, 224};
    @Builder.Default private int numClasses = 0;
    @Builder.Default private IUpdater updater = new Nesterovs();
    @Builder.Default private CacheMode cacheMode = CacheMode.NONE;
    @Builder.Default private WorkspaceMode workspaceMode = WorkspaceMode.ENABLED;
    @Builder.Default private ConvolutionLayer.AlgoMode cudnnAlgoMode = ConvolutionLayer.AlgoMode.PREFER_FASTEST;

    private VGG16() {}

    @Override
    public String pretrainedUrl(PretrainedType pretrainedType) {
        if (pretrainedType == PretrainedType.IMAGENET)
            return DL4JResources.getURLString("models/vgg16_dl4j_inference.zip");
        else if (pretrainedType == PretrainedType.CIFAR10)
            return DL4JResources.getURLString("models/vgg16_dl4j_cifar10_inference.v1.zip");
        else if (pretrainedType == PretrainedType.VGGFACE)
            return DL4JResources.getURLString("models/vgg16_dl4j_vggface_inference.v1.zip");
        else
            return null;
    }

    // ... (builder boilerplate and remaining members omitted)
}
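
Rather than downloading by hand, the zoo API can fetch and cache the model for you. A minimal sketch (initPretrained throws IOException):

import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.zoo.PretrainedType;
import org.deeplearning4j.zoo.model.VGG16;

// Downloads vgg16_dl4j_vggface_inference.v1.zip on first use and caches it locally
ComputationGraph vggFace =
        (ComputationGraph) VGG16.builder().build().initPretrained(PretrainedType.VGGFACE);
System.out.println(vggFace.summary());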

The VGG-16 model structure is as follows:

====================================================================================================
VertexName (VertexType)        nIn,nOut     TotalParams   ParamsShape                  Vertex Inputs
====================================================================================================
input_1 (InputVertex)          -,-          -             -                            -            
conv1_1 (ConvolutionLayer)     3,64         1,792         W:{64,3,3,3}, b:{1,64}       [input_1]    
conv1_2 (ConvolutionLayer)     64,64        36,928        W:{64,64,3,3}, b:{1,64}      [conv1_1]    
pool1 (SubsamplingLayer)       -,-          0             -                            [conv1_2]    
conv2_1 (ConvolutionLayer)     64,128       73,856        W:{128,64,3,3}, b:{1,128}    [pool1]      
conv2_2 (ConvolutionLayer)     128,128      147,584       W:{128,128,3,3}, b:{1,128}   [conv2_1]    
pool2 (SubsamplingLayer)       -,-          0             -                            [conv2_2]    
conv3_1 (ConvolutionLayer)     128,256      295,168       W:{256,128,3,3}, b:{1,256}   [pool2]      
conv3_2 (ConvolutionLayer)     256,256      590,080       W:{256,256,3,3}, b:{1,256}   [conv3_1]    
conv3_3 (ConvolutionLayer)     256,256      590,080       W:{256,256,3,3}, b:{1,256}   [conv3_2]    
pool3 (SubsamplingLayer)       -,-          0             -                            [conv3_3]    
conv4_1 (ConvolutionLayer)     256,512      1,180,160     W:{512,256,3,3}, b:{1,512}   [pool3]      
conv4_2 (ConvolutionLayer)     512,512      2,359,808     W:{512,512,3,3}, b:{1,512}   [conv4_1]    
conv4_3 (ConvolutionLayer)     512,512      2,359,808     W:{512,512,3,3}, b:{1,512}   [conv4_2]    
pool4 (SubsamplingLayer)       -,-          0             -                            [conv4_3]    
conv5_1 (ConvolutionLayer)     512,512      2,359,808     W:{512,512,3,3}, b:{1,512}   [pool4]      
conv5_2 (ConvolutionLayer)     512,512      2,359,808     W:{512,512,3,3}, b:{1,512}   [conv5_1]    
conv5_3 (ConvolutionLayer)     512,512      2,359,808     W:{512,512,3,3}, b:{1,512}   [conv5_2]    
pool5 (SubsamplingLayer)       -,-          0             -                            [conv5_3]    
flatten (PreprocessorVertex)   -,-          -             -                            [pool5]      
fc6 (DenseLayer)               25088,4096   102,764,544   W:{25088,4096}, b:{1,4096}   [flatten]    
fc7 (DenseLayer)               4096,4096    16,781,312    W:{4096,4096}, b:{1,4096}    [fc6]        
fc8 (DenseLayer)               4096,2622    10,742,334    W:{4096,2622}, b:{1,2622}    [fc7]        
----------------------------------------------------------------------------------------------------
            Total Parameters:  145,002,878
        Trainable Parameters:  145,002,878
           Frozen Parameters:  0

For VGG-Face we only need the convolution and pooling layers at the front to extract features; the fully connected layers can be discarded. Our model can therefore be laid out as follows.

(Diagram: the target model, two inputs feeding a shared VGG-16 convolution/pooling stack followed by a distance computation.)

Note: StackVertex and UnstackVertex are used here because, by default, when a DL4J layer has multiple inputs the tensors are merged together before being fed in, which does not let several inputs share one set of weights. So we first stack the tensors along dimension 0 with StackVertex, run the shared convolution/pooling feature extractor once, and then split the tensor apart again with UnstackVertex for the distance computation that follows; the sketch below shows the equivalent tensor operations.
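
Conceptually, the stack/unstack pair is equivalent to the following ND4J operations (a sketch with a hypothetical batch size of 4):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.NDArrayIndex;

INDArray a = Nd4j.rand(new int[]{4, 3, 224, 224});          // input1 batch
INDArray b = Nd4j.rand(new int[]{4, 3, 224, 224});          // input2 batch

INDArray stacked = Nd4j.concat(0, a, b);                    // StackVertex: [8, 3, 224, 224]
// ... the shared conv/pool stack runs once over "stacked" ...
INDArray part1 = stacked.get(NDArrayIndex.interval(0, 4));  // what UnstackVertex(0, 2) recovers
INDArray part2 = stacked.get(NDArrayIndex.interval(4, 8));  // what UnstackVertex(1, 2) recovers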

The next problem: DL4J's transfer learning API can only append structure at the tail of a model, whereas our scenario puts part of the pretrained model in the middle. What to do? No need to panic; let's read the transfer learning API source and see how DL4J wires it up internally. The build method of org.deeplearning4j.nn.transferlearning.TransferLearning gives the clue.

public ComputationGraph build() {
            initBuilderIfReq();

            ComputationGraphConfiguration newConfig = editedConfigBuilder
                    .validateOutputLayerConfig(validateOutputLayerConfig == null ? true : validateOutputLayerConfig).build();
            if (this.workspaceMode != null)
                newConfig.setTrainingWorkspaceMode(workspaceMode);
            ComputationGraph newGraph = new ComputationGraph(newConfig);
            newGraph.init();

            int[] topologicalOrder = newGraph.topologicalSortOrder();
            org.deeplearning4j.nn.graph.vertex.GraphVertex[] vertices = newGraph.getVertices();
            if (!editedVertices.isEmpty()) {
                //set params from orig graph as necessary to new graph
                for (int i = 0; i < topologicalOrder.length; i++) {

                    if (!vertices[topologicalOrder[i]].hasLayer())
                        continue;

                    org.deeplearning4j.nn.api.Layer layer = vertices[topologicalOrder[i]].getLayer();
                    String layerName = vertices[topologicalOrder[i]].getVertexName();
                    long range = layer.numParams();
                    if (range <= 0)
                        continue; //some layers have no params
                    if (editedVertices.contains(layerName))
                        continue; //keep the changed params
                    INDArray origParams = origGraph.getLayer(layerName).params();
                    layer.setParams(origParams.dup()); //copy over origGraph params
                }
            } else {
                newGraph.setParams(origGraph.params());
            }
            // ... (remainder of build() omitted)
}

So it simply calls layer.setParams to set each layer's parameters. Now we have a plan: build a model with exactly the same structure as VGG-16 and set VGG-16's parameters into it. In essence, what a trained deep learning model leaves behind is just its parameters, and with those we can use the model however we like. Without further ado, here is the code that builds our target model:

private static ComputationGraph buildModel() {
        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder().seed(123)
                .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).activation(Activation.RELU)
                .graphBuilder().addInputs("input1", "input2").addVertex("stack", new StackVertex(), "input1", "input2")
                .layer("conv1_1",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nIn(3).nOut(64)
                                .build(),
                        "stack")
                .layer("conv1_2",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(64).build(),
                        "conv1_1")
                .layer("pool1",
                        new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
                                .stride(2, 2).build(),
                        "conv1_2")
                // block 2
                .layer("conv2_1",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(128).build(),
                        "pool1")
                .layer("conv2_2",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(128).build(),
                        "conv2_1")
                .layer("pool2",
                        new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
                                .stride(2, 2).build(),
                        "conv2_2")
                // block 3
                .layer("conv3_1",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(256).build(),
                        "pool2")
                .layer("conv3_2",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(256).build(),
                        "conv3_1")
                .layer("conv3_3",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(256).build(),
                        "conv3_2")
                .layer("pool3",
                        new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
                                .stride(2, 2).build(),
                        "conv3_3")
                // block 4
                .layer("conv4_1",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
                        "pool3")
                .layer("conv4_2",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
                        "conv4_1")
                .layer("conv4_3",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
                        "conv4_2")
                .layer("pool4",
                        new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
                                .stride(2, 2).build(),
                        "conv4_3")
                // block 5
                .layer("conv5_1",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
                        "pool4")
                .layer("conv5_2",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
                        "conv5_1")
                .layer("conv5_3",
                        new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
                        "conv5_2")
                .layer("pool5",
                        new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
                                .stride(2, 2).build(),
                        "conv5_3")
                .addVertex("unStack1", new UnstackVertex(0, 2), "pool5")
                .addVertex("unStack2", new UnstackVertex(1, 2), "pool5")
                .addVertex("cosine", new CosineLambdaVertex(), "unStack1", "unStack2")
                .addLayer("out", new LossLayer.Builder().build(), "cosine").setOutputs("out")
                .setInputTypes(InputType.convolutionalFlat(224, 224, 3), InputType.convolutionalFlat(224, 224, 3))
                .build();
        ComputationGraph network = new ComputationGraph(conf);
        network.init();
        return network;
    }

Next, read VGG-16's parameters and set them into our new model. To keep the code simple, the layer names were chosen to match VGG-16's:

String vggLayerNames = "conv1_1,conv1_2,conv2_1,conv2_2,conv3_1,conv3_2,conv3_3,"
        + "conv4_1,conv4_2,conv4_3,conv5_1,conv5_2,conv5_3";
File vggfile = new File("F:/vgg16_dl4j_vggface_inference.v1.zip");
ComputationGraph vggFace = ModelSerializer.restoreComputationGraph(vggfile);
ComputationGraph model = buildModel();
for (String name : vggLayerNames.split(",")) {
    // dup() copies the parameters so the two graphs do not share the same buffer
    model.getLayer(name).setParams(vggFace.getLayer(name).params().dup());
}

That completes the feature extraction layers. After extracting features we need to compute the distance, which calls for a custom layer. DL4J's automatic differentiation makes custom layers very easy to implement; here we choose SameDiffLambdaVertex, because this layer needs no parameters and only computes cosine similarity. The code:

import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.inputs.InvalidInputTypeException;
import org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaVertex;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;

public class CosineLambdaVertex extends SameDiffLambdaVertex {

	@Override
	public SDVariable defineVertex(SameDiff sameDiff, VertexInputs inputs) {
		SDVariable input1 = inputs.getInput(0);
		SDVariable input2 = inputs.getInput(1);
		// Cosine similarity over dimensions 1,2,3 (the C,H,W axes) gives one score per
		// sample; expandDims reshapes the result from [batch] to [batch, 1]
		return sameDiff.expandDims(sameDiff.math.cosineSimilarity(input1, input2, 1, 2, 3), 1);
	}

	@Override
	public InputType getOutputType(int layerIndex, InputType... vertexInputs) throws InvalidInputTypeException {
		return InputType.feedForward(1);
	}
}

Note: expandDims widens the 1-D cosine result into a 2-D tensor so that the model's accuracy can later be validated on the LFW dataset.

DL4J also supports other kinds of custom layers and vertices, five in total:

  1. Layers: standard single input, single output layers defined using SameDiff. To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffLayer

  2. Lambda layers: as above, but without any parameters. You only need to implement a single method for these! To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaLayer

  3. Graph vertices: multiple inputs, single output layers usable only in ComputationGraph. To implement: extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffVertex

  4. Lambda vertices: as above, but without any parameters. Again, you only need to implement a single method for these! To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaVertex

  5. Output layers: An output layer, for calculating scores/losses. Used as the final layer in a network. To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffOutputLayer

Examples: https://github.com/eclipse/deeplearning4j-examples/tree/master/samediff-examples

Documentation: https://github.com/eclipse/deeplearning4j-examples/blob/master/samediff-examples/src/main/java/org/nd4j/examples/samediff/customizingdl4j/README.md

One last question remains: how do we define the output layer? It needs no parameters and no computation, only passing the cosine result through. DL4J's LossLayer fits this naturally: it has no parameters and its activation is the identity function (IDENTITY). With that, the model is complete; the final structure is:

=========================================================================================================
VertexName (VertexType)        nIn,nOut   TotalParams   ParamsShape                  Vertex Inputs       
=========================================================================================================
input1 (InputVertex)           -,-        -             -                            -                   
input2 (InputVertex)           -,-        -             -                            -                   
stack (StackVertex)            -,-        -             -                            [input1, input2]    
conv1_1 (ConvolutionLayer)     3,64       1,792         W:{64,3,3,3}, b:{1,64}       [stack]             
conv1_2 (ConvolutionLayer)     64,64      36,928        W:{64,64,3,3}, b:{1,64}      [conv1_1]           
pool1 (SubsamplingLayer)       -,-        0             -                            [conv1_2]           
conv2_1 (ConvolutionLayer)     64,128     73,856        W:{128,64,3,3}, b:{1,128}    [pool1]             
conv2_2 (ConvolutionLayer)     128,128    147,584       W:{128,128,3,3}, b:{1,128}   [conv2_1]           
pool2 (SubsamplingLayer)       -,-        0             -                            [conv2_2]           
conv3_1 (ConvolutionLayer)     128,256    295,168       W:{256,128,3,3}, b:{1,256}   [pool2]             
conv3_2 (ConvolutionLayer)     256,256    590,080       W:{256,256,3,3}, b:{1,256}   [conv3_1]           
conv3_3 (ConvolutionLayer)     256,256    590,080       W:{256,256,3,3}, b:{1,256}   [conv3_2]           
pool3 (SubsamplingLayer)       -,-        0             -                            [conv3_3]           
conv4_1 (ConvolutionLayer)     256,512    1,180,160     W:{512,256,3,3}, b:{1,512}   [pool3]             
conv4_2 (ConvolutionLayer)     512,512    2,359,808     W:{512,512,3,3}, b:{1,512}   [conv4_1]           
conv4_3 (ConvolutionLayer)     512,512    2,359,808     W:{512,512,3,3}, b:{1,512}   [conv4_2]           
pool4 (SubsamplingLayer)       -,-        0             -                            [conv4_3]           
conv5_1 (ConvolutionLayer)     512,512    2,359,808     W:{512,512,3,3}, b:{1,512}   [pool4]             
conv5_2 (ConvolutionLayer)     512,512    2,359,808     W:{512,512,3,3}, b:{1,512}   [conv5_1]           
conv5_3 (ConvolutionLayer)     512,512    2,359,808     W:{512,512,3,3}, b:{1,512}   [conv5_2]           
pool5 (SubsamplingLayer)       -,-        0             -                            [conv5_3]           
unStack1 (UnstackVertex)       -,-        -             -                            [pool5]             
unStack2 (UnstackVertex)       -,-        -             -                            [pool5]             
cosine (SameDiffGraphVertex)   -,-        -             -                            [unStack1, unStack2]
out (LossLayer)                -,-        0             -                            [cosine]            
---------------------------------------------------------------------------------------------------------
            Total Parameters:  14,714,688
        Trainable Parameters:  14,714,688
           Frozen Parameters:  0
=========================================================================================================

3. Validating Model Accuracy on LFW

The LFW dataset can be downloaded from http://vis-www.cs.umass.edu/lfw/; after downloading, I put it under F:\facerecognition.

Build the test set with both positive and negative examples: put two images of the same person in one folder and images of different people in another. The code:

import org.apache.commons.io.FileUtils;

import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class DataTools {
    private static final String PARENT_PATH = "F:/facerecognition";

    public static void main(String[] args) throws IOException {
        File file = new File(PARENT_PATH + "/lfw");
        List<File> list = Arrays.asList(file.listFiles());
        for (int i = 0; i < list.size(); i++) {
            String name = list.get(i).getName();
            File[] faceFileArray = list.get(i).listFiles();
            if (null == faceFileArray) {
                continue;
            }
            // build positive pairs: two images of the same person
            if (faceFileArray.length > 1) {
                String positiveFilePath = PARENT_PATH + "/pairs/1/" + name;
                File positiveFileDir = new File(positiveFilePath);
                if (positiveFileDir.exists()) {
                    FileUtils.deleteDirectory(positiveFileDir); // File.delete() cannot remove a non-empty directory
                }
                positiveFileDir.mkdirs(); // mkdirs() also creates the "pairs/1" parent on first run
                FileUtils.copyFile(faceFileArray[0], new File(positiveFilePath + "/" + faceFileArray[0].getName()));
                FileUtils.copyFile(faceFileArray[1], new File(positiveFilePath + "/" + faceFileArray[1].getName()));
            }
            // build negative pairs: images of two different people
            String negativeFilePath = PARENT_PATH + "/pairs/0/" + name;
            File negativeFileDir = new File(negativeFilePath);
            if (negativeFileDir.exists()) {
                FileUtils.deleteDirectory(negativeFileDir);
            }
            negativeFileDir.mkdirs();
            FileUtils.copyFile(faceFileArray[0], new File(negativeFilePath + "/" + faceFileArray[0].getName()));
            File[] differentFaceArray = list.get(randomInt(list.size(), i)).listFiles();
            int differentFaceIndex = randomInt(differentFaceArray.length, -1);
            FileUtils.copyFile(differentFaceArray[differentFaceIndex], new File(negativeFilePath + "/" + differentFaceArray[differentFaceIndex].getName()));
        }
    }

    public static int randomInt(int max, int target) {
        Random random = new Random();
        while (true) {
            int result = random.nextInt(max);
            if (result != target) {
                return result;
            }
        }
    }
}

With the test set in place, build the iterator. Images are read with NativeImageLoader, which was introduced in an earlier post on image processing with DataVec in deeplearning4j. The FacePair holder the iterator relies on is sketched right below, followed by the iterator itself.
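
The original listing does not include FacePair; a minimal sketch consistent with how it is used (getList() returns the pair's two image files, getLabel() returns 1 for the same person and 0 otherwise):

import java.io.File;
import java.util.List;

public class FacePair {
    private final List<File> list; // the two face images of this pair
    private final int label;       // 1 = same person, 0 = different people

    public FacePair(List<File> list, int label) {
        this.list = list;
        this.label = label;
    }

    public List<File> getList() { return list; }
    public int getLabel() { return label; }
}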

public class DataSetForEvaluation implements MultiDataSetIterator {
	private List<FacePair> facePairList;
	private int batchSize;
	private int totalBatches;
	private NativeImageLoader imageLoader;
	private int currentBatch = 0;

	public DataSetForEvaluation(List<FacePair> facePairList, int batchSize) {
		this.facePairList = facePairList;
		this.batchSize = batchSize;
		this.totalBatches = (int) Math.ceil((double) facePairList.size() / batchSize);
		this.imageLoader = new NativeImageLoader(224, 224, 3, new ResizeImageTransform(224, 224));
	}

	@Override
	public boolean hasNext() {
		return currentBatch < totalBatches;
	}

	@Override
	public MultiDataSet next() {
		return next(batchSize);
	}

	@Override
	public MultiDataSet next(int num) {
		int i = currentBatch * batchSize;
		int currentBatchSize = Math.min(batchSize, facePairList.size() - i);
		INDArray input1 = Nd4j.zeros(currentBatchSize, 3,224,224);
		INDArray input2 =  Nd4j.zeros(currentBatchSize, 3,224,224);
		INDArray label = Nd4j.zeros(currentBatchSize, 1);
		for (int j = 0; j < currentBatchSize; j++) {
			try {
				input1.put(new INDArrayIndex[]{NDArrayIndex.point(j),NDArrayIndex.all(),NDArrayIndex.all(),NDArrayIndex.all()}, imageLoader.asMatrix(facePairList.get(i).getList().get(0)).div(255));
				input2.put(new INDArrayIndex[]{NDArrayIndex.point(j),NDArrayIndex.all(),NDArrayIndex.all(),NDArrayIndex.all()},imageLoader.asMatrix(facePairList.get(i).getList().get(1)).div(255));
			} catch (Exception e) {
				e.printStackTrace();
			}
			label.putScalar((long) j, 0, facePairList.get(i).getLabel());
			++i;
		}
		System.out.println(currentBatch);
		++currentBatch;
		return new org.nd4j.linalg.dataset.MultiDataSet(new INDArray[] { input1, input2},
				new INDArray[] { label });
	}

	@Override
	public void setPreProcessor(MultiDataSetPreProcessor preProcessor) {

	}

	@Override
	public MultiDataSetPreProcessor getPreProcessor() {
		return null;
	}

	@Override
	public boolean resetSupported() {
		return true;
	}

	@Override
	public boolean asyncSupported() {
		return false;
	}

	@Override
	public void reset() {
		currentBatch = 0;
	}

}
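
With the iterator defined, evaluation reduces to a single call on the ComputationGraph. A sketch, where loadPairs is a hypothetical helper that walks the pairs/0 and pairs/1 directories produced by DataTools and builds FacePair objects:

List<FacePair> pairs = loadPairs(new File("F:/facerecognition/pairs")); // hypothetical helper
DataSetForEvaluation iterator = new DataSetForEvaluation(pairs, 32);
Evaluation eval = model.evaluate(iterator);                             // org.deeplearning4j.eval.Evaluation
System.out.println(eval.stats());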

Running the evaluation produces the numbers below: accuracy and precision are passable, but recall is low, which drags the F1 score down.

========================Evaluation Metrics========================
 # of classes:    2
 Accuracy:        0.8973
 Precision:       0.9119
 Recall:          0.6042
 F1 Score:        0.7268
Precision, recall & F1: reported for positive class (class 1 - "1") only


=========================Confusion Matrix=========================
    0    1
-----------
 5651   98 | 0 = 0
  665 1015 | 1 = 1

Confusion matrix format: Actual (rowClass) predicted as (columnClass) N times
==================================================================

4. Wrapping the Model as a Service with Spring Boot

Once saved, a model is just a pile of inert parameters. How do we turn it into an online service? Face recognition services come in two flavors: 1:1 and 1:N.

1. 1:1 applications

Typical 1:1 applications include face unlock on phones and DingTalk's face-based attendance check. These are straightforward: we only need to verify that Zhang San really is Zhang San, the computation is tiny, and it is easy to implement; a sketch follows.
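
A minimal 1:1 verification sketch using the model built above; the file paths are hypothetical and the 0.5 threshold is a placeholder, not a calibrated value, and should be swept on a validation set such as the LFW pairs from section 3:

NativeImageLoader loader = new NativeImageLoader(224, 224, 3, new ResizeImageTransform(224, 224));
INDArray face1 = loader.asMatrix(new File("probe.jpg")).div(255);      // hypothetical paths
INDArray face2 = loader.asMatrix(new File("reference.jpg")).div(255);

double cosine = model.output(face1, face2)[0].getDouble(0);            // cosine similarity in [-1, 1]
boolean samePerson = cosine > 0.5;                                     // placeholder threshold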

2. 1:N applications

A typical 1:N application is police face search: without knowing the target face's identity, find who it is in a massive face database. When the database is huge, the computation itself becomes a real problem.

If results do not have to come back in real time, the search can be run offline with Hadoop MapReduce or Spark; all we need to do is wrap the model in a Hive UDF, a MapReduce jar, or a Spark RDD job.

But when results must be real-time, the problem cannot simply be reduced to an indexing problem, so we need a compute framework that solves the global-max (or global top-K) problem in a distributed fashion. The rough structure:

(Diagram: a cluster of nodes; a client request lands on one node and fans out to the rest.)

Blue arrows show the request flow and green arrows the returned results: a client request lands on Node3, which forwards it to the other nodes so they compute in parallel. If each node has enough memory, the entire face database's embedding tensors can be preloaded and kept resident in memory to speed things up. Within a single node, the core operation is a brute-force top-K scan over those embeddings, sketched below.
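
A sketch of that scan, assuming the gallery embeddings have already been computed and loaded into an in-memory map:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.ops.transforms.Transforms;

import java.util.*;

public class TopKSearch {
    /** Return the ids of the k gallery embeddings most similar to the probe. */
    public static List<String> topK(INDArray probe, Map<String, INDArray> gallery, int k) {
        // Min-heap keyed on cosine similarity: the least similar candidate is evicted first
        PriorityQueue<Map.Entry<String, Double>> heap =
                new PriorityQueue<>(Comparator.comparingDouble(e -> e.getValue()));
        for (Map.Entry<String, INDArray> entry : gallery.entrySet()) {
            double sim = Transforms.cosineSim(probe, entry.getValue());
            heap.offer(new AbstractMap.SimpleEntry<>(entry.getKey(), sim));
            if (heap.size() > k) {
                heap.poll(); // drop the current worst once we hold more than k candidates
            }
        }
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) {
            result.add(heap.poll().getKey());
        }
        Collections.reverse(result); // most similar first
        return result;
    }
}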

This post does not implement that parallel framework; it only wraps the model as a service with Spring Boot. Run FaceRecognitionApplication and visit http://localhost:8080/index to see the service in action:
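
The original controller is not listed; a minimal sketch of what such an endpoint might look like (the endpoint path, model path, and parameter names are assumptions):

import org.datavec.image.loader.NativeImageLoader;
import org.datavec.image.transform.ResizeImageTransform;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

import java.io.File;
import java.io.IOException;

@RestController
public class FaceCompareController {

    private final ComputationGraph model;
    private final NativeImageLoader loader =
            new NativeImageLoader(224, 224, 3, new ResizeImageTransform(224, 224));

    public FaceCompareController() throws IOException {
        // Load the saved graph once at startup; the path is an assumption
        model = ModelSerializer.restoreComputationGraph(new File("F:/facerecognition/model.zip"));
    }

    @PostMapping("/compare")
    public double compare(@RequestParam("face1") MultipartFile face1,
                          @RequestParam("face2") MultipartFile face2) throws IOException {
        INDArray a = loader.asMatrix(face1.getInputStream()).div(255);
        INDArray b = loader.asMatrix(face2.getInputStream()).div(255);
        return model.output(a, b)[0].getDouble(0); // cosine similarity in [-1, 1]
    }
}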

(Screenshot: the running face comparison page.)

The main aim here has been to show how to put DL4J to work in practice: obtaining pretrained model parameters, implementing custom layers and custom iterators, and wrapping the model as a service with Spring Boot.

Of course, a real face recognition system needs more than an image embedding plus a tensor distance: there is face alignment, defense against AI attacks (a later post will show how to run FGSM attacks with DL4J), feature extraction for key facial regions, and plenty of other fine-grained work. Turning face recognition into a general SaaS offering takes more work still.

Training a good face recognition model also takes several loss functions working together, for example SoftMax classification first, then fine-tuning with Center Loss or Triplet Loss. A later post will show how to implement Triplet Loss in DL4J to train a face recognition model.

That wraps up how to build a face recognition system with DL4J; hopefully some of these techniques prove useful in your own work.
