Example Introduction
A face recognition program written in Python, together with a very detailed PDF document introducing the Eigenfaces / Principal Component Analysis (PCA) algorithm.
[...] changes in the light conditions (center light, left light, right light), facial expressions (happy, normal, sad, sleepy, surprised, wink) and glasses (glasses, no-glasses). The original images are not cropped or aligned. I've prepared a Python script, available in [...], that does the job for you.

Extended Yale Facedatabase B: The Extended Yale Facedatabase B contains 2414 images of 38 different people in its cropped version. The focus is on extracting features that are robust to illumination; the images have almost no variation in emotion or occlusion. I personally think that this dataset is too large for the experiments performed in this document, you had better use the AT&T Facedatabase. A first version of the Yale Facedatabase B was used in [3] to see how the Eigenfaces and Fisherfaces methods (section 2.3) perform under heavy illumination changes. [10] used the same setup to take 16128 images of 28 people. The Extended Yale Facedatabase B is the merge of the two databases, which is now known as the Extended Yale Facedatabase B.

The face images need to be stored in a folder hierarchy similar to <database name>/<subject name>/<filename>.<ext>. The AT&T Facedatabase for example comes in such a hierarchy, see Listing 1.

Listing 1:

    philipp@mango:~/facerec/data/at$ tree
    .
    |-- README
    |-- s1
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm
    |-- s2
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm
    ...
    |-- s40
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm

2.1.1 Reading the images with Python

The function in Listing 2 can be used to read in the images for each subfolder of a given directory. Each directory is given a unique (integer) label; you probably want to store the folder name as well. The function returns the images and the corresponding classes. This function is really basic and there's much to enhance, but it does its job.

Listing 2: src/py/tinyfacerec/util.py

    import os
    import sys
    import numpy as np
    from PIL import Image

    def read_images(path, sz=None):
        c = 0
        X, y = [], []
        for dirname, dirnames, filenames in os.walk(path):
            for subdirname in dirnames:
                subject_path = os.path.join(dirname, subdirname)
                for filename in os.listdir(subject_path):
                    try:
                        im = Image.open(os.path.join(subject_path, filename))
                        im = im.convert("L")
                        # resize to given size (if given)
                        if (sz is not None):
                            im = im.resize(sz, Image.ANTIALIAS)
                        X.append(np.asarray(im, dtype=np.uint8))
                        y.append(c)
                    except IOError, (errno, strerror):
                        print "I/O error({0}): {1}".format(errno, strerror)
                    except:
                        print "Unexpected error:", sys.exc_info()[0]
                        raise
                c = c + 1
        return [X, y]

2.2 Eigenfaces

The problem with the image representation we are given is its high dimensionality. Two-dimensional p x q grayscale images span an m = pq-dimensional vector space, so an image with 100 x 100 pixels lies in a 10,000-dimensional image space already. That's way too much for any computations, but are all dimensions really useful for us? We can only make a decision if there's any variance in the data, so what we are looking for are the components that account for most of the information. The Principal Component Analysis (PCA) was independently proposed by Karl Pearson (1901) and Harold Hotelling (1933) to turn a set of possibly correlated variables into a smaller set of uncorrelated variables. The idea is that a high-dimensional dataset is often described by correlated variables, and therefore only a few meaningful dimensions account for most of the information. The PCA method finds the directions with the greatest variance in the data, called principal components.

2.2.1 Algorithmic Description

Let X = \{x_1, x_2, \ldots, x_n\} be a random vector with observations x_i \in \mathbb{R}^d.

1. Compute the mean \mu:

    \mu = \frac{1}{n} \sum_{i=1}^{n} x_i    (1)

2. Compute the covariance matrix S:

    S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T    (2)

3. Compute the eigenvalues \lambda_i and eigenvectors v_i of S:

    S v_i = \lambda_i v_i, \quad i = 1, 2, \ldots, n    (3)

4. Order the eigenvectors descending by their eigenvalue. The k principal components are the eigenvectors corresponding to the k largest eigenvalues.

The k principal components of an observed vector x are then given by:

    y = W^T (x - \mu)    (4)

where W = (v_1, v_2, \ldots, v_k). The reconstruction from the PCA basis is given by:

    x = W y + \mu    (5)
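Before looking at the actual implementation later in Listing 4, here is a minimal NumPy sketch of Equations 1 to 5 (the variable names and toy data are mine; the dimensionality problem discussed below is ignored here):

    import numpy as np

    # toy data (names are mine): n = 50 observations with d = 5 features,
    # one observation per row
    np.random.seed(0)
    X = np.random.rand(50, 5)
    k = 2

    # Equation 1: the mean
    mu = X.mean(axis=0)
    # Equation 2: the covariance matrix
    Xc = X - mu
    S = np.dot(Xc.T, Xc) / float(Xc.shape[0])
    # Equation 3: eigenvalues and eigenvectors of the symmetric matrix S
    evals, evecs = np.linalg.eigh(S)
    # step 4: order descending by eigenvalue, keep the k largest
    idx = np.argsort(-evals)
    W = evecs[:, idx[:k]]
    # Equation 4 (observations by row): y = (x - mu) W
    Y = np.dot(Xc, W)
    # Equation 5: reconstruction from the PCA basis
    X_rec = np.dot(Y, W.T) + mu
    print np.abs(X - X_rec).max()  # small, but nonzero: only k of d components kept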
The Eigenfaces method then performs face recognition by:

1. Projecting all training samples into the PCA subspace (using Equation 4).
2. Projecting the query image into the PCA subspace (using Listing 5).
3. Finding the nearest neighbor between the projected training images and the projected query image.

Still there's one problem left to solve. Imagine we are given 400 images sized 100 x 100 pixels. The Principal Component Analysis solves the covariance matrix S = X X^T, where size(X) = 10000 x 400 in our example. You would end up with a 10000 x 10000 matrix, roughly 0.8 GB. Solving this problem isn't feasible, so we'll need to apply a trick. From your linear algebra lessons you know that an M x N matrix with M > N can only have N - 1 non-zero eigenvalues. So it's possible to take the eigenvalue decomposition of S = X^T X of size N x N instead:

    X^T X v_i = \lambda_i v_i    (6)

and get the original eigenvectors of S = X X^T with a left multiplication of the data matrix:

    X X^T (X v_i) = \lambda_i (X v_i)    (7)

The resulting eigenvectors are orthogonal; to get orthonormal eigenvectors they need to be normalized to unit length. I don't want to turn this into a publication, so please look into [7] for the derivation and proof of the equations.

2.2.2 Eigenfaces in Python

We've already seen that the Eigenfaces and Fisherfaces methods expect a data matrix with observations by row (or column if you prefer it). Listing 3 defines two functions to reshape a list of multi-dimensional data into a data matrix. Note that all samples are assumed to be of equal size.

Listing 3: src/py/tinyfacerec/util.py

    import numpy as np

    def asRowMatrix(X):
        if len(X) == 0:
            return np.array([])
        mat = np.empty((0, X[0].size), dtype=X[0].dtype)
        for row in X:
            mat = np.vstack((mat, np.asarray(row).reshape(1, -1)))
        return mat

    def asColumnMatrix(X):
        if len(X) == 0:
            return np.array([])
        mat = np.empty((X[0].size, 0), dtype=X[0].dtype)
        for col in X:
            mat = np.hstack((mat, np.asarray(col).reshape(-1, 1)))
        return mat

Translating the PCA from the algorithmic description of section 2.2.1 to Python is almost trivial. Don't copy and paste from this document, the source code is available in the folder src/py/tinyfacerec. Listing 4 implements the Principal Component Analysis given by Equations 1, 2 and 3. It also implements the inner-product PCA formulation, which occurs if there are more dimensions than samples. You can shorten this code, I just wanted to point out how it works.

Listing 4: src/py/tinyfacerec/subspace.py

    import numpy as np

    def pca(X, y, num_components=0):
        [n, d] = X.shape
        if (num_components <= 0) or (num_components > n):
            num_components = n
        mu = X.mean(axis=0)
        X = X - mu
        if n > d:
            C = np.dot(X.T, X)
            [eigenvalues, eigenvectors] = np.linalg.eigh(C)
        else:
            C = np.dot(X, X.T)
            [eigenvalues, eigenvectors] = np.linalg.eigh(C)
            eigenvectors = np.dot(X.T, eigenvectors)
            for i in xrange(n):
                eigenvectors[:, i] = eigenvectors[:, i] / np.linalg.norm(eigenvectors[:, i])
        # or simply perform an economy size decomposition:
        # eigenvectors, eigenvalues, variance = np.linalg.svd(X.T, full_matrices=False)
        # sort eigenvectors descending by their eigenvalue
        idx = np.argsort(-eigenvalues)
        eigenvalues = eigenvalues[idx]
        eigenvectors = eigenvectors[:, idx]
        # select only num_components
        eigenvalues = eigenvalues[0:num_components].copy()
        eigenvectors = eigenvectors[:, 0:num_components].copy()
        return [eigenvalues, eigenvectors, mu]
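To convince yourself that the inner-product trick from Equations 6 and 7 really recovers the eigenvectors of S = X X^T, a small numerical check can help (a sketch with my own toy data, not part of tinyfacerec):

    import numpy as np

    # hypothetical toy data (names are mine): columns are observations,
    # d = 1000 dimensions, n = 5 samples, so d >> n
    np.random.seed(0)
    X = np.random.rand(1000, 5)

    # Equation 6: eigenvectors v_i of the small n x n matrix X^T X ...
    evals, V = np.linalg.eigh(np.dot(X.T, X))
    # Equation 7: ... mapped through X are eigenvectors of the big
    # matrix X X^T, normalized to unit length afterwards
    U = np.dot(X, V)
    U = U / np.linalg.norm(U, axis=0)

    # check S u_i = lambda_i u_i for S = X X^T
    S = np.dot(X, X.T)
    print np.allclose(np.dot(S, U), evals * U)  # True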
The observations are given by row, so the projection in Equation 4 needs to be rearranged a little:

Listing 5: src/py/tinyfacerec/subspace.py

    def project(W, X, mu=None):
        if mu is None:
            return np.dot(X, W)
        return np.dot(X - mu, W)

The same applies to the reconstruction in Equation 5:

Listing 6: src/py/tinyfacerec/subspace.py

    def reconstruct(W, Y, mu=None):
        if mu is None:
            return np.dot(Y, W.T)
        return np.dot(Y, W.T) + mu

Now that everything is defined, it's time for the fun stuff. The face images are read with Listing 2 and then a full PCA (see Listing 4) is performed. I'll use the great matplotlib library for plotting in Python, please install it if you haven't done so already.

Listing 7: src/py/scripts/example_pca.py

    import sys
    # append tinyfacerec to module search path
    sys.path.append("..")
    # import numpy and matplotlib colormaps
    import numpy as np
    # import tinyfacerec modules
    from tinyfacerec.subspace import pca
    from tinyfacerec.util import normalize, asRowMatrix, read_images
    from tinyfacerec.visual import subplot

    # read images
    [X, y] = read_images("/home/philipp/facerec/data/at")
    # perform a full pca
    [D, W, mu] = pca(asRowMatrix(X), y)

That's it already. Pretty easy, no? Each principal component has the same length as the original image, thus it can be displayed as an image. [13] referred to these ghostly looking faces as Eigenfaces, and that's where the Eigenfaces method got its name from. We'll now want to look at the Eigenfaces, but first of all we need a method to turn the data into a representation matplotlib understands. The eigenvectors we have calculated can contain negative values, but the image data is expected as unsigned integer values in the range of 0 to 255. So we need a function to normalize the data first (Listing 8):

Listing 8: src/py/tinyfacerec/util.py

    def normalize(X, low, high, dtype=None):
        X = np.asarray(X)
        minX, maxX = np.min(X), np.max(X)
        # normalize to [0...1]
        X = X - float(minX)
        X = X / float((maxX - minX))
        # scale to [low...high]
        X = X * (high - low)
        X = X + low
        if dtype is None:
            return np.asarray(X)
        return np.asarray(X, dtype=dtype)

In Python we'll then define a subplot method (see src/py/tinyfacerec/visual.py) to simplify the plotting. The method takes a list of images, a title, a color scale and finally generates a subplot.

Listing 9: src/py/tinyfacerec/visual.py

    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.cm as cm

    def create_font(fontname='Tahoma', fontsize=10):
        return {'fontname': fontname, 'fontsize': fontsize}

    def subplot(title, images, rows, cols, sptitle="subplot", sptitles=[], colormap=cm.gray, ticks_visible=True, filename=None):
        fig = plt.figure()
        # main title
        fig.text(.5, .95, title, horizontalalignment='center')
        for i in xrange(len(images)):
            ax0 = fig.add_subplot(rows, cols, (i + 1))
            plt.setp(ax0.get_xticklabels(), visible=False)
            plt.setp(ax0.get_yticklabels(), visible=False)
            if len(sptitles) == len(images):
                plt.title("%s #%s" % (sptitle, str(sptitles[i])), create_font('Tahoma', 10))
            else:
                plt.title("%s #%d" % (sptitle, (i + 1)), create_font('Tahoma', 10))
            plt.imshow(np.asarray(images[i]), cmap=colormap)
        if filename is None:
            plt.show()
        else:
            fig.savefig(filename)

This simplifies the Python script to Listing 10:

Listing 10: src/py/scripts/example_pca.py

    import matplotlib.cm as cm

    # turn the first (at most) 16 eigenvectors into grayscale
    # images (note: eigenvectors are stored by column!)
    E = []
    for i in xrange(min(len(X), 16)):
        e = W[:, i].reshape(X[0].shape)
        E.append(normalize(e, 0, 255))
    # plot them and store the plot to "python_pca_eigenfaces.png"
    subplot(title="Eigenfaces AT&T Facedatabase", images=E, rows=4, cols=4, sptitle="Eigenface", colormap=cm.jet, filename="python_pca_eigenfaces.png")

I've used the jet colormap, so you can see how the grayscale values are distributed within the specific Eigenfaces.
You can see that the Eigenfaces do not only encode facial features, but also the illumination in the images (see the left light in Eigenface #4, the right light in Eigenface #5).

[Figure: "Eigenfaces AT&T Facedatabase", the first 16 Eigenfaces plotted as a 4x4 grid, Eigenface #1 through Eigenface #16.]

We've already seen in Equation 5 that we can reconstruct a face from its lower-dimensional approximation. So let's see how many Eigenfaces are needed for a good reconstruction. I'll do a subplot with 10, 30, ..., 310 Eigenfaces:

Listing 11: src/py/scripts/example_pca.py

    from tinyfacerec.subspace import project, reconstruct

    # reconstruction steps
    steps = [i for i in xrange(10, min(len(X), 320), 20)]
    E = []
    for i in xrange(min(len(steps), 16)):
        numEvs = steps[i]
        P = project(W[:, 0:numEvs], X[0].reshape(1, -1), mu)
        R = reconstruct(W[:, 0:numEvs], P, mu)
        # reshape and append to plots
        R = R.reshape(X[0].shape)
        E.append(normalize(R, 0, 255))
    # plot them and store the plot to "python_pca_reconstruction.png"
    subplot(title="Reconstruction AT&T Facedatabase", images=E, rows=4, cols=4, sptitle="Eigenvectors", sptitles=steps, colormap=cm.gray, filename="python_pca_reconstruction.png")

10 eigenvectors are obviously not sufficient for a good image reconstruction, 50 eigenvectors may already be sufficient to encode important facial features. You'll get a good reconstruction with approximately 300 eigenvectors for the AT&T Facedatabase. There are rules of thumb for how many Eigenfaces you should choose for successful face recognition, but it heavily depends on the input data. [15] is the perfect point to start researching this.

[Figure: "Reconstruction AT&T Facedatabase", reconstructions with 10, 30, ..., 310 eigenvectors plotted as a 4x4 grid.]

Now we have got everything to implement the Eigenfaces method. Python is object oriented and so is our Eigenfaces model. Let's recap: the Eigenfaces method is basically a Principal Component Analysis with a Nearest Neighbor model. Some publications report about the influence of the distance metric (I can't support these claims with my research), so various distance metrics for the Nearest Neighbor search should be supported. Listing 12 defines an AbstractDistance as the abstract base class for each distance metric. Every subclass overrides the call operator __call__, as shown for the Euclidean Distance and the Negated Cosine Distance. If you need more distance metrics, please have a look at the distance metrics implemented in https://www.github.com/bytefish/facerec.

Listing 12: src/py/tinyfacerec/distance.py

    import numpy as np

    class AbstractDistance(object):
        def __init__(self, name):
            self._name = name

        def __call__(self, p, q):
            raise NotImplementedError("Every AbstractDistance must implement the __call__ method.")

        @property
        def name(self):
            return self._name

        def __repr__(self):
            return self._name

    class EuclideanDistance(AbstractDistance):
        def __init__(self):
            AbstractDistance.__init__(self, "EuclideanDistance")

        def __call__(self, p, q):
            p = np.asarray(p).flatten()
            q = np.asarray(q).flatten()
            return np.sqrt(np.sum(np.power((p - q), 2)))

    class CosineDistance(AbstractDistance):
        def __init__(self):
            AbstractDistance.__init__(self, "CosineDistance")

        def __call__(self, p, q):
            p = np.asarray(p).flatten()
            q = np.asarray(q).flatten()
            return -np.dot(p.T, q) / (np.sqrt(np.dot(p, p.T) * np.dot(q, q.T)))
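As a quick sanity check, here is how these metric objects might be used (a sketch, not part of the original document; the toy vectors are mine):

    import numpy as np
    from tinyfacerec.distance import EuclideanDistance, CosineDistance

    p = np.array([1.0, 0.0])
    q = np.array([0.0, 1.0])
    # the Euclidean distance of two perpendicular unit vectors is sqrt(2)
    print EuclideanDistance()(p, q)  # 1.4142...
    # the negated cosine distance of orthogonal vectors is 0
    print CosineDistance()(p, q)     # -0.0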
The Eigenfaces and Fisherfaces methods both share common functionality, so we'll define a base prediction model in Listing 13. I don't want to do a full k-Nearest Neighbor implementation here, because (1) the number of neighbors doesn't really matter for both methods and (2) it would confuse people. However, feel free to extend these basic classes for your needs. If you are implementing it in a language of your choice, you should separate the feature extraction and classification from the model itself. A really generic approach is given in my facerec framework.

Listing 13: src/py/tinyfacerec/model.py

    import numpy as np
    from util import asRowMatrix
    from subspace import pca, project
    from distance import EuclideanDistance

    class BaseModel(object):
        def __init__(self, X=None, y=None, dist_metric=EuclideanDistance(), num_components=0):
            self.dist_metric = dist_metric
            self.num_components = num_components
            self.projections = []
            self.W = []
            self.mu = []
            if (X is not None) and (y is not None):
                self.compute(X, y)

        def compute(self, X, y):
            raise NotImplementedError("Every BaseModel must implement the compute method.")

        def predict(self, X):
            minDist = np.finfo('float').max
            minClass = -1
            Q = project(self.W, X.reshape(1, -1), self.mu)
            for i in xrange(len(self.projections)):
                dist = self.dist_metric(self.projections[i], Q)
                if dist < minDist:
                    minDist = dist
                    minClass = self.y[i]
            return minClass

Listing 14 then subclasses the EigenfacesModel from the BaseModel, so only the compute method needs to be overridden with our specific feature extraction. The prediction is a 1-Nearest Neighbor search with a distance metric.

Listing 14: src/py/tinyfacerec/model.py

    class EigenfacesModel(BaseModel):
        def __init__(self, X=None, y=None, dist_metric=EuclideanDistance(), num_components=0):
            super(EigenfacesModel, self).__init__(X=X, y=y, dist_metric=dist_metric, num_components=num_components)

        def compute(self, X, y):
            [D, self.W, self.mu] = pca(asRowMatrix(X), y, self.num_components)
            # store labels
            self.y = y
            # store projections
            for xi in X:
                self.projections.append(project(self.W, xi.reshape(1, -1), self.mu))

Now that the EigenfacesModel is defined, it can be used to learn the Eigenfaces and generate predictions. In the following Listing 15 we'll load the Yale Facedatabase A and perform a prediction on the first image.

Listing 15: src/py/scripts/example_model_eigenfaces.py

    import sys
    # append tinyfacerec to module search path
    sys.path.append("..")
    # import numpy and matplotlib colormaps
    import numpy as np
    # import tinyfacerec modules
    from tinyfacerec.util import read_images
    from tinyfacerec.model import EigenfacesModel

    # read images
    [X, y] = read_images("/home/philipp/facerec/data/yalefaces_recognition")
    # compute the eigenfaces model
    model = EigenfacesModel(X[1:], y[1:])
    # get a prediction for the first observation
    print "expected =", y[0], "/", "predicted =", model.predict(X[0])

2.3 Fisherfaces

The Linear Discriminant Analysis was invented by the great statistician Sir R. A. Fisher, who successfully used it for classifying flowers in his 1936 paper "The use of multiple measurements in taxonomic problems" [5]. But why do we need another dimensionality reduction method, if the Principal Component Analysis (PCA) did such a good job? The PCA finds a linear combination of features that maximizes the total variance in the data. While this is clearly a powerful way to represent data, it doesn't consider any classes, and so a lot of discriminative information may be lost when throwing components away. Imagine a situation where the variance is generated by an external source, let it be the light. The components identified by a PCA do not necessarily contain any discriminative information at all, so the projected samples are smeared together and a classification becomes impossible.
In order to find the combination of features that separates best between classes, the Linear Discriminant Analysis maximizes the ratio of between-classes to within-classes scatter. The idea is simple: the same classes should cluster tightly together, while different classes are as far away as possible from each other. This was also recognized by Belhumeur, Hespanha and Kriegman, and so they applied a Discriminant Analysis to face recognition in [3].

2.3.1 Algorithmic Description

Let X be a random vector with samples drawn from c classes:

    X = \{X_1, X_2, \ldots, X_c\}
    X_i = \{x_1, x_2, \ldots, x_n\}

The scatter matrices S_B and S_W are calculated as:

    S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T    (10)

    S_W = \sum_{i=1}^{c} \sum_{x_j \in X_i} (x_j - \mu_i)(x_j - \mu_i)^T    (11)

where \mu is the total mean, \mu_i is the mean of class i \in \{1, \ldots, c\}, and N_i is the number of samples in class X_i.
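A minimal NumPy sketch of Equations 10 and 11 (my own helper function, not part of the tinyfacerec code shown above) might look like this:

    import numpy as np

    def scatter_matrices(X, y):
        # X: one observation per row, y: integer class labels
        X = np.asarray(X, dtype=np.float64)
        y = np.asarray(y)
        d = X.shape[1]
        mu = X.mean(axis=0)                      # total mean
        Sb = np.zeros((d, d))
        Sw = np.zeros((d, d))
        for c in np.unique(y):
            Xi = X[y == c]                       # all samples of class c
            mui = Xi.mean(axis=0)                # class mean
            # between-class scatter: N_i * (mu_i - mu)(mu_i - mu)^T
            Sb += Xi.shape[0] * np.outer(mui - mu, mui - mu)
            # within-class scatter: sum of (x_j - mu_i)(x_j - mu_i)^T
            Sw += np.dot((Xi - mui).T, (Xi - mui))
        return Sb, Sw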