Face Recognition Literature Translation (Chinese-English Bilingual)


4.1 Feature Localization

Before discussing the methods of comparing two facial images, we now take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localization. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localization here, with a brief discussion of face detection in the literature review.

The eye localization method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of the face recognition accuracy and not a product of the performance of the eye localization routine, all image alignments are manually checked and any errors corrected, prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template-based method. A training set of manually pre-aligned images of faces is taken, and each image cropped to an area around both eyes. The average image is calculated and used as a template.
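As an illustration of this step, the following sketch (in Python with NumPy, not part of the original work) averages a set of pre-aligned eye-region crops into a template; the function name and the assumption that the crops are equally sized grayscale arrays are ours.

import numpy as np

def build_eye_template(eye_crops):
    # eye_crops: list of equally sized 2-D grayscale arrays, each cropped
    # to the area around both eyes (assumed input format)
    stack = np.stack([crop.astype(np.float64) for crop in eye_crops])
    # the mean over the training crops gives the "average eyes" template
    return stack.mean(axis=0)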

Figure 4-1 - The average eyes, used as a template for eye detection.

Both eyes are included in a single template, rather than individually searching for each eye in turn, as the characteristic symmetry of the eyes either side of the nose provides a useful feature that helps distinguish between the eyes and other false positives that may be picked up in the background. However, this method is highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason is that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye sockets, but the area of skin below the eyes helps to distinguish the eyes from eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference taken to that of the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using a smaller template of the individual left and right eyes then refines each eye position.
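A minimal sketch of this search, assuming grayscale NumPy arrays and a template built as above (the brute-force scan and the function name below are illustrative, not the original implementation):

import numpy as np

def locate_eyes(image, template):
    # slide the template over every position and keep the window with the
    # lowest summed absolute difference to the average eyes
    ih, iw = image.shape
    th, tw = template.shape
    best_pos, best_err = (0, 0), float('inf')
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y:y + th, x:x + tw].astype(np.float64)
            err = np.abs(window - template).sum()
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos, best_err

The same routine, re-run with the smaller single-eye templates inside the returned region, would then refine the individual eye positions as described above.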

This basic template-based method of eye localization, although providing fairly precise localizations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localization is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localizations, we compute the average distance from the eye template (Figure 4-2 top). Note that the image is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eye, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 - Distance to the eye template for successful detections (top), indicating variance due to noise, and failed detections (bottom), showing credible variance due to mis-detected features.

In the lower image (Figure 4-2 bottom), we have taken the set of failed localizations (images of the forehead, nose, cheeks, background etc. falsely detected by the localization routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasize the difference of the pupil regions for these failed matches and minimize the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector, as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights, used to give higher priority to those pixels that best represent the eyes.
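A sketch of how such a weighting might be computed and applied, assuming the two average-distance images of Figure 4-2 are available as arrays (the small epsilon guarding against division by zero and the function names are our additions):

import numpy as np

def eye_template_weights(avg_diff_success, avg_diff_failed, eps=1e-6):
    # divide the failed-localization distance image by the successful one,
    # emphasizing pixels (e.g. the pupils) that separate eyes from false matches
    return avg_diff_failed / (avg_diff_success + eps)

def weighted_error(window, template, weights):
    # weighted sum of absolute differences, used in place of the plain sum
    return (weights * np.abs(window.astype(np.float64) - template)).sum()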

4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio), involving the direct comparison of pixel intensity values taken from facial images. We use the term 'Direct Correlation' to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not infer that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson's correlation, it can be considered a scale- and translation-sensitive form of image correlation), as this is consistent with the contrast made between image space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centers are located at two specified pixel coordinates and the image cropped to remove any background information. These images are stored as grayscale bitmaps of 65 by 82 pixels and, prior to recognition, converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each corresponding vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and, again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q and gallery image g), we get an indication of similarity. A threshold is then applied to make the final verification decision.
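The whole decision procedure is small enough to state directly; the sketch below assumes aligned, cropped 65 by 82 grayscale arrays and an externally chosen threshold (the function names and the threshold parameter are illustrative, not part of the original text):

import numpy as np

def to_vector(face_image):
    # flatten an aligned, cropped 65x82 grayscale face into a 5330-element vector
    return face_image.astype(np.float64).ravel()

def verify(query_image, gallery_image, threshold):
    # Euclidean distance d between query vector q and gallery vector g;
    # accept the claimed identity if d falls below the threshold
    d = np.linalg.norm(to_vector(query_image) - to_vector(gallery_image))
    return d <= threshold, d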

4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (i.e. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentation in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use Fisher's Linear Discriminant to analyze individual subspace components in section 7 and the identification test for the final evaluations described in section 8. The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or a rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps as a swipe card, proximity chip or PIN number. This number is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image) and compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision.

The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large such that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.
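As a concrete illustration of how such error rates might be tallied, the following sketch computes false acceptance and false rejection rates from a list of comparison distances and ground-truth labels (the variable names and array-based layout are assumptions, not the original evaluation code):

import numpy as np

def error_rates(distances, same_person, threshold):
    # distances: one Euclidean distance per image comparison
    # same_person: True where the pair really shows the same person
    distances = np.asarray(distances, dtype=float)
    same_person = np.asarray(same_person, dtype=bool)
    accepted = distances <= threshold
    # false acceptance rate: impostor pairs wrongly accepted
    far = accepted[~same_person].mean() if (~same_person).any() else 0.0
    # false rejection rate: genuine pairs wrongly rejected
    frr = (~accepted)[same_person].mean() if same_person.any() else 0.0
    return far, frr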

However, it should also be noted that if the results are to be representative of system performance in a real-world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition, which may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of 'difficult' images than would be expected in the perceived operating conditions, and hence higher error rates in the results.
