HiddenMarkovModel
Lecture by Professor Xiaole Shirley Liu. Wiki by Da Lin and Allen Cheng.

1. Discrete Markov Chain
   i. Markov Chain Example 1
   ii. Markov Chain Example 2
   iii. Markov Chain Example 3
2. Hidden Markov Model
3. Problems
4. Problem 1
5. Problem 2
6. Viterbi Algorithm
7. Problem 3
8. Additional Applications of Hidden Markov Models
9. Additional Resources

The Markov chain was named in honor of Professor Andrei A. Markov, who first published his findings in 1906. Markov was born on June 14, 1856 in Ryazan, Russia, and studied mathematics at the University of St. Petersburg, where he later taught as a professor until his death in 1922. Although his work in mathematics ranges from differential equations to probability theory, he has become best known for his research on stochastic processes and the Markov chain.

On this page, we first give a brief introduction to Markov processes and the discrete Markov chain. We then present the Hidden Markov Model and solve it with three different methods: forward, backward, and the Viterbi algorithm. Finally, we will examine and solve a Hidden Markov problem in a biological context.
Discrete Markov Chain
1/12
23/09/13
Markov Chain Example 1
Suppose Diana has a test tomorrow, and she wants to know the probability of receiving a passing grade. Of the 6 previous exams this semester, she has passed the first three and failed the most recent three. Her genius roommate has also calculated for her the transition probabilities a(i,j) for her exam states:
Markov Chain Example 2
Frank the Weatherman's computer has malfunctioned, so he needs to predict the 7-day weather forecast by hand. To simplify his task, he decides to report only 3 different states: rain (R), cloudy (C), or sunny (S). From college, he remembers that the transition probabilities for the different weather states are:
q(1) = 3, the probability that q(2) = 3 is again a(3,3), and so forth. This is solved mathematically below:
Markov Chain Example 3
Every year at the company-wide meeting, a select employee is honored with the chance at a promotion. In order to get the promotion, they must guess correctly in the following game. The CEO has two coins, one fair and the other biased, and holds one in each hand. The CEO hands the employee a coin with the following initial probability:
After eight tosses, the CEO asks the employee to guess the order of the coins he or she flipped (fair or biased). Knowing the initial probabilities, and these transition probabilities:
Daniel guesses FFBBFFFB. What is the probability that he receives the promotion?

Solution: This is a Markov chain with 2 states, S(1) = fair and S(2) = biased. This question is very similar to Example 2, only instead of knowing for certain what the initial state is, we need to include an initial probability. From the initial probability, we can see that the probability of being handed a fair coin at the beginning is 0.6. Each subsequent probability is just the probability of transitioning from one state to the other. Intuitively, we are asking: what is the probability of being handed a fair coin initially? Once we have the fair coin, what is the probability of tossing the fair coin again? And after the second toss with the fair coin, what is the probability of tossing a biased coin? And so forth. This is solved mathematically below:
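The calculation is just a product of an initial probability and a chain of transition probabilities. A minimal sketch: the 0.6 initial fair-coin probability comes from the text, but the transition values below are hypothetical stand-ins, since the lecture's actual table appears only in a figure that is not reproduced here.

```python
# P(q1..qT) = pi(q1) * prod_t a(q_{t-1}, q_t) for a discrete Markov chain.
init = {"F": 0.6, "B": 0.4}                       # 0.6 is from the text
trans = {("F", "F"): 0.9, ("F", "B"): 0.1,        # hypothetical values
         ("B", "F"): 0.1, ("B", "B"): 0.9}

def chain_probability(states, init, trans):
    """Multiply the initial probability by each transition probability."""
    p = init[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= trans[(prev, cur)]
    return p

p = chain_probability("FFBBFFFB", init, trans)
```

With these stand-in numbers, Daniel's guess FFBBFFFB has probability 0.6 x 0.9 x 0.1 x 0.9 x 0.1 x 0.9 x 0.9 x 0.1.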
Hidden Markov Model
Problems
Given an observation sequence, it becomes of interest to solve several problems. The final goal is to formulate the most likely state sequence that produces the observation sequence.
Problem 1
First, given the parameters and probabilities, what is the probability of observing the observation sequence? Additionally, what is the probability that, at a given observation point, a state is observed? The first problem is a simple collection of probabilities. Suppose that we know or formulate a state sequence. The probability that the observation sequence will be observed with this state sequence is calculated by multiplying the relevant emission probability at each point:
We will then need to calculate the probability of observing the state sequence using transition probabilities:
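The two quantities can be combined: the joint probability of an observation sequence and a known state sequence is the product of the transition chain and the emission chain. A minimal sketch, assuming hypothetical transition and emission values (only the 0.6 initial fair-coin probability appears in the text; the actual tables are in figures not reproduced here):

```python
init  = {"F": 0.6, "B": 0.4}                      # 0.6 is from the text
trans = {("F", "F"): 0.9, ("F", "B"): 0.1,        # hypothetical values
         ("B", "F"): 0.1, ("B", "B"): 0.9}
emit  = {("F", "H"): 0.5,  ("F", "T"): 0.5,       # hypothetical values
         ("B", "H"): 0.75, ("B", "T"): 0.25}

def joint_probability(obs, states):
    """P(O, Q) = P(Q) * P(O | Q): transitions times emissions."""
    p_states = init[states[0]]
    for prev, cur in zip(states, states[1:]):
        p_states *= trans[(prev, cur)]
    p_obs_given_states = 1.0
    for s, o in zip(states, obs):
        p_obs_given_states *= emit[(s, o)]
    return p_states * p_obs_given_states

p = joint_probability("HTTH", "FFFF")
```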
At the next observation point, the probability that a fair coin is used is calculated.
The procedure is as follows:

1. Multiply the previous probability that a fair coin was used by the transition probability from fair coin to fair coin.
2. Multiply the previous probability that a biased coin was used by the transition probability from biased coin to fair coin.
3. Sum the results of (1) and (2). This yields the probability of obtaining a fair coin from either previous state.
4. Multiply the result of (3) by the probability that the fair coin will yield a Tail on a flip. This gives the probability that the fair coin was used at the second point.

The same procedure can be used to calculate the probability that a biased coin was used:

1. Multiply the previous probability that a fair coin was used by the transition probability from fair coin to biased coin.
2. Multiply the previous probability that a biased coin was used by the transition probability from biased coin to biased coin.
3. Sum the results of (1) and (2). This yields the probability of obtaining a biased coin from either previous state.
4. Multiply the result of (3) by the probability that the biased coin will yield a Tail on a flip. This gives the probability that the biased coin was used at the second point.
This can be repeated at each observation point to yield state probabilities.
The final calculation is the sum of the final probabilities of both states. This yields the probability that the observation sequence is observed given the initial, transition, and emission probabilities.
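The forward procedure enumerated above can be sketched as follows. The 0.6 initial fair-coin probability comes from the example in the text; the transition and emission values are hypothetical stand-ins for the tables shown only as figures.

```python
init  = {"F": 0.6, "B": 0.4}                      # 0.6 is from the text
trans = {("F", "F"): 0.9, ("F", "B"): 0.1,        # hypothetical values
         ("B", "F"): 0.1, ("B", "B"): 0.9}
emit  = {("F", "H"): 0.5,  ("F", "T"): 0.5,       # hypothetical values
         ("B", "H"): 0.75, ("B", "T"): 0.25}
states = "FB"

def forward(obs):
    # Initialization: alpha_1(s) = pi(s) * b_s(o_1)
    alpha = [{s: init[s] * emit[(s, obs[0])] for s in states}]
    # Induction, mirroring steps 1-4 above:
    # alpha_t(s) = [sum_r alpha_{t-1}(r) * a(r, s)] * b_s(o_t)
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append({s: sum(prev[r] * trans[(r, s)] for r in states)
                         * emit[(s, o)]
                      for s in states})
    return alpha

alpha = forward("HTTH")
# Termination: sum the final probabilities of both states.
p_obs = sum(alpha[-1].values())
```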
We will define the current point as the last Tail in the observation sequence. The previous point will thus be the Head before it. Then, to calculate the probability that the state preceding the current one is a fair coin, we use this procedure:

1. Multiply the probability that a fair coin will flip a Tail by the probability that a fair coin will transition into a fair coin.
2. Multiply the result of (1) by the probability that the current state is a fair coin.
3. Multiply the probability that a biased coin will flip a Tail by the probability that a fair coin will transition into a biased coin.
4. Multiply the result of (3) by the probability that the current state is a biased coin.
5. Sum (2) and (4) to yield the probability that the previous state was a fair coin.

The probability that the previous state was a biased coin can be calculated similarly:

1. Multiply the probability that a fair coin will flip a Tail by the probability that a biased coin will transition into a fair coin.
2. Multiply the result of (1) by the probability that the current state is a fair coin.
3. Multiply the probability that a biased coin will flip a Tail by the probability that a biased coin will transition into a biased coin.
4. Multiply the result of (3) by the probability that the current state is a biased coin.
5. Sum (2) and (4) to yield the probability that the previous state was a biased coin.
Once again, these two calculations can be performed for the rest of the observation sequence.
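The backward recursion described above can be written compactly. As before, the transition and emission numbers are hypothetical stand-ins (only the 0.6 initial probability is stated in the text); as a check, the total observation probability computed from the backward variables should match the forward result.

```python
init  = {"F": 0.6, "B": 0.4}                      # 0.6 is from the text
trans = {("F", "F"): 0.9, ("F", "B"): 0.1,        # hypothetical values
         ("B", "F"): 0.1, ("B", "B"): 0.9}
emit  = {("F", "H"): 0.5,  ("F", "T"): 0.5,       # hypothetical values
         ("B", "H"): 0.75, ("B", "T"): 0.25}
states = "FB"

def backward(obs):
    # Initialization: beta_T(s) = 1 for every state.
    beta = [{s: 1.0 for s in states}]
    # Induction, moving right to left:
    # beta_t(s) = sum_r a(s, r) * b_r(o_{t+1}) * beta_{t+1}(r)
    for o in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, {s: sum(trans[(s, r)] * emit[(r, o)] * nxt[r]
                               for r in states)
                        for s in states})
    return beta

beta = backward("HTTH")
# Termination: P(O) = sum_s pi(s) * b_s(o_1) * beta_1(s)
p_obs = sum(init[s] * emit[(s, "H")] * beta[0][s] for s in states)
```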
Problem 2
The next problem involves choosing the most likely state sequence given the observation sequence. This can be done with the calculations from the forward/backward procedures. By running the two procedures separately and storing the probabilities at each point, the probabilistic prediction can be made at every point.
This maximizes the expected number of correctly predicted states.
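This pointwise prediction is often called posterior decoding: at each point, multiply the stored forward and backward probabilities and pick the state with the larger product. A minimal sketch under the same hypothetical parameters used earlier (only the 0.6 initial probability comes from the text):

```python
init  = {"F": 0.6, "B": 0.4}                      # 0.6 is from the text
trans = {("F", "F"): 0.9, ("F", "B"): 0.1,        # hypothetical values
         ("B", "F"): 0.1, ("B", "B"): 0.9}
emit  = {("F", "H"): 0.5,  ("F", "T"): 0.5,       # hypothetical values
         ("B", "H"): 0.75, ("B", "T"): 0.25}
states = "FB"

def forward(obs):
    alpha = [{s: init[s] * emit[(s, obs[0])] for s in states}]
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append({s: sum(prev[r] * trans[(r, s)] for r in states)
                         * emit[(s, o)] for s in states})
    return alpha

def backward(obs):
    beta = [{s: 1.0 for s in states}]
    for o in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, {s: sum(trans[(s, r)] * emit[(r, o)] * nxt[r]
                               for r in states) for s in states})
    return beta

def posterior_path(obs):
    """At each point, pick the state maximizing alpha_t(s) * beta_t(s)."""
    alpha, beta = forward(obs), backward(obs)
    return "".join(max(states, key=lambda s: a_t[s] * b_t[s])
                   for a_t, b_t in zip(alpha, beta))

path = posterior_path("HTTH")
```

Note that because each point is decoded independently, the resulting sequence can in principle contain a transition of probability zero; the Viterbi algorithm below avoids this by scoring whole paths.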
Viterbi Algorithm
The Viterbi algorithm is a dynamic programming algorithm used to find the most likely sequence of hidden states, or the Viterbi path. Similar to the forward-backward algorithm, the Viterbi algorithm uses dynamic programming: the current state's probability is calculated using the results of previous calculations. The algorithm initiates by calculating the initial probabilities of each state. We continue with the coin-flipping example, so there are two states: fair or biased coin. There can be, of course, many more states.
The newly introduced Psi variable will allow us to determine the Viterbi path, as we will soon see.
Applied to this example and to both states,
At the end, we can obtain the complete Psi series for both states:
The termination proceeds as follows:
Thus, the state sequence FFFF will be the most likely to produce the observation sequence HTTH.
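The whole procedure, including the Psi backtrace, can be sketched as follows. The 0.6 initial fair-coin probability is from the text; the transition and emission values are hypothetical stand-ins, and under them the decoded path for HTTH is indeed FFFF.

```python
init  = {"F": 0.6, "B": 0.4}                      # 0.6 is from the text
trans = {("F", "F"): 0.9, ("F", "B"): 0.1,        # hypothetical values
         ("B", "F"): 0.1, ("B", "B"): 0.9}
emit  = {("F", "H"): 0.5,  ("F", "T"): 0.5,       # hypothetical values
         ("B", "H"): 0.75, ("B", "T"): 0.25}
states = "FB"

def viterbi(obs):
    # Initialization: delta_1(s) = pi(s) * b_s(o_1).
    delta = [{s: init[s] * emit[(s, obs[0])] for s in states}]
    psi = []
    # Recursion: delta_t(s) = max_r delta_{t-1}(r) * a(r, s) * b_s(o_t),
    # with psi_t(s) recording the argmax predecessor r.
    for o in obs[1:]:
        prev = delta[-1]
        d, p = {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] * trans[(r, s)])
            p[s] = best
            d[s] = prev[best] * trans[(best, s)] * emit[(s, o)]
        delta.append(d)
        psi.append(p)
    # Termination: start from the best final state, then trace psi back.
    last = max(states, key=lambda s: delta[-1][s])
    path = [last]
    for p in reversed(psi):
        path.append(p[path[-1]])
    return "".join(reversed(path))

path = viterbi("HTTH")
```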
More complicated systems can be analyzed, of course:
Assuming that state B produces the highest probability in the last observation, we can trace a Viterbi sequence of BDDDAB.
Problem 3
The last basic problem for a Hidden Markov Model involves estimating all the parameters (transition and emission probabilities, initial probabilities, etc.) so as to maximize the probability of an observation sequence given those parameters. A method of estimating the optimal parameters is the Baum-Welch algorithm, which is basically an expectation-maximization algorithm. One begins by randomly initializing the parameters, runs the Viterbi algorithm based on the random parameters and the observation sequence, and updates the parameters to achieve a better score.
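The loop described above (random initialization, decode with Viterbi, re-estimate) is the hard-assignment variant of Baum-Welch, sometimes called Viterbi training; full Baum-Welch instead uses the forward-backward probabilities as soft counts. A minimal sketch of the hard variant, with add-one pseudocounts (an assumption here, to keep probabilities nonzero) and an arbitrary example observation sequence:

```python
import random

def viterbi(obs, states, init, trans, emit):
    """Most likely state path under the given parameters."""
    delta = [{s: init[s] * emit[(s, obs[0])] for s in states}]
    psi = []
    for o in obs[1:]:
        prev = delta[-1]
        d, p = {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] * trans[(r, s)])
            p[s] = best
            d[s] = prev[best] * trans[(best, s)] * emit[(s, o)]
        delta.append(d)
        psi.append(p)
    path = [max(states, key=lambda s: delta[-1][s])]
    for p in reversed(psi):
        path.append(p[path[-1]])
    return "".join(reversed(path))

def viterbi_train(obs, states, symbols, n_iter=20, seed=1):
    rng = random.Random(seed)

    def rand_dist(keys):
        w = [rng.random() + 0.1 for _ in keys]
        total = sum(w)
        return dict(zip(keys, (x / total for x in w)))

    # Random initialization of all parameters.
    init = rand_dist(states)
    trans, emit = {}, {}
    for s in states:
        row = rand_dist(states)
        trans.update({(s, r): row[r] for r in states})
        row = rand_dist(symbols)
        emit.update({(s, o): row[o] for o in symbols})

    for _ in range(n_iter):
        # "E-step" (hard assignment): decode the most likely path.
        path = viterbi(obs, states, init, trans, emit)
        # "M-step": re-estimate by counting along the decoded path,
        # with add-one pseudocounts so nothing collapses to zero.
        tc = {k: 1.0 for k in trans}
        ec = {k: 1.0 for k in emit}
        for a, b in zip(path, path[1:]):
            tc[(a, b)] += 1
        for s, o in zip(path, obs):
            ec[(s, o)] += 1
        trans = {(s, r): tc[(s, r)] / sum(tc[(s, q)] for q in states)
                 for s in states for r in states}
        emit = {(s, o): ec[(s, o)] / sum(ec[(s, q)] for q in symbols)
                for s in states for o in symbols}
    return init, trans, emit

init, trans, emit = viterbi_train("HTTHHTTTTH", "FB", "HT")
```

Like any EM-style procedure, this converges only to a local optimum, so it is common to restart from several random initializations and keep the best-scoring parameters.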
Additional Applications of Hidden Markov Models
- Stock Market: predicts Bear/Bull market state (hidden) based on observed stock fluctuations.
- Speech Recognition: predicts sentences and words (hidden) based on observed/perceived sounds.
- Digital Sound Processing: predicts the source signal (hidden) based on observed signal fluctuations.
- Bioinformatics: a multitude of applications, including sequence motif finding, gene prediction, genome copy number change, protein structure prediction, and protein-DNA interaction prediction.
Additional Resources
Readings

- Eddy, S.R. Hidden Markov Models. Current Opinion in Structural Biology 6:361-365. 1996. Link
- Eddy, S.R. What is a Hidden Markov Model? Nature Biotechnology 22:1315-1316. 2004. Harvard Link
- Krogh, A. An Introduction to Hidden Markov Models for Biological Sequences. Computational Methods in Molecular Biology. Edited by S.L. Salzberg, D.B. Searls, and S. Kasif. Elsevier. 1998. Link
- Rabiner, L.R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77(2), p. 257-286, February 1989. Link
- Wai-Ki Ching, Michael K. Ng. Markov Chains: Models, Algorithms and Applications. Springer-Verlag New York. 2005. Links
- Wikipedia on HMM Link
- HMMER Updated Link
- Understanding HMM from a mathematical perspective Link

References

- Stat115 Lecture (Xiaole Shirley Liu) Harvard Link
- Cartoon images from Yahoo Images