

Expectation-maximization (EM) algorithm

In statistics, an expectation-maximization (EM) algorithm is a method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. EM is an iterative method which alternates between performing an expectation (E) step, which computes the expectation of the log-likelihood evaluated using the current estimate for the latent variables, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.

             

The EM algorithm provides a framework for many problems in which we need to estimate a set of parameters \boldsymbol\theta describing a probability distribution, given only the observable portion of the full data generated from it.

The EM algorithm is an iterative algorithm consisting of two basic steps:

E step (expectation step):

Using the current estimate of the parameters, compute the expectation of the log-likelihood over the latent variables.

M step (maximization step):

Compute a better estimate of the parameters by maximizing the expectation of the log-likelihood with respect to the latent variable Y found in the E step. The newly computed parameter values replace the previous estimate, and the next iteration begins.

             

             

            觀測數(shù)據(jù):觀測到的隨機(jī)變量X的IID樣本:

            image

            缺失數(shù)據(jù):未觀測到的隱含變量(隱變量)Y的值:

            image

            完整數(shù)據(jù): 包含觀測到的隨機(jī)變量X和未觀測到的隨機(jī)變量Y的數(shù)據(jù),Z=(X,Y)

             

Likelihood function (the likelihood can be written in several equivalent forms):

L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Y}} p(\mathbf{X}, \mathbf{Y}|\boldsymbol\theta)

The log-likelihood is:

\log L(\boldsymbol\theta; \mathbf{X}) = \log \sum_{\mathbf{Y}} p(\mathbf{X}, \mathbf{Y}|\boldsymbol\theta)

E step: using the current parameter estimate \boldsymbol\theta^{(t)}, compute the expectation of the log-likelihood over the latent variable Y:

Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Y}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[\log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Y})\right] = \sum_{\mathbf{Y}} p(\mathbf{Y}|\mathbf{X}, \boldsymbol\theta^{(t)}) \log p(\mathbf{X}, \mathbf{Y}|\boldsymbol\theta)

Bayes' rule is needed here to obtain the posterior of the latent variable:

p(\mathbf{Y}|\mathbf{X}, \boldsymbol\theta^{(t)}) = \frac{p(\mathbf{X}, \mathbf{Y}|\boldsymbol\theta^{(t)})}{p(\mathbf{X}|\boldsymbol\theta^{(t)})} = \frac{p(\mathbf{X}|\mathbf{Y}, \boldsymbol\theta^{(t)})\, p(\mathbf{Y}|\boldsymbol\theta^{(t)})}{\sum_{\mathbf{Y}'} p(\mathbf{X}|\mathbf{Y}', \boldsymbol\theta^{(t)})\, p(\mathbf{Y}'|\boldsymbol\theta^{(t)})}

M step: maximize this expectation to obtain a better estimate of the parameters:

\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}}\; Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})
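As a concrete illustration of these two formulas, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture. The example, its data, and all variable names are my own illustration, not part of the original post: the E step computes exactly the Bayes posterior above, and the M step is the closed-form argmax of Q for this particular model.

    # A minimal EM sketch for a two-component 1D Gaussian mixture (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.5, 300)])

    # Initial guesses for theta = (mixing weights, means, variances).
    pi = np.array([0.5, 0.5])
    mu = np.array([-1.0, 1.0])
    var = np.array([1.0, 1.0])

    def gauss(x, mu, var):
        return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    for t in range(100):
        # E step: posterior p(y=k | x, theta^(t)) via Bayes' rule ("responsibilities").
        joint = pi * gauss(x[:, None], mu, var)          # p(x, y=k | theta^(t))
        resp = joint / joint.sum(axis=1, keepdims=True)  # normalize over k

        # M step: closed-form argmax of Q(theta | theta^(t)) for a Gaussian mixture.
        nk = resp.sum(axis=0)                            # soft counts
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    print(pi, mu, var)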

             

Wikipedia states it as follows:

Given a statistical model consisting of a set \mathbf{X} of observed data, a set of unobserved latent data or missing values \mathbf{Y}, and a vector of unknown parameters \boldsymbol\theta, along with a likelihood function L(\boldsymbol\theta; \mathbf{X}, \mathbf{Y}) = p(\mathbf{X}, \mathbf{Y}|\boldsymbol\theta), the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data:

L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Y}} p(\mathbf{X}, \mathbf{Y}|\boldsymbol\theta)

            However, this quantity is often intractable.

            The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:

            Expectation step (E-step): Calculate the expected value of the log likelihood function, with respect to the conditional distribution of Y given \mathbf{X} under the current estimate of the parameters \boldsymbol\theta^{(t)}:

Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Y}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[\log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Y})\right]

            Maximization step (M-step): Find the parameter that maximizes this quantity:
\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}}\; Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})
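One standard property of this pair of steps, not spelled out in the post but relevant to the convergence remark in the summary below: because \boldsymbol\theta^{(t+1)} maximizes Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}), each iteration can only improve (or leave unchanged) the observed-data log-likelihood, which is why EM climbs toward a local optimum:

\log p(\mathbf{X}|\boldsymbol\theta^{(t+1)}) \;\ge\; \log p(\mathbf{X}|\boldsymbol\theta^{(t)})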

            Note that in typical models to which EM is applied:

            1. The observed data points \mathbf{X} may be discrete (taking one of a fixed number of values, or taking values that must be integers) or continuous (taking a continuous range of real numbers, possibly infinite). There may in fact be a vector of observations associated with each data point.
            2. The missing values (aka latent variables) Y are discrete, drawn from a fixed number of values, and there is one latent variable per observed data point.
            3. The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. associated with all data points whose corresponding latent variable has a particular value).

            However, it is possible to apply EM to other sorts of models.

            The motivation is as follows. If we know the value of the parameters \boldsymbol\theta, we can usually find the value of the latent variables Y by maximizing the log-likelihood over all possible values of Y, either simply by iterating over Y or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables Y, we can find an estimate of the parameters \boldsymbol\theta fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both \boldsymbol\theta and Y are unknown:

            1. First, initialize the parameters \boldsymbol\theta to some random values.
            2. Compute the best value for Y given these parameter values.
            3. Then, use the just-computed values of Y to compute a better estimate for the parameters \boldsymbol\theta. Parameters associated with a particular value of Y will use only those data points whose associated latent variable has that value.
            4. Finally, iterate until convergence.

            The algorithm as just described will in fact work, and is commonly called hard EM. The K-means algorithm is an example of this class of algorithms.
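Here is a minimal sketch of this hard-EM, K-means-style variant; the data and names are illustrative choices of mine, not from the post.

    # Hard EM: assign each point to its single best latent value, then re-estimate.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])
    mu = np.array([-1.0, 1.0])          # rough initial parameters

    for t in range(50):
        # "E-like" step: hard choice of Y -- nearest mean, as in K-means.
        y = np.argmin((x[:, None] - mu) ** 2, axis=1)
        # "M-like" step: each parameter uses only the points assigned to it.
        mu = np.array([x[y == k].mean() for k in range(2)])

    print(mu)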

However, we can do somewhat better. Rather than making a hard choice for Y given the current parameter values and averaging only over the set of data points associated with a particular value of Y, we can instead determine the probability of each possible value of Y for each data point, and then use the probabilities associated with a particular value of Y to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and is the type of algorithm normally associated with EM. The counts used to compute these weighted averages are called soft counts (as opposed to the hard counts used in a hard-EM-type algorithm such as K-means). The probabilities computed for Y are posterior probabilities and are what is computed in the E-step. The soft counts used to compute new parameter values are what is computed in the M-step.
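A tiny numeric sketch of the difference between soft counts and hard counts; the responsibilities below are made up purely for illustration, as if they came from an E-step.

    import numpy as np

    x = np.array([0.0, 1.0, 4.0, 5.0])            # toy data points
    resp = np.array([[0.9, 0.1],                  # p(y=k | x_i), an illustrative posterior
                     [0.8, 0.2],
                     [0.2, 0.8],
                     [0.1, 0.9]])

    soft_counts = resp.sum(axis=0)                              # soft counts (soft EM)
    soft_means = (resp * x[:, None]).sum(axis=0) / soft_counts  # weighted over all points

    hard_y = resp.argmax(axis=1)                                # hard assignment (hard EM)
    hard_means = np.array([x[hard_y == k].mean() for k in range(2)])

    print(soft_counts, soft_means, hard_means)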

Summary:

            EM is frequently used for data clustering in machine learning and computer vision.

            EM會(huì)收斂到局部極致,但不能保證收斂到全局最優(yōu)。

            EM對初值比較敏感,通常需要一個(gè)好的,快速的初始化過程。

             

This is from my Machine Learning course, and I will stop the summary here. The next task is a write-up of GM_EM: multivariate Gaussian density estimation.

posted on 2010-10-20 14:44 by Sosi · Read (2513) · Comments (0) · Category: Courses

            統(tǒng)計(jì)系統(tǒng)
            午夜精品久久久久久久| 久久精品女人天堂AV麻| 亚洲精品午夜国产VA久久成人| 久久人人爽人人爽人人爽| 奇米影视7777久久精品| 青青青伊人色综合久久| 人妻无码精品久久亚瑟影视| 久久久久亚洲av无码专区喷水 | 国产精品无码久久四虎| 午夜视频久久久久一区 | 久久精品久久久久观看99水蜜桃| 色诱久久久久综合网ywww| segui久久国产精品| 久久人妻少妇嫩草AV蜜桃| 青青青伊人色综合久久| 狠狠色噜噜色狠狠狠综合久久| 久久国产乱子伦免费精品| 亚洲精品97久久中文字幕无码| 国产一区二区精品久久| 中文无码久久精品| 婷婷国产天堂久久综合五月| 国产成人精品久久二区二区| 亚洲国产精品久久久天堂| 久久久久婷婷| 国产精品无码久久四虎| 午夜不卡888久久| 国产精品久久午夜夜伦鲁鲁| 精品久久人人爽天天玩人人妻| 久久99精品久久久久久噜噜| 亚洲国产成人久久精品动漫| 99久久精品影院老鸭窝| 无码人妻久久一区二区三区免费| 午夜精品久久影院蜜桃| 亚洲日本va午夜中文字幕久久 | 久久婷婷五月综合色99啪ak| 久久久国产精品网站| 久久成人影院精品777| 狠狠色噜噜狠狠狠狠狠色综合久久 | 99久久99这里只有免费费精品| 777午夜精品久久av蜜臀| 久久久久亚洲AV无码观看|