Construction of the Dirichlet distribution from Gamma distributions



Let $X_1, \ldots, X_{k+1}$ be mutually independent random variables, each having a gamma distribution with parameter $\alpha_i$, $i = 1, 2, \ldots, k+1$. Show that $Y_i = X_i/(X_1 + \cdots + X_{k+1})$, $i = 1, \ldots, k$, have a joint distribution that is $\text{Dirichlet}(\alpha_1, \alpha_2, \ldots, \alpha_k; \alpha_{k+1})$.

The joint pdf of $(X_1, \ldots, X_{k+1})$ is
$$f(x_1, \ldots, x_{k+1}) = \frac{e^{-\sum_{i=1}^{k+1} x_i}\, x_1^{\alpha_1 - 1} \cdots x_{k+1}^{\alpha_{k+1} - 1}}{\Gamma(\alpha_1)\Gamma(\alpha_2) \cdots \Gamma(\alpha_{k+1})}.$$
Then, to find the joint pdf of $(Y_1, \ldots, Y_{k+1})$, I need to find the Jacobian, i.e. $J\!\left(\frac{x_1, \ldots, x_{k+1}}{y_1, \ldots, y_{k+1}}\right)$.
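As a quick empirical sanity check of the claim (not part of the original question), one can simulate independent gammas, normalize them, and compare moments against draws from a Dirichlet with the same parameters. The following is a minimal numpy sketch; the parameter values and sample size are arbitrary choices for illustration.

```python
# Minimal simulation sketch: normalized independent Gamma(alpha_i, 1) draws
# should behave like a Dirichlet(alpha_1, ..., alpha_{k+1}) sample.
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.5, 1.0, 2.0, 3.5])   # arbitrary example parameters (k + 1 = 4)
n = 200_000

x = rng.gamma(shape=alpha, scale=1.0, size=(n, alpha.size))  # independent gammas
y = x / x.sum(axis=1, keepdims=True)                         # Y_i = X_i / (X_1 + ... + X_{k+1})

y_direct = rng.dirichlet(alpha, size=n)                      # reference Dirichlet draws

print("means (construction):", y.mean(axis=0))
print("means (dirichlet)   :", y_direct.mean(axis=0))
print("theoretical means   :", alpha / alpha.sum())
print("vars  (construction):", y.var(axis=0))
print("vars  (dirichlet)   :", y_direct.var(axis=0))
```

The printed rows should agree to within Monte Carlo error.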


Have a look at pages 13-14 of this document.

@Procrastinator Thank you very much, your document is the best answer to my question.
Argha

@Procrastinator - perhaps you should put this as an answer, since the OP is happy with it, and add a couple of sentences so you don't trip the "we want more than one-sentence answer" warning?
jbowman

That document now is a non-answer because it's a 404.
whuber

Wayback machine to the rescue: pdf
mobeets

Answers:



Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a multivariate change of variable. It would seem there's nothing for it but to write down a $k+1$ by $k+1$ matrix of derivatives and do the calculation.

There's a better way. It's shown at the end in the "Solution" section. Because the purpose of this post is to introduce statisticians to what may be a new method for many, much of it is devoted to explaining the machinery behind the solution. This is the algebra of differential forms. (Differential forms are the things that one integrates in multiple dimensions.) A detailed, worked example is included to help make this become more familiar.


Background

A century ago, mathematicians developed differential algebra to work with the "higher order derivatives" that occur in multi-dimensional geometry. The determinant is a special case of the basic objects manipulated by such algebras, which typically are alternating multilinear forms. The beauty of this lies in how simple the calculations can become.

Here's all you need to know.

  1. A differential is an expression such as "$dx_i$": the symbol "$d$" combined with any variable name.

  2. A one-form is a linear combination of differentials, such as $dx_1 + dx_2$ or $x_2\,dx_1 - \exp(x_2)\,dx_2$. That is, the coefficients are functions of the variables.

  3. Forms can be multiplied using a "wedge" product, written $\wedge$. This product is anti-commutative (also called alternating): for any two one-forms $\omega$ and $\eta$,

    $$\omega \wedge \eta = -\eta \wedge \omega.$$

    This multiplication is linear and associative: in other words, it works in the familiar fashion. An immediate consequence is that $\omega \wedge \omega = -\omega \wedge \omega$, implying the square of any one-form is always zero. That makes multiplication extremely easy!

  4. For the purposes of manipulating the integrands that appear in probability calculations, an expression like $dx_1\,dx_2 \cdots dx_{k+1}$ can be understood as $|dx_1 \wedge dx_2 \wedge \cdots \wedge dx_{k+1}|$.

  5. When $y = g(x_1, \ldots, x_n)$ is a function, then its differential is given by differentiation:

    $$dy = dg(x_1, \ldots, x_n) = \frac{\partial g}{\partial x_1}(x_1, \ldots, x_n)\,dx_1 + \cdots + \frac{\partial g}{\partial x_n}(x_1, \ldots, x_n)\,dx_n.$$

The connection with Jacobians is this: the Jacobian of a transformation $(y_1, \ldots, y_n) = F(x_1, \ldots, x_n) = (f_1(x_1, \ldots, x_n), \ldots, f_n(x_1, \ldots, x_n))$ is, up to sign, simply the coefficient of $dx_1 \wedge \cdots \wedge dx_n$ that appears in computing

$$dy_1 \wedge \cdots \wedge dy_n = df_1(x_1, \ldots, x_n) \wedge \cdots \wedge df_n(x_1, \ldots, x_n)$$

after expanding each of the $df_i$ as a linear combination of the $dx_j$ in rule (5).


Example

The simplicity of this definition of a Jacobian is appealing. Not yet convinced it's worthwhile? Consider the well-known problem of converting two-dimensional integrals from Cartesian coordinates $(x, y)$ to polar coordinates $(r, \theta)$, where $(x, y) = (r\cos(\theta), r\sin(\theta))$. The following is an utterly mechanical application of the preceding rules, where "$(\cdots)$" is used to abbreviate expressions that will obviously disappear by virtue of rule (3), which implies $dr \wedge dr = d\theta \wedge d\theta = 0$.

$$
\begin{aligned}
dx\,dy &= |dx \wedge dy| = |d(r\cos(\theta)) \wedge d(r\sin(\theta))| \\
&= |(\cos(\theta)\,dr - r\sin(\theta)\,d\theta) \wedge (\sin(\theta)\,dr + r\cos(\theta)\,d\theta)| \\
&= |(\cdots)\,dr \wedge dr + (\cdots)\,d\theta \wedge d\theta - r\sin(\theta)\,d\theta \wedge \sin(\theta)\,dr + \cos(\theta)\,dr \wedge r\cos(\theta)\,d\theta| \\
&= |0 + 0 + r\sin^2(\theta)\,dr \wedge d\theta + r\cos^2(\theta)\,dr \wedge d\theta| \\
&= |r(\sin^2(\theta) + \cos^2(\theta))\,dr \wedge d\theta| \\
&= r\,dr\,d\theta.
\end{aligned}
$$

The point of this is the ease with which such calculations can be performed, without messing about with matrices, determinants, or other such multi-indicial objects. You just multiply things out, remembering that wedges are anti-commutative. It's easier than what is taught in high school algebra.
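For readers who want to see these rules executed by a machine, here is a minimal Python/sympy sketch (not part of the original answer) of the wedge algebra: forms are stored as dictionaries from sorted tuples of coordinate names to coefficients. The helper names `d`, `wedge`, and `_sort_with_sign` are ad hoc choices for illustration, not a standard API.

```python
# A minimal sketch of the alternating (wedge) algebra described above.
# A form of degree p is a dict mapping a sorted tuple of p coordinate names
# to a sympy coefficient.
import sympy as sp

def _sort_with_sign(names):
    """Sort basis differentials, tracking the sign of the permutation;
    return (None, 0) if any differential repeats (its square is zero, rule 3)."""
    names = list(names)
    sign = 1
    for i in range(len(names)):                  # bubble sort, counting transpositions
        for j in range(len(names) - 1 - i):
            if names[j] > names[j + 1]:
                names[j], names[j + 1] = names[j + 1], names[j]
                sign = -sign
    if len(set(names)) != len(names):
        return None, 0
    return tuple(names), sign

def wedge(a, b):
    """Wedge product of two forms: anti-commutative and bilinear (rule 3)."""
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            key, sign = _sort_with_sign(ka + kb)
            if sign == 0:
                continue                          # repeated differential: term vanishes
            out[key] = sp.simplify(out.get(key, 0) + sign * ca * cb)
    return {k: v for k, v in out.items() if v != 0}

def d(expr, variables):
    """Differential of a function (rule 5): d g = sum_i (dg/dx_i) dx_i."""
    return {(str(v),): sp.diff(expr, v) for v in variables if sp.diff(expr, v) != 0}

# Usage: the polar-coordinates example, dx ^ dy = r dr ^ dtheta.
r, theta = sp.symbols('r theta', positive=True)
dx = d(r * sp.cos(theta), [r, theta])
dy = d(r * sp.sin(theta), [r, theta])
print(wedge(dx, dy))   # expect {('r', 'theta'): r}
```

Run on the polar-coordinates example, it reproduces the coefficient $r$ obtained above.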


Preliminaries

Let's see this differential algebra in action. In this problem, the PDF of the joint distribution of $(X_1, X_2, \ldots, X_{k+1})$ is the product of the individual PDFs (because the $X_i$ are assumed to be independent). In order to handle the change to the variables $Y_i$ we must be explicit about the differential elements that will be integrated. These form the term $dx_1\,dx_2 \cdots dx_{k+1}$. Including the PDF gives the probability element

$$
\begin{aligned}
f_X(x, \alpha)\,dx_1 \cdots dx_{k+1} &\propto \left(x_1^{\alpha_1 - 1}\exp(-x_1)\right) \cdots \left(x_{k+1}^{\alpha_{k+1} - 1}\exp(-x_{k+1})\right)\,dx_1 \cdots dx_{k+1} \\
&= x_1^{\alpha_1 - 1} \cdots x_{k+1}^{\alpha_{k+1} - 1}\exp\!\left(-(x_1 + \cdots + x_{k+1})\right)\,dx_1 \cdots dx_{k+1}.
\end{aligned}
$$

(The normalizing constant has been ignored; it will be recovered at the end.)

Staring at the definitions of the $Y_i$ a few seconds ought to reveal the utility of introducing the new variable

$$Z = X_1 + X_2 + \cdots + X_{k+1},$$

giving the relationships

$$X_i = Y_i Z.$$

This suggests making the change of variables $x_i \to y_i z$ in the probability element. The intention is to retain the first $k$ variables $y_1, \ldots, y_k$ along with $z$ and then integrate out $z$. To do so, we have to re-express all the $dx_i$ in terms of the new variables. This is the heart of the problem. It's where the differential algebra takes place. To begin with,

$$dx_i = d(y_i z) = y_i\,dz + z\,dy_i.$$

Note that since $Y_1 + Y_2 + \cdots + Y_{k+1} = 1$, then

$$0 = d(1) = d(y_1 + y_2 + \cdots + y_{k+1}) = dy_1 + dy_2 + \cdots + dy_{k+1}.$$

Consider the one-form

$$\omega = dx_1 + \cdots + dx_k = z(dy_1 + \cdots + dy_k) + (y_1 + \cdots + y_k)\,dz.$$

It appears in the differential of the last variable:

$$dx_{k+1} = z\,dy_{k+1} + y_{k+1}\,dz = -z(dy_1 + \cdots + dy_k) + (1 - y_1 - \cdots - y_k)\,dz = dz - \omega.$$

The value of this lies in the observation that

$$dx_1 \wedge \cdots \wedge dx_k \wedge \omega = 0$$

because, when you expand this product, there is one term containing $dx_1 \wedge dx_1 = 0$ as a factor, another containing $dx_2 \wedge dx_2 = 0$, and so on: they all disappear. Consequently,

$$dx_1 \wedge \cdots \wedge dx_k \wedge dx_{k+1} = dx_1 \wedge \cdots \wedge dx_k \wedge dz - dx_1 \wedge \cdots \wedge dx_k \wedge \omega = dx_1 \wedge \cdots \wedge dx_k \wedge dz.$$

Whence (because all products $dz \wedge dz$ disappear),

$$dx_1 \wedge \cdots \wedge dx_{k+1} = (z\,dy_1 + y_1\,dz) \wedge \cdots \wedge (z\,dy_k + y_k\,dz) \wedge dz = z^k\,dy_1 \wedge \cdots \wedge dy_k \wedge dz.$$

The Jacobian is simply $|z^k| = z^k$, the coefficient of the differential product on the right hand side.
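As a cross-check (not in the original answer), one can ask sympy for the determinant of the classical matrix of partial derivatives of the inverse map $x_i = y_i z$, $x_{k+1} = z(1 - y_1 - \cdots - y_k)$ for a small fixed $k$ and confirm it reduces to $z^k$; the sketch below uses $k = 3$.

```python
# Cross-check: the classical Jacobian determinant of the inverse map
# x_i = y_i * z (i = 1..k), x_{k+1} = z * (1 - y_1 - ... - y_k)
# should equal z**k. Done here symbolically for k = 3.
import sympy as sp

k = 3
ys = sp.symbols(f'y1:{k + 1}', positive=True)          # y1, y2, y3
z = sp.symbols('z', positive=True)

xs = [yi * z for yi in ys] + [z * (1 - sum(ys))]       # the inverse transformation
J = sp.Matrix(xs).jacobian(list(ys) + [z])             # (k+1) x (k+1) matrix of partials
det = sp.simplify(J.det())

print(det)                                             # prints z**3
assert sp.simplify(det - z**k) == 0
```

Re-running with other small values of `k` gives `z**k` in the same way.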


Solution

The transformation $(x_1, \ldots, x_k, x_{k+1}) \to (y_1, \ldots, y_k, z)$ is one-to-one: its inverse is given by $x_i = y_i z$ for $1 \le i \le k$ and $x_{k+1} = z(1 - y_1 - \cdots - y_k)$. Therefore we don't have to fuss any more about the new probability element; it simply is

$$
\begin{aligned}
&(z y_1)^{\alpha_1 - 1} \cdots (z y_k)^{\alpha_k - 1}\left(z(1 - y_1 - \cdots - y_k)\right)^{\alpha_{k+1} - 1}\exp(-z)\,\left|z^k\,dy_1 \wedge \cdots \wedge dy_k \wedge dz\right| \\
&\quad = \left(z^{\alpha_1 + \cdots + \alpha_{k+1} - 1}\exp(-z)\,dz\right)\left(y_1^{\alpha_1 - 1} \cdots y_k^{\alpha_k - 1}(1 - y_1 - \cdots - y_k)^{\alpha_{k+1} - 1}\,dy_1 \cdots dy_k\right).
\end{aligned}
$$

That is manifestly a product of a $\text{Gamma}(\alpha_1 + \cdots + \alpha_{k+1})$ distribution (for $Z$) and a $\text{Dirichlet}(\alpha)$ distribution (for $(Y_1, \ldots, Y_k)$). In fact, since the original normalizing constant must have been a product of $\Gamma(\alpha_i)$, we deduce immediately that the new normalizing constant must be divided by $\Gamma(\alpha_1 + \cdots + \alpha_{k+1})$, enabling the PDF to be written

$$f_Y(y, \alpha) = \frac{\Gamma(\alpha_1 + \cdots + \alpha_{k+1})}{\Gamma(\alpha_1) \cdots \Gamma(\alpha_{k+1})}\left(y_1^{\alpha_1 - 1} \cdots y_k^{\alpha_k - 1}(1 - y_1 - \cdots - y_k)^{\alpha_{k+1} - 1}\right).$$
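As a final numerical sanity check (not part of the original answer), the closed-form density above can be compared with `scipy.stats.dirichlet.pdf` at a single point; the parameters and the test point below are illustrative choices.

```python
# Compare the derived Dirichlet density with scipy's implementation at one point.
import numpy as np
from scipy.stats import dirichlet
from scipy.special import gammaln

alpha = np.array([0.5, 1.0, 2.0, 3.5])       # example parameters, k + 1 = 4
y = np.array([0.1, 0.2, 0.3])                # a point with y_1 + ... + y_k < 1
y_full = np.append(y, 1.0 - y.sum())         # last coordinate is 1 - sum(y)

# Density from the formula derived above (computed on the log scale for stability).
log_norm = gammaln(alpha.sum()) - gammaln(alpha).sum()
log_kernel = np.sum((alpha - 1.0) * np.log(y_full))
f_manual = np.exp(log_norm + log_kernel)

f_scipy = dirichlet.pdf(y_full, alpha)

print(f_manual, f_scipy)                     # the two values should agree
```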
Licensed under cc by-sa 3.0 with attribution required.