Let $X_1, X_2, \ldots, X_{k+1}$ be mutually independent random variables, each having a Gamma distribution with parameter $\alpha_i$, $i = 1, 2, \ldots, k+1$. Show that
$$Y_i = \frac{X_i}{X_1 + X_2 + \cdots + X_{k+1}}, \quad i = 1, \ldots, k,$$
have a joint distribution that is $\text{Dirichlet}(\alpha_1, \alpha_2, \ldots, \alpha_k; \alpha_{k+1})$.

The joint PDF of $(X_1, \ldots, X_{k+1})$ is proportional to
$$e^{-\sum_{i=1}^{k+1} x_i}\, x_1^{\alpha_1 - 1} \cdots x_{k+1}^{\alpha_{k+1} - 1}.$$
To find the joint PDF of $(Y_1, \ldots, Y_{k+1})$ from this, I need to find the Jacobian, i.e. $J(x_1, \ldots, x_{k+1})$.
Answer:
Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a multivariate change of variable. It would seem there's nothing for it but to write down a $(k+1)$ by $(k+1)$ matrix of derivatives and do the calculation.
There's a better way. It's shown at the end in the "Solution" section. Because the purpose of this post is to introduce statisticians to what may be a new method for many, much of it is devoted to explaining the machinery behind the solution: the algebra of differential forms. (Differential forms are the things that one integrates in multiple dimensions.) A detailed, worked example is included to help make this machinery more familiar.
Over a century ago, mathematicians developed the theory of differential algebra to work with the "higher order derivatives" that occur in multi-dimensional geometry. The determinant is a special case of the basic objects manipulated by such algebras, which typically are alternating multilinear forms. The beauty of this lies in how simple the calculations can become.
Here's all you need to know.
1. A differential is an expression of the form "$dx_i$" with any variable name.

2. A one-form is a linear combination of differentials, such as $dx_1 + dx_2$ or even $x_2\,dx_1 - \exp(x_2)\,dx_2$. That is, the coefficients are functions of the variables.

3. Forms can be multiplied using a wedge product, written "$\wedge$". This product is anti-commutative (also called alternating): for any two one-forms $\omega$ and $\eta$,
$$\omega \wedge \eta = -\eta \wedge \omega.$$
This multiplication is linear and associative: in other words, it works in the familiar fashion. An immediate consequence is that $\omega \wedge \omega = -\omega \wedge \omega$, implying the square of any one-form is always zero. That makes multiplication extremely easy!

4. For the purposes of manipulating the integrands that appear in probability calculations, an expression like $dx_1\,dx_2 \cdots dx_{k+1}$ can be understood as $|dx_1 \wedge dx_2 \wedge \cdots \wedge dx_{k+1}|$.

5. When $y = f(x_1, \ldots, x_n)$ is a function, then its differential is given by differentiation:
$$dy = df = \frac{\partial f}{\partial x_1}\,dx_1 + \cdots + \frac{\partial f}{\partial x_n}\,dx_n.$$
The connection with Jacobians is this: the Jacobian of a transformation $(y_1, \ldots, y_n) = F(x_1, \ldots, x_n)$ is, up to sign, simply the coefficient of $dx_1 \wedge \cdots \wedge dx_n$ that appears in computing
$$dy_1 \wedge dy_2 \wedge \cdots \wedge dy_n$$
after expanding each of the $dy_i$ as a linear combination of the $dx_j$ in rule (5).
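As a quick sanity check of this definition (my own illustration, not part of the original argument): expanding the $n$-fold wedge by anti-commutativity produces a signed sum over permutations, which is exactly the Leibniz formula for the determinant. The helper name `wedge_coefficient` and the example map are assumptions for this sketch.

```python
import itertools
import sympy as sp

def wedge_coefficient(fs, xs):
    """Coefficient of dx_1 ^ ... ^ dx_n in df_1 ^ ... ^ df_n.

    Each df_i is expanded by rule (5) as sum_j (d f_i / d x_j) dx_j;
    anti-commutativity, rule (3), reduces the n-fold wedge to a signed
    sum over permutations -- the Leibniz formula for the determinant.
    """
    n = len(xs)

    def sign(p):  # parity of a permutation via its inversion count
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        return -1 if inversions % 2 else 1

    return sp.simplify(sum(
        sign(p) * sp.Mul(*[sp.diff(fs[i], xs[p[i]]) for i in range(n)])
        for p in itertools.permutations(range(n))
    ))

u, v = sp.symbols('u v')
# For the map (y1, y2) = (u + v, u*v):
print(wedge_coefficient([u + v, u*v], [u, v]))          # u - v
print(sp.Matrix([u + v, u*v]).jacobian([u, v]).det())   # u - v, same thing
```

The wedge expansion and the matrix determinant agree, as the definition above promises.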
The simplicity of this definition of a Jacobian is appealing. Not yet convinced it's worthwhile? Consider the well-known problem of converting two-dimensional integrals from Cartesian coordinates $(x, y)$ to polar coordinates $(r, \theta)$, where $(x, y) = (r\cos\theta, r\sin\theta)$. The following is an utterly mechanical application of the preceding rules, where "$(*)$" is used to abbreviate expressions that will obviously disappear by virtue of rule (3), which implies $dr \wedge dr = d\theta \wedge d\theta = 0$:
$$\begin{aligned}
dx\,dy &= |dx \wedge dy| = |d(r\cos\theta) \wedge d(r\sin\theta)| \\
&= |(\cos\theta\,dr - r\sin\theta\,d\theta) \wedge (\sin\theta\,dr + r\cos\theta\,d\theta)| \\
&= |(*)\,dr \wedge dr + (*)\,d\theta \wedge d\theta + r\cos^2\theta\,dr \wedge d\theta - r\sin^2\theta\,d\theta \wedge dr| \\
&= |r(\cos^2\theta + \sin^2\theta)\,dr \wedge d\theta| \\
&= r\,dr\,d\theta.
\end{aligned}$$
The point of this is the ease with which such calculations can be performed, without messing about with matrices, determinants, or other such multi-indicial objects. You just multiply things out, remembering that wedges are anti-commutative. It's easier than what is taught in high school algebra.
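If you want to watch the mechanics run without pencil and paper, the two-dimensional wedge takes only a few lines of SymPy (a sketch; the helper `wedge2` is my own naming):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# A one-form a*dr + b*dtheta is stored as the pair (a, b).
# In 2D, wedging two one-forms leaves a single coefficient of
# dr ^ dtheta, namely a1*b2 - a2*b1: the dr^dr and dtheta^dtheta
# terms vanish, and dtheta^dr = -dr^dtheta (rule 3).
def wedge2(form1, form2):
    a1, b1 = form1
    a2, b2 = form2
    return sp.simplify(a1*b2 - a2*b1)

# dx = d(r cos t) and dy = d(r sin t), expanded by rule (5):
dx = (sp.cos(th), -r*sp.sin(th))   # cos(t) dr - r sin(t) dtheta
dy = (sp.sin(th),  r*sp.cos(th))   # sin(t) dr + r cos(t) dtheta

print(wedge2(dx, dy))  # r
```

Swapping the arguments flips the sign, exactly as anti-commutativity demands.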
**Solution**

Let's see this differential algebra in action. In this problem, the PDF of the joint distribution of $(X_1, X_2, \ldots, X_{k+1})$ is the product of the individual PDFs (because the $X_i$ are assumed to be independent). In order to handle the change to the variables $Y_i$, we must be explicit about the differential elements that will be integrated. These form the term $dx_1\,dx_2 \cdots dx_{k+1}$. Including the PDF gives the probability element
$$e^{-(x_1 + \cdots + x_{k+1})}\, x_1^{\alpha_1 - 1} \cdots x_{k+1}^{\alpha_{k+1} - 1}\, dx_1 \cdots dx_{k+1}.$$
(The normalizing constant has been ignored; it will be recovered at the end.)
Staring at the definitions of the $Y_i$ a few seconds ought to reveal the utility of introducing the new variable
$$Z = X_1 + X_2 + \cdots + X_{k+1},$$
giving the relationships
$$X_i = Y_i Z, \quad i = 1, 2, \ldots, k+1.$$
This suggests making the change of variables $x_i \to y_i z$ in the probability element. The intention is to retain the first $k$ variables $y_1, \ldots, y_k$ along with $z$ and then integrate out $z$. To do so, we have to re-express all the $dx_i$ in terms of the new variables. This is the heart of the problem. It's where the differential algebra takes place. To begin with,
$$dx_i = d(y_i z) = y_i\,dz + z\,dy_i.$$
Note that since $y_1 + y_2 + \cdots + y_{k+1} = 1$, then
$$0 = d(1) = d(y_1 + y_2 + \cdots + y_{k+1}) = dy_1 + dy_2 + \cdots + dy_{k+1}.$$
Consider the one-form
$$\omega = dy_1 + \cdots + dy_k = -dy_{k+1}.$$
It appears in the differential of the last variable:
$$dx_{k+1} = y_{k+1}\,dz + z\,dy_{k+1} = y_{k+1}\,dz - z\,\omega.$$
The value of this lies in the observation that
$$dy_1 \wedge \cdots \wedge dy_k \wedge \omega = 0$$
because, when you expand this product, there is one term containing $dy_1 \wedge dy_1 = 0$ as a factor, another containing $dy_2 \wedge dy_2 = 0$, and so on: they all disappear. Consequently,
$$dx_1 \wedge \cdots \wedge dx_k \wedge dx_{k+1} = (y_1\,dz + z\,dy_1) \wedge \cdots \wedge (y_k\,dz + z\,dy_k) \wedge (y_{k+1}\,dz - z\,\omega).$$
Whence (because all products $dz \wedge dz$ disappear, and the term free of $dz$ contains $dy_1 \wedge \cdots \wedge dy_k \wedge \omega = 0$),
$$dx_1 \wedge \cdots \wedge dx_{k+1} = z^k\left(y_{k+1} + y_1 + \cdots + y_k\right) dy_1 \wedge \cdots \wedge dy_k \wedge dz = z^k\, dy_1 \wedge \cdots \wedge dy_k \wedge dz,$$
since $y_1 + \cdots + y_{k+1} = 1$.
The Jacobian is simply $|z^k| = z^k$, the coefficient of the differential product on the right hand side.
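This coefficient is easy to confirm against the brute-force determinant of the $(k+1) \times (k+1)$ matrix of partial derivatives for a small $k$ (a sketch with $k = 3$; the symbol names and variable ordering are my choices):

```python
import sympy as sp

k = 3
z = sp.symbols('z', positive=True)
ys = sp.symbols('y1:4', positive=True)     # y1, y2, y3

# Inverse transformation: x_i = y_i * z for i <= k,
# and x_{k+1} = z * (1 - y_1 - ... - y_k).
xs = [yi * z for yi in ys] + [z * (1 - sum(ys))]

# Brute-force (k+1) x (k+1) Jacobian determinant w.r.t. (y1,...,yk,z):
J = sp.Matrix(xs).jacobian(list(ys) + [z]).det()
print(sp.simplify(J))   # z**3, i.e. z^k
```

The determinant collapses to $z^k$ with no dependence on the $y_i$, matching the wedge calculation.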
The transformation $(x_1, \ldots, x_{k+1}) \to (y_1, \ldots, y_k, z)$ is one-to-one: its inverse is given by $x_i = y_i z$ for $1 \le i \le k$ and $x_{k+1} = z(1 - y_1 - \cdots - y_k)$. Therefore we don't have to fuss any more about the new probability element; it simply is
$$e^{-z}\,(y_1 z)^{\alpha_1 - 1} \cdots (y_k z)^{\alpha_k - 1}\left(z(1 - y_1 - \cdots - y_k)\right)^{\alpha_{k+1} - 1}\, z^k\, dy_1 \cdots dy_k\, dz \\
= \left(z^{\alpha_1 + \cdots + \alpha_{k+1} - 1}\, e^{-z}\, dz\right)\left(y_1^{\alpha_1 - 1} \cdots y_k^{\alpha_k - 1}\,(1 - y_1 - \cdots - y_k)^{\alpha_{k+1} - 1}\, dy_1 \cdots dy_k\right).$$
That is manifestly a product of a Gamma$(\alpha_1 + \cdots + \alpha_{k+1})$ distribution (for $Z$) and a Dirichlet$(\alpha_1, \ldots, \alpha_k; \alpha_{k+1})$ distribution (for $(Y_1, \ldots, Y_k)$). In fact, since the original normalizing constant must have been the product of the $\Gamma(\alpha_i)$, $i = 1, \ldots, k+1$, we deduce immediately that the new normalizing constant is that product divided by $\Gamma(\alpha_1 + \cdots + \alpha_{k+1})$, enabling the PDF to be written
$$f_{\mathbf Y}(y_1, \ldots, y_k) = \frac{\Gamma(\alpha_1 + \cdots + \alpha_{k+1})}{\Gamma(\alpha_1) \cdots \Gamma(\alpha_{k+1})}\; y_1^{\alpha_1 - 1} \cdots y_k^{\alpha_k - 1}\,(1 - y_1 - \cdots - y_k)^{\alpha_{k+1} - 1}.$$
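The conclusion can also be checked empirically: normalizing independent Gamma draws should reproduce Dirichlet statistics (a simulation sketch; the parameter values and seed are arbitrary). This normalized-gamma construction is in fact a standard way Dirichlet samplers are implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 5.0])   # k = 2, so alpha_{k+1} = 5
n = 200_000

# Sample independent Gamma(alpha_i, 1) variables and normalize by their sum.
X = rng.gamma(shape=alpha, size=(n, len(alpha)))
Y = X / X.sum(axis=1, keepdims=True)

# Compare with draws from the Dirichlet distribution itself.
D = rng.dirichlet(alpha, size=n)

print(Y.mean(axis=0))  # close to alpha / alpha.sum() = [0.2, 0.3, 0.5]
print(D.mean(axis=0))  # likewise
```

Both sets of means agree with the theoretical Dirichlet means $\alpha_i / \sum_j \alpha_j$ to within Monte Carlo error.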