The joint cumulative distribution function of the minimum $X_{(1)}$ and maximum $X_{(n)}$ of a sample of size $n$ from a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$ is
$$\begin{aligned}
F(x_{(1)}, x_{(n)}; \mu, \sigma) &= \Pr\left(X_{(1)} < x_{(1)},\, X_{(n)} < x_{(n)}\right) \\
&= \Pr\left(X_{(n)} < x_{(n)}\right) - \Pr\left(X_{(1)} > x_{(1)},\, X_{(n)} < x_{(n)}\right) \\
&= \Phi\left(\frac{x_{(n)}-\mu}{\sigma}\right)^n - \left[\Phi\left(\frac{x_{(n)}-\mu}{\sigma}\right) - \Phi\left(\frac{x_{(1)}-\mu}{\sigma}\right)\right]^n
\end{aligned}$$
where $\Phi(\cdot)$ is the standard Gaussian CDF. Differentiating with respect to $x_{(1)}$ and $x_{(n)}$ gives the joint probability density function
$$f(x_{(1)}, x_{(n)}; \mu, \sigma) = n(n-1)\left[\Phi\left(\frac{x_{(n)}-\mu}{\sigma}\right) - \Phi\left(\frac{x_{(1)}-\mu}{\sigma}\right)\right]^{n-2} \cdot \phi\left(\frac{x_{(n)}-\mu}{\sigma}\right) \cdot \phi\left(\frac{x_{(1)}-\mu}{\sigma}\right) \cdot \frac{1}{\sigma^2}$$
where $\phi(\cdot)$ is the standard Gaussian PDF. Taking logs and dropping terms that do not contain the parameters gives the log-likelihood function
$$\ell(\mu, \sigma; x_{(1)}, x_{(n)}) = (n-2)\log\left[\Phi\left(\frac{x_{(n)}-\mu}{\sigma}\right) - \Phi\left(\frac{x_{(1)}-\mu}{\sigma}\right)\right] + \log\phi\left(\frac{x_{(n)}-\mu}{\sigma}\right) + \log\phi\left(\frac{x_{(1)}-\mu}{\sigma}\right) - 2\log\sigma$$
This doesn't look very tractable, but it's easy to see that, whatever the value of $\sigma$, it is maximized by setting $\mu=\hat\mu=\frac{x_{(n)}+x_{(1)}}{2}$, i.e. the midpoint: the first term is maximized when the argument of one CDF is the negative of the argument of the other; the second & third terms represent the joint likelihood of two independent normal variates.
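As a quick numerical check of the midpoint claim (a sketch in Python rather than R, with made-up values of $x_{(1)}$, $x_{(n)}$ & $n$), maximizing the full log-likelihood jointly over both parameters does indeed put $\hat\mu$ at the midpoint:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Made-up data: observed minimum, maximum & sample size
x1, xn, n = 2.0, 8.0, 20

def neg_loglik(params):
    """Negative log-likelihood l(mu, sigma; x(1), x(n)); sigma is on the
    log scale so the optimizer cannot wander into sigma <= 0."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z1, zn = (x1 - mu) / sigma, (xn - mu) / sigma
    return -((n - 2) * np.log(norm.cdf(zn) - norm.cdf(z1))
             + norm.logpdf(zn) + norm.logpdf(z1) - 2 * np.log(sigma))

res = minimize(neg_loglik, x0=[4.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat)  # agrees with the midpoint (x1 + xn) / 2 = 5
```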
Substituting $\hat\mu$ into the log-likelihood & writing $r=x_{(n)}-x_{(1)}$ gives
$$\ell(\sigma; x_{(1)}, x_{(n)}, \hat\mu) = (n-2)\log\left[1 - 2\Phi\left(\frac{-r}{2\sigma}\right)\right] - \frac{r^2}{4\sigma^2} - 2\log\sigma$$
This expression has to be maximized numerically (e.g. with `optimize` from R's `stats` package) to find $\hat\sigma$. (It turns out that $\hat\sigma=k(n)\cdot r$, where $k$ is a constant depending only on $n$; perhaps someone more mathematically adroit than I could show why.)
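That $\hat\sigma=k(n)\cdot r$ is easy to verify numerically: the fitted ratio $\hat\sigma/r$ comes out the same for different ranges $r$ at fixed $n$. A sketch in Python, with SciPy's `minimize_scalar` standing in for R's `optimize` (the values of $r$ & $n$ are arbitrary):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def sigma_hat(r, n):
    """Maximize the profile log-likelihood l(sigma; r) numerically."""
    def neg_profile(sigma):
        return -((n - 2) * np.log(1 - 2 * norm.cdf(-r / (2 * sigma)))
                 - r**2 / (4 * sigma**2) - 2 * np.log(sigma))
    return minimize_scalar(neg_profile, bounds=(1e-3 * r, 10 * r),
                           method="bounded").x

# The ratio sigma_hat / r is the same for different r at fixed n,
# consistent with sigma_hat = k(n) * r:
k1 = sigma_hat(1.0, 20) / 1.0
k2 = sigma_hat(5.0, 20) / 5.0
print(k1, k2)  # two (nearly) identical values of k(20)
```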
Estimates are no use without an accompanying measure of precision. The observed Fisher information can be evaluated numerically (e.g. with `hessian` from R's `numDeriv` package) & used to calculate approximate standard errors:
$$I(\mu) = -\left.\frac{\partial^2 \ell(\mu;\hat\sigma)}{\partial\mu^2}\right|_{\mu=\hat\mu}$$
$$I(\sigma) = -\left.\frac{\partial^2 \ell(\sigma;\hat\mu)}{\partial\sigma^2}\right|_{\sigma=\hat\sigma}$$
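Since each information term involves only a single parameter, a simple central finite difference can stand in for `numDeriv::hessian`. A Python sketch, with made-up values of $x_{(1)}$, $x_{(n)}$ & $n$:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Made-up data: observed minimum, maximum & sample size
x1, xn, n = 2.0, 8.0, 20
mu_hat = (x1 + xn) / 2          # midpoint estimate of mu
r = xn - x1

def loglik(mu, sigma):
    """Log-likelihood l(mu, sigma; x(1), x(n)) from the derivation above."""
    z1, zn = (x1 - mu) / sigma, (xn - mu) / sigma
    return ((n - 2) * np.log(norm.cdf(zn) - norm.cdf(z1))
            + norm.logpdf(zn) + norm.logpdf(z1) - 2 * np.log(sigma))

sigma_hat = minimize_scalar(lambda s: -loglik(mu_hat, s),
                            bounds=(1e-3 * r, 10 * r), method="bounded").x

def obs_info(f, x0, h=1e-3):
    """Observed information: minus the second derivative of f at x0,
    approximated by a central finite difference."""
    return -(f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2

se_mu = 1 / np.sqrt(obs_info(lambda m: loglik(m, sigma_hat), mu_hat))
se_sigma = 1 / np.sqrt(obs_info(lambda s: loglik(mu_hat, s), sigma_hat))
print(se_mu, se_sigma)  # approximate standard errors
```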
It would be interesting to compare the likelihood & the method-of-moments estimates for σ in terms of bias (is the MLE consistent?), variance, & mean-square error. There's also the issue of estimation for those groups where the sample mean is known in addition to the minimum & maximum.