Answer:
One can calculate the size of any (deterministic) test based on comparing one letter value to another, under the null hypothesis that the two samples are obtained randomly and independently from a common distribution. Some of these tests appear to have reasonable power to detect differences in the distributions.
The original definition of the 5-letter summary of an ordered batch of numbers x_1 ≤ x_2 ≤ ⋯ ≤ x_n is the following [Tukey, EDA 1977]:
For any half-integer depth k + 1/2 (with 1 ≤ k < n), define x_{k+1/2} = (x_k + x_{k+1})/2.

Let i = (n + 1)/2, the depth of the median.

Let j = (⌊i⌋ + 1)/2, the depth of the hinges.

The 5-letter summary is the set {X− = x_1, H− = x_j, M = x_i, H+ = x_{n+1−j}, X+ = x_n}. Its elements are known as the minimum, lower hinge, median, upper hinge, and maximum, respectively.

For example, in the batch of data (1, 2, 4, 4, 5, 7, 8, 9, 12) we can compute n = 9, i = 5, and j = 3, whence X− = 1, H− = x_3 = 4, M = x_5 = 5, H+ = x_7 = 8, and X+ = x_9 = 12.
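As a concrete sketch of this definition, here is a small Python implementation (the function name is my own; the convention mirrors R's fivenum, with half-integer depths resolved by averaging the two adjacent order statistics):

```python
from math import floor

def five_letter_summary(batch):
    """Tukey's 5-letter summary: minimum, lower hinge, median, upper hinge, maximum."""
    x = sorted(batch)
    n = len(x)

    def at_depth(d):
        # depth d counts from 1; a half-integer depth averages two neighbors
        i = floor(d)
        if d == i:
            return x[i - 1]
        return (x[i - 1] + x[i]) / 2

    i = (n + 1) / 2            # depth of the median
    j = (floor(i) + 1) / 2     # depth of the hinges
    return [x[0], at_depth(j), at_depth(i), at_depth(n + 1 - j), x[-1]]
```

For an even batch size such as n = 8, the hinges and median fall at half-integer depths (1, 2.5, 4.5, 6.5, 8), so the summary averages adjacent order statistics, exactly as fivenum does.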
The hinges are close to (but not usually exactly equal to) the quartiles. If quartiles are used instead, note that in general each will be a weighted arithmetic mean of two of the order statistics, and thereby will lie within one of the intervals [x_i, x_{i+1}], where i can be determined from n and the algorithm used to compute the quartiles. In general, when q lies in the interval [i, i+1], I will loosely write x_q to refer to some such weighted average of x_i and x_{i+1}.
With two batches of data (x_1, …, x_n) and (y_1, …, y_m), there are two separate 5-letter summaries. We can test the null hypothesis that both are iid random samples from a common distribution F by comparing one of the x letters, x_q, to one of the y letters, y_r. For example, we might compare the upper hinge of x to the lower hinge of y in order to see whether x is significantly less than y. This leads to a definite question: how do we compute the chance

Pr(x_q < y_r)?

For fractional q and r, this is not possible without knowing F. However, because x_⌊q⌋ ≤ x_q ≤ x_⌈q⌉ and y_⌊r⌋ ≤ y_r ≤ y_⌈r⌉, a fortiori

Pr(x_⌈q⌉ < y_⌊r⌋) ≤ Pr(x_q < y_r) ≤ Pr(x_⌊q⌋ < y_⌈r⌉).

We can thereby obtain universal (that is, F-independent) bounds on the desired probability by computing the outer probabilities, which compare individual order statistics. The general question in front of us is:
What is the chance that the q-th smallest of n values will be less than the r-th smallest of m values, all drawn iid from a common distribution?
Even this does not have a universal answer unless we rule out the possibility that probability is too heavily concentrated on individual values: in other words, ties must be impossible, which means F must be a continuous distribution. Although this is an assumption, it is a weak, nonparametric one.
The distribution F plays no role in the calculation, because upon re-expressing all values by means of the probability transform F, we obtain new batches

F(x) = (F(x_1), F(x_2), …, F(x_n))

and

F(y) = (F(y_1), F(y_2), …, F(y_m)).
Moreover, this re-expression is monotonic and increasing: it preserves order, and in so doing preserves the event x_q < y_r. Because F is continuous, these new batches are drawn from a Uniform[0, 1] distribution. Under this distribution, and dropping the now-superfluous "F" from the notation, we easily find that x_q has a Beta(q, n + 1 − q) = Beta(q, q̄) distribution:

Pr(x_q ≤ x) = [n! / ((q − 1)! (n − q)!)] ∫_0^x t^(q−1) (1 − t)^(n−q) dt.
Similarly, the distribution of y_r is Beta(r, m + 1 − r). By performing the double integration over the region x_q < y_r, we can obtain the desired probability:

Pr(x_q < y_r) = C ∫_0^1 [ ∫_0^y x^(q−1) (1 − x)^(n−q) dx ] y^(r−1) (1 − y)^(m−r) dy,

where C = n! m! / ((q − 1)! (n − q)! (r − 1)! (m − r)!).
Because all the values of q, r, n, and m are integral, all the Γ values appearing in the resulting closed form are really just factorials: Γ(k) = (k − 1)! for integral k. The closed form involves the little-known regularized hypergeometric function 3F̃2; in this case it can be computed as a rather simple finite alternating sum, normalized by some factorials.
This has reduced the calculation of the probability to nothing more complicated than addition, subtraction, multiplication, and division. By exploiting the symmetry

Pr(x_q < y_r) = 1 − Pr(y_r < x_q),

we can base the calculation on whichever of the two sums is shorter, if we wish. This will rarely be necessary, though, because 5-letter summaries tend to be used only for small batches, rarely exceeding a few hundred values.
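The whole computation can be carried out in exact rational arithmetic. Here is a sketch in Python (the names prob_less and beta are my own, and this uses the equivalent finite sum of factorial ratios rather than the 3F̃2 form): it integrates Pr(x_q < y) against the Beta(r, m + 1 − r) density of y_r, expanding Pr(x_q < y) by the binomial theorem so that each term is a Beta integral.

```python
from fractions import Fraction
from math import comb, factorial

def beta(a, b):
    # Beta function at positive integer arguments, as an exact fraction
    return Fraction(factorial(a - 1) * factorial(b - 1), factorial(a + b - 1))

def prob_less(q, n, r, m):
    """Exact Pr(x_q < y_r) for the q-th smallest of n iid values versus the
    r-th smallest of m iid values from a common continuous distribution."""
    total = Fraction(0)
    for k in range(q, n + 1):
        # Pr(x_q < y) = sum_k C(n,k) y^k (1-y)^(n-k); each term integrates to a Beta value
        total += comb(n, k) * beta(k + r, n - k + m - r + 1)
    return total / beta(r, m - r + 1)
```

As sanity checks, prob_less(1, n, 1, m) must equal n/(n+m) (the chance the overall minimum comes from the x batch), and for n = 8, m = 12 the routine reproduces the entries tabulated below, e.g. about 0.0491 for (q, r) = (3, 1).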
Suppose the two batches have sizes n = 8 and m = 12. The relevant order statistics for x and y are 1, 3, 5, 7, 8 and 1, 3, 6, 9, 12, respectively. Here is a table of the chance that x_q < y_r, with q indexing the rows and r indexing the columns:
q\r 1 3 6 9 12
1 0.4 0.807 0.9762 0.9987 1.
3 0.0491 0.2962 0.7404 0.9601 0.9993
5 0.0036 0.0521 0.325 0.7492 0.9856
7 0.0001 0.0032 0.0542 0.3065 0.8526
8 0. 0.0004 0.0102 0.1022 0.6
A simulation of 10,000 iid sample pairs from a standard Normal distribution gave results close to these.
Good candidates for a one-sided test of size near α = 5% appear at (q, r) = (3, 1), with a chance of 0.0491, and at (q, r) = (7, 6), with a chance of 0.0542. Which one to use depends on your thoughts about the alternative hypothesis. For instance, the (3, 1) test compares the lower hinge of x to the smallest value of y and finds a significant difference when that lower hinge is the smaller one. This test is sensitive to an extreme value of y; if there is some concern about outlying data, this might be a risky test to choose. On the other hand, the (7, 6) test compares the upper hinge of x to the median of y. This one is very robust to outlying values in the y batch and moderately robust to outliers in x. However, it compares middle values of x to middle values of y. Although this is probably a good comparison to make, it will not detect differences in the distributions that occur only in either tail.
Being able to compute these critical values analytically helps in selecting a test. Once one (or several) tests are identified, their power to detect changes is probably best evaluated through simulation. The power will depend heavily on how the distributions differ. To get a sense of whether these tests have any power at all, I conducted the (3, 1) test with the y batch drawn iid from a Normal(1, 1) distribution: that is, with its median shifted up by one standard deviation. In a simulation the test was significant often enough to show appreciable power for datasets this small.
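A quick Monte Carlo sketch of such a test (all names are my own): reject when the 3rd smallest of the x batch falls below the minimum of the y batch, with n = 8 and m = 12 as above. Under the null the rejection rate should sit near the exact value 0.0491, and under a one-standard-deviation shift of y it should be appreciably higher.

```python
import random

def reject_31(x, y):
    # the (3,1) test: reject when the 3rd smallest x value lies below the smallest y value
    return sorted(x)[2] < min(y)

def rejection_rate(shift, reps=20000, seed=1):
    # Monte Carlo estimate of Pr(reject) when x is N(0,1) and y is N(shift,1)
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(8)]
        y = [rng.gauss(shift, 1) for _ in range(12)]
        hits += reject_31(x, y)
    return hits / reps

size = rejection_rate(0.0)   # null rejection rate; the exact value is 0.0491...
power = rejection_rate(1.0)  # y's median shifted up by one standard deviation
```

The particular rejection rate under the shifted alternative is only an estimate from this sketch, not a figure from the answer above.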
Much more can be said, but all of it is routine material about conducting two-sided tests, assessing effect sizes, and so on. The principal point has been demonstrated: given the 5-letter summaries (and sizes) of two batches of data, it is possible to construct reasonably powerful nonparametric tests to detect differences in their underlying populations, and in many cases we may even have several tests to choose from. The theory developed here has a broader application to comparing two populations by means of appropriately selected order statistics from their samples (not just those approximating the letter summaries).
These results have other useful applications. For instance, a boxplot is a graphical depiction of a 5-letter summary. Thus, along with knowledge of the sample size shown by a boxplot, we have available a number of simple tests (based on comparing parts of one box-and-whisker plot to another) to assess the significance of visually apparent differences in those plots.
I'm pretty confident there isn't going to be one already in the literature, but if you seek a nonparametric test, it would have to be under the assumption of continuity of the underlying variable. You could look at something like an ECDF-type statistic: say, some equivalent of a Kolmogorov-Smirnov-type statistic, or something akin to an Anderson-Darling statistic (though of course the distribution of the statistic will be very different in this case).
The distribution for small samples will depend on the precise definitions of the quantiles used in the five number summary.
Consider, for example, the default quartiles and extreme values in R (n=10):
> summary(x)[-4]
Min. 1st Qu. Median 3rd Qu. Max.
-2.33500 -0.26450 0.07787 0.33740 0.94770
compared to those generated by its command for the five number summary:
> fivenum(x)
[1] -2.33458172 -0.34739104 0.07786866 0.38008143 0.94774213
Note that the upper and lower quartiles differ from the corresponding hinges produced by the fivenum command.
By contrast, at n=9 the two results are identical, since all five values then occur at observations.
(R comes with nine different definitions for quantiles.)
The case where all three quartiles occur at observations (when n=4k+1, I believe, and possibly in more cases under some definitions) might actually be doable algebraically and should be nonparametric. But the general case (across many definitions) may not be so doable, and may not be nonparametric: consider the case where you're averaging observations to produce quantiles in at least one of the samples. In that case the probabilities of different arrangements of the sample quantiles may no longer be unaffected by the distribution of the data.
Once a fixed definition is chosen, simulation would seem to be the way to proceed.
Because it will be nonparametric at a subset of the possible values of n, the fact that it's no longer distribution-free for other values may not be such a big concern; one might say it is nearly distribution-free at intermediate sample sizes, at least if the n's are not too small.
Let's look at some cases that should be distribution free, and consider some small sample sizes. Say a KS-type statistic applied directly to the five number summary itself, for sample sizes where the five number summary values will be individual order statistics.
Note that this doesn't really 'emulate' the K-S test exactly, since the jumps in the tail are too large compared to the KS, for example. On the other hand, it's not easy to say what the jumps at the summary values should be for all the values between them. Different sets of weights/jumps will have different type-I error characteristics and different power characteristics, and I am not sure what is best to choose (choosing values slightly different from equal could help get a finer set of significance levels, though). My purpose, then, is simply to show that the general approach may be feasible, not to recommend any specific procedure. An arbitrary set of weights on each value in the summary will still give a nonparametric test, as long as they're not chosen with reference to the data.
Anyway, here goes:
Finding the null distribution/critical values via simulation
At n=5 and 5 in the two samples, we needn't do anything special - that's a straight KS test.
At n=9 and 9, we can do uniform simulation:
ks9.9 <- replicate(10000,ks.test(fivenum(runif(9)),fivenum(runif(9)))$statistic)
plot(table(ks9.9)/10000,type="h"); abline(h=0,col=8)
# Here's the empirical cdf:
cumsum(table(ks9.9)/10000)
0.2 0.4 0.6 0.8
0.3730 0.9092 0.9966 1.0000
so at n=9, you can get roughly α = 0.091 (by rejecting when D ≥ 0.6) and roughly α = 0.0034 (by rejecting when D ≥ 0.8). (We shouldn't expect nice alpha steps. When the n's are moderately large, we should expect not to have anything but very big or very tiny choices for α.)
Nearby sample-size combinations can land closer to the conventional levels: one rejection rule gives a nice near-5% significance level, and another a nice near-2.5% level.
At sample sizes near these, this approach should be feasible, but if both n's are much above 21, this won't work well at all.
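The same null simulation can be replicated outside R; here is a Python sketch (all function names mine), using the fact that for n = 9 the five-number summary consists of order statistics 1, 3, 5, 7, 9:

```python
import random

def fivenum9(sample):
    # for n = 9 the five-number summary is just order statistics 1, 3, 5, 7, 9
    s = sorted(sample)
    return [s[0], s[2], s[4], s[6], s[8]]

def ks_distance(a, b):
    # max |ECDF_a - ECDF_b| over all jump points, treating each summary as a 5-point sample
    pts = sorted(set(a) | set(b))
    return max(abs(sum(v <= t for v in a) / len(a) - sum(v <= t for v in b) / len(b))
               for t in pts)

def null_tail(threshold, reps=10000, seed=7):
    # estimate Pr(D >= threshold) under the null of two iid Uniform(0,1) samples of size 9
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        a = fivenum9([rng.random() for _ in range(9)])
        b = fivenum9([rng.random() for _ in range(9)])
        hits += ks_distance(a, b) >= threshold
    return hits / reps
```

The tail probabilities it returns should be close to those implied by the empirical cdf shown above (about 0.091 at D ≥ 0.6 and about 0.003 at D ≥ 0.8).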
--
A very fast 'by inspection' test
We see a rejection rule of the form D ≥ 0.6 coming up often in the cases we looked at. What sample arrangements lead to that? I think the following two cases:
(i) When the whole of one sample is on one side of the other group's median.
(ii) When the boxes (the range covered by the quartiles) don't overlap.
So there's a nice super-simple nonparametric rejection rule for you -- but it usually won't be at a 'nice' significance level unless the sample sizes aren't too far from 9-13.
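Case (ii) is easy to verify mechanically for the five-value KS statistic used above: whenever the hinge-to-hinge boxes are disjoint, at the lower box's upper hinge one summary's ECDF is at least 0.8 while the other's is at most 0.2, forcing D ≥ 0.6. A Python sketch of that check (all names mine):

```python
import random

def ks_distance(a, b):
    # max |ECDF_a - ECDF_b| over all jump points of the two 5-value summaries
    pts = sorted(set(a) | set(b))
    return max(abs(sum(v <= t for v in a) / len(a) - sum(v <= t for v in b) / len(b))
               for t in pts)

def boxes_disjoint(s1, s2):
    # s = [min, lower hinge, median, upper hinge, max]; boxes run hinge to hinge
    return s1[3] < s2[1] or s2[3] < s1[1]

def check_disjoint_implies_d06(trials=2000, seed=3):
    # random 5-value summaries; every disjoint-box pair must have D >= 0.6
    rng = random.Random(seed)
    seen = 0
    for _ in range(trials):
        s1 = sorted(rng.random() for _ in range(5))
        s2 = sorted(rng.random() for _ in range(5))
        if boxes_disjoint(s1, s2):
            seen += 1
            assert ks_distance(s1, s2) >= 0.6
    return seen
```

Case (i) works the same way: if all of one summary lies below the other's median, the ECDF gap at the smaller summary's maximum is at least 1.0 − 0.4 = 0.6.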
Getting a finer set of possible levels
Anyway, producing tables for similar cases should be relatively straightforward. At medium to large n, this test will only have very small (or very large) achievable levels, and won't be of practical use except for cases where the difference is obvious.
Interestingly, one approach to increasing the achievable levels would be to set the jumps in the 'fivenum' cdf according to a Golomb ruler. If the cdf values were chosen in proportion to the marks of a Golomb ruler, for example, then the difference between any pair of cdf values would be different from that of any other pair. It might be worth seeing whether that has much effect on power (my guess: probably not a lot).
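A short check of the distinct-differences property (the particular ruler below is my own choice for illustration; any Golomb ruler rescaled to [0, 1] works the same way):

```python
from fractions import Fraction
from itertools import combinations

# A perfect Golomb ruler of order 6: every pair of marks is a distinct distance apart.
marks = [0, 1, 4, 10, 12, 17]
# Rescale so the cdf runs from the baseline 0 up to 1.
cdf_levels = [Fraction(m, 17) for m in marks]

diffs = [b - a for a, b in combinations(cdf_levels, 2)]
assert len(diffs) == len(set(diffs))  # all pairwise cdf differences are distinct
```

With all pairwise differences distinct, every arrangement of the two summary cdfs produces a different attainable value of D, giving a much finer grid of possible significance levels.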
Compared to these K-S like tests, I'd expect something more like an Anderson-Darling to be more powerful, but the question is how to weight for this five-number summary case. I imagine that can be tackled, but I'm not sure the extent to which it's worth it.
Power
Let's see how it goes on picking up a difference in location. For normal data, with the effect, del, measured as the number of standard deviations by which the second sample is shifted up, the simulated power curve looks quite plausible. So it seems to work okay, at least at these small sample sizes.
What about robust, rather than nonparametric?
If nonparametric tests aren't so crucial, and robust tests are okay instead, we could look at some more direct comparison of the three quartile values in the summary, such as an interval for the median based on the IQR and the sample size (derived from some nominal distribution around which robustness is desired, such as the normal; this is the reasoning behind notched box plots, for example). This should tend to work much better at large sample sizes than the nonparametric test, which will suffer from a lack of appropriate significance levels.
I don't see how there could be such a test, at least without some assumptions.
You can have two different distributions that have the same 5 number summary:
Here is a trivial example, where I change only 2 numbers, but clearly more numbers could be changed:
set.seed(123)
#Create data
x <- rnorm(1000)
#Modify it without changing 5 number summary
x2 <- sort(x)
x2[100] <- x2[100] - 1
x2[900] <- x2[900] + 1
fivenum(x)
fivenum(x2)
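For readers without R, the same demonstration can be sketched in Python (five_letter is my own re-implementation of fivenum; the perturbed positions are interior order statistics, well away from the depths the summary uses):

```python
import random
from math import floor

def five_letter(x):
    # Tukey's five-number summary (mirrors R's fivenum convention)
    s = sorted(x)
    n = len(s)
    def at(d):
        i = floor(d)
        return s[i - 1] if d == i else (s[i - 1] + s[i]) / 2
    med = (n + 1) / 2
    hinge = (floor(med) + 1) / 2
    return [s[0], at(hinge), at(med), at(n + 1 - hinge), s[-1]]

rng = random.Random(123)
x = sorted(rng.gauss(0, 1) for _ in range(1000))

x2 = x[:]
x2[99] -= 0.5    # move the 100th smallest value down ...
x2[899] += 0.5   # ... and the 900th up; neither crosses a depth used by the summary

assert five_letter(x) == five_letter(x2)
```

For n = 1000 the summary depends only on order statistics near depths 1, 250.5, 500.5, 750.5, and 1000, so shifting interior values that stay between those depths leaves all five numbers unchanged.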