US 7146289 B2

Abstract

The invention concerns a method to evaluate whether a statistical time delay (TD) between a first event and a second event of a device under test is better than a test limit (TL). The method includes the steps: performing a minimum number N of tests and evaluating the time delay (TD) from each test; modeling a first probability distribution (P1) of the evaluated time delays (TD); obtaining a second probability distribution (P2) of the evaluated time delays (TD); performing a statistical transformation in order to obtain a third probability distribution (P3) of the evaluated time delays (TD); and deciding to pass the device under test, if a certain percentage of the area of the third probability distribution (P3) is on a good side (GS) of the test limit (TL2).

Claims (11)

1. A method to evaluate whether a statistical time delay (TD) between a first event (CS) and a second event (RM) of a device under test (DUT) is better than a test limit (TL), comprising the steps:
performing a minimum number N of tests and evaluating the individual time delay (TD) from each test,
modeling a first probability distribution (P1) of the evaluated time delays (TD) as a function of the elapsed time from the first occurrence of the first event (CS) to the first occurrence of the second event (RM),
obtaining a second probability distribution (P2) of the evaluated time delays (TD) as a function of the elapsed time from the first occurrence of the first event (CS) to the N-th occurrence of the second event (RM) by performing the N−1-th self convolution of the first probability distribution (P1),
performing a statistical transformation (ST) in order to obtain a third probability distribution (P3) of the evaluated time delays (TD) as a function of the N-th occurrence of the second event (RM),
deciding to pass the device under test (DUT), if a certain percentage of the area of the third probability distribution (P3) is on a good side (GS) of the test limit (TL2), or
deciding to fail the device under test (DUT), if a certain percentage of the area of the third probability distribution (P3) is on a bad side (BS) of the test limit (TL2), otherwise
repeating the steps of the method with an incremented number N of tests.
2. Method according to … 1).
3. Method according to …
4. Method according to …; Cell 2) of the cellular mobile communication system as a response to the cell quality swap (CS).
5. Method according to … 1).
6. Method according to …
7. Method according to …
8. Method according to …
9. Method according to …
10. Method according to …, whereby F_n = (1 − a) · F_{n−1} + a · M_n, whereby F_n is the updated filtered measurement result, F_{n−1} is the old filtered measurement result, M_n is the latest received measurement result and a = (1/2)^(k/2), where k is the free parameter.
11. Method according to …
Description

This application is the National Stage of International Application No. PCT/EP03/07421, filed Jul. 9, 2003.

The invention concerns a method to evaluate whether a time delay, as an implementation-dependent parameter, is better than a statistically defined soft test limit. The invention is especially applied to a class of measurements measuring the delay time from a cell quality swap, generated by a system simulator, to the registration message, generated by the user equipment, for example a mobile station.

In a mobile communication system the mobile station (user equipment) should make a cell reselection or handover to another base station of another communication cell if the quality of communication with the current base station of the current cell (cell quality) decreases and the communication quality with another base station of another cell rises above the quality of the current base station. Such a soft handover, handled by a mobile station in a communication system with several base stations for a third-generation mobile system using Code Division Multiple Access (CDMA), is known from U.S. Pat. No. 5,267,261, for example.

The communication standard defines a maximum delay time (test limit) from the cell quality swap until the time when the user equipment issues a registration message in order to register with the other base station. However, this test limit is not defined as a hard limit, i.e. the user equipment does not fail the test requirement merely because the delay time exceeds the time limit a single time; it is defined as a soft limit, i.e. the user equipment shall fulfil the test requirement for a certain percentage (for example 90%) of the cases in repeated measurements. The pass/fail decision for the user equipment against the soft limit shall be made with a certain quality, for example a 5% wrong decision risk. From the present state of the art it is not known how to deal with such statistically defined soft limits for repeated tests.
It is the object of the present invention to provide an effective method to measure a parameter, especially a time delay, against a statistically defined soft limit.

A method to evaluate whether a statistical time delay (TD) between a first event (CS) and a second event (RM) of a device under test (DUT) is better than a test limit (TL) comprises the following steps: (a) performing a minimum number N of tests and evaluating the individual time delay (TD) from each test; (b) modeling a first probability distribution (P1) of the evaluated time delays (TD); (c) obtaining a second probability distribution (P2) of the evaluated time delays (TD); (d) performing a statistical transformation in order to obtain a third probability distribution (P3) of the evaluated time delays (TD); and (e) deciding to pass or fail the device under test depending on which percentage of the area of the third probability distribution (P3) is on the good side (GS) or the bad side (BS) of the test limit (TL2).

The invention is further described with respect to the drawings.

The result of the activities of the User Equipment UE shall be measured by the System Simulator SS. The test measures the delay time DT from a cell quality swap CS, generated by the System Simulator SS, to the registration message RM, generated by the User Equipment UE. There is a test limit TL for the delay time DT; the delay time DT shall be < 8 s, for example. However, this is not a hard limit. The limit shall be fulfilled in 90% of the cases in repeated measurements. The pass/fail decision for the User Equipment UE against this soft limit shall be made with a certain quality, for example a 5% wrong decision risk. This task is totally new for testing mobile systems.

In the following, a summary of the inventive measurement strategy is given. The task is of statistical nature. Statistical tasks up to now (for example BER/BLER tests as described in the earlier application PCT/EP02/02252) could be based on a well accepted distribution function, for example of Chi-Square nature, where just the parameter of the distribution is implementation dependent. This is not possible here because:
- 1. The distribution function must be developed in advance or during the test.
- 2. It can be foreseen that the distribution function is not a classical one (Binomial, Gauss, Poisson, …).
- 3. It can be foreseen that the distribution function is implementation dependent, i.e. not only the parameter but even the nature of the distribution is implementation dependent.
To solve the delay statistic task, three nearly independent sub-tasks are needed:
- 1) With a meaningful implementation assumption for the User Equipment UE, with an error model EM for the User Equipment UE, and by considering the specific test procedure and test signals, a model for the activities inside the User Equipment UE is derived. Those activities measure the quality of several cells, process this information in order to detect the cell quality swap CS and finally generate the registration message RM. The model is described by parameters, some of which are free for variation. From the model a time-dependent probability that the DUT's decision "register" occurs is derived.
- 2) The delay time TD is repeatedly measured to establish a probability distribution of the delay times. The modelled distribution is fitted towards the measured distribution, using the above-mentioned free parameters, and for the future the modelled and fitted distribution is used. This is a preliminary distribution and can tell which percentage is below or above the limit TL of, for example, 8 s. However, if pass or fail is decided, the quality of that decision in terms of wrong decision risk is invisible. With mathematical methods the latest modelled and fitted distribution is transformed into another distribution, which directly shows the quality of the decision: the result of this statistical transformation ST is the probability to find 1, 2, …, N decisions in a given time, where the given time is the accumulation of the measured and fitted delay times DT. This distribution allows a decision and in addition gives a quality of the decision.
If 95% of the distribution is on the good side GS of the limit TL2, the device under test is passed.
- 3) The third task is to define a measurement strategy, using the result of step 2), to derive an early and reliable pass/fail decision. This is fourfold:
- a) Grouping the measured delays into meaningful classes, such that a delay distribution becomes visible.
- b) Finding rules to fit the model to the measurement.
- c) Interpreting the distribution derived in step 2) to derive pass/fail decisions, or otherwise continue the test.
- d) Embedding steps a) to c) into a recursive process, with the goal of finding the decision after the minimum possible time.
It is the target of the invention to gain a pass/fail decision for the User Equipment UE based on the exceed-8 s ratio of 10% in repeated delay tests. This pass/fail decision shall be made with a certain quality, e.g. a 5% wrong decision risk, and shall be achieved after the minimum possible number of repetitions of delay tests.

The pass/fail decision with e.g. 5% wrong decision risk can be made using the distribution of time delays TD. The distribution of time delays TD is an approximation and is generated by exploiting a priori information about the process causing the time delay TD (as much as possible) and by taking measurement samples of the time delay TD (as few as possible). The approximation process needs a compromise as follows: modelling the process causing the time delay TD as detailed as possible needs many parameters, while fitting the model towards the measurement needs a number of measurements a magnitude higher than the number of parameters describing the model. So, in order to finalise the test after a few repetitions, it is necessary to describe the model with a few parameters and give away some fidelity of the distribution. At the end there are two components of uncertainty: uncertainty that the used distribution is the correct one, and wrong decision risk based on the used distribution.

An implementation of a model for the user equipment's activity is part of the process causing the time delay TD. It shall be described by a realistic structure and an appropriate number of parameters, high enough to model the process near to reality, and low enough to fit the model to a low number of measured samples. The preferred implementation is related to a filter which is characterised by an IIR (Infinite Impulse Response) structure with one free parameter k. The User Equipment UE shall apply filtering of the measurements for that measurement quantity according to the following formula:

F_n = (1 − a) · F_{n−1} + a · M_n
The variables in the formula are defined as follows:
- F_n is the updated filtered measurement result,
- F_{n−1} is the old filtered measurement result,
- M_n is the latest received measurement result from physical layer measurements,
- a = (1/2)^(k/2), where k is a parameter.
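As an illustration, the filter recursion can be sketched in Python. This is a minimal sketch: the function name, the list-based interface, and the choice to initialise the filter with the first raw result are assumptions, not part of the patent.

```python
def filter_measurements(measurements, k):
    """Apply the IIR filter F_n = (1 - a) * F_{n-1} + a * M_n with a = (1/2)^(k/2)."""
    a = 0.5 ** (k / 2)          # filter coefficient derived from the free parameter k
    filtered = []
    f = None
    for m in measurements:
        # Assumption: the first measurement initialises the filter unchanged;
        # every later result is blended with the previous filtered value.
        f = m if f is None else (1 - a) * f + a * m
        filtered.append(f)
    return filtered
```

With k = 0 the coefficient a equals 1 and the filter simply passes the raw measurements through; larger k gives the filter a longer memory.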
There are additional contributors to the process causing the delay. Several error sources randomise the delay: level errors (corrupting the comparison "3 dB better or not") and delay impacts.

One delay impact is the scheduling delay. The cell quality swap CS by the System Simulator SS and the first physical measurement of the User Equipment UE are uncorrelated. This results in a random delay, modelled as an equally distributed random delay. The corresponding differential distribution function is rectangular with the property Delay = 1/Probability = S. This is the first free parameter S.

A second impact is the processing delay in the User Equipment UE. This results in a deterministic and/or random delay. The random part is modelled with S. The deterministic part is modelled with a constant processing delay PD. This is the second free parameter PD.

Level errors are caused by additional random noise. This is mainly the external AWGN (Additive White Gaussian Noise) channel, but internal receiver noise as well. It is modelled with a Gaussian distribution and its standard deviation σ. This is the third free parameter σ.

Further level errors are caused by linear distortion. The measurement is subject to a memory in the User Equipment UE. The memory smoothes the random errors, however it distorts the measurement of the present physical quantity. This is harmless in a static situation but harmful as long as results from physical measurements before the cell quality swap CS are in the memory. The effect of this contribution is captured by passing the signals through the assumed implementation. This is the fourth free parameter k.

Other level errors are caused by a linearity error. This is modelled with a deviation parameter. This is the fifth free parameter L. Wrongly measured signals, test signal levels apart from the defined ones, or a shifted decision limit have related effects.
Offset and non-linear distortion could be considered, but for simplicity and to save free parameters the preferred implementation does not consider them. In contrast to the delay impact, the level error impact causes Dirac-shaped differential probabilities, time-spaced by 1 DRX cycle, according to the implementation assumption.

The result of this consideration is a time-dependent probability for a decision, e.g. "3 dB better". It is obtained by convoluting the differential probabilities from the scheduling delay with the Diracs from the level error impact and shifting all by the processing delay. This probability is low shortly after the cell quality swap CS, then it increases. After the memory length of the filter it reaches a higher, constant probability.

The preferred implementation is based on the above five free parameters S, PD, σ, k and L. Consequently the minimum number of delay measurements must be approximately one magnitude higher. A minimum number of delay measurements of 25 is proposed. The time-dependent probabilities are denoted as follows:
- P1(T): probability in FIG. 6, T = horizontal axis (time)
- Po(t): probability in FIG. 5, t = horizontal axis (time)
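The construction of the time-dependent decision probability described above (convolving the rectangular scheduling-delay distribution with the Dirac-shaped per-DRX-cycle probabilities, then shifting by the processing delay) can be sketched as follows. All parameter values and the ramp shape of the Dirac weights are illustrative assumptions, not values from the patent.

```python
import numpy as np

S = 4                                   # width of the rectangular scheduling delay (cycles), assumed
PD = 3                                  # deterministic processing delay (cycles), assumed

# Equally distributed scheduling delay: rectangular differential distribution, area 1.
rect = np.full(S, 1.0 / S)

# Dirac-shaped per-cycle decision probabilities: low shortly after the cell
# quality swap, rising to a constant level after the filter memory (assumed shape).
diracs = np.minimum(0.05 * np.arange(12), 0.3)

# Convolute the scheduling-delay distribution with the Diracs, then shift
# everything by the processing delay PD.
p_decide = np.concatenate([np.zeros(PD), np.convolve(rect, diracs)])
```

The resulting array `p_decide` starts at zero during the processing delay, ramps up, and saturates, matching the qualitative behaviour described in the text.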
The resolution of time is e.g. 1 DRX cycle.

As mentioned above it is proposed to fit the model towards the measurement and reuse the modelled and fitted distribution P1. The constant parameter of P1 is one decision; the variable is the time to that decision. This distribution can be confirmed by an increasing number of delay measurements and converges to a final shape with an infinite number of measurements. Entering the test limit (e.g. 8 s) into the distribution, the exceed-8 s ratio can be seen. For a finite number of measurements it is a preliminary ratio. However, if a decision is based on this preliminary exceed-8 s ratio, a wrong decision risk (confidence level) cannot be stated.

We proceed to a suitable distribution by another two-step transformation:
1. The probability distribution P1 is self-convolved to obtain the probability distribution P2. The constant parameter of the probability distribution P2 is N decisions; the variable is the elapsed time to the N-th decision.
2. Swapping constant parameter and variable, we proceed to our final distribution P3: given a certain time duration, what is the probability to get 1, 2, …, N decisions? This reflects the measurement problem: we count the number of decisions or registration messages RM and we accumulate the time to that decision or message RM. We ask for the probability distribution P3 of the number of decisions. Thus the constant parameter of the distribution is time, i.e. the number of DRX cycles, and the variable input of the distribution is 1, 2, …, N decisions. This distribution P3 allows a decision and in addition gives a quality of the decision.

This strategy gives a decision based on the mean value of distribution P1:
- a) evaluate the mean value and the 90/10% value of distribution P1,
- b) shift P1 such that the mean value of the shifted distribution P1 hits the 90/10% value of the initial P1 (P1 → P1′),
- c) perform the statistical transformation P1′ → P3′ and decide against the test limit TL2.
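Numerically, the two-step transformation P1 → P2 → P3 can be sketched on a discrete time axis (one bin per DRX cycle). The helper names are assumptions; the swap of constant and variable uses the renewal identity that the probability of at least n registrations within time T equals the cumulative n-fold self convolution of P1 at T.

```python
import numpy as np

def self_convolve(p1, n):
    """P2 for n registrations: the (n-1)-th self convolution of P1."""
    out = np.asarray(p1, dtype=float)
    for _ in range(n - 1):
        out = np.convolve(out, p1)
    return out

def p3_at_time(p1, T, n_max):
    """P3: probability of exactly n = 1..n_max registrations within T cycles."""
    # P(at least n registrations by time T) = CDF of the n-fold self convolution at T.
    tails = []
    for n in range(1, n_max + 2):
        cdf = np.cumsum(self_convolve(p1, n))
        tails.append(float(cdf[T]) if T < len(cdf) else float(cdf[-1]))
    tails = np.array(tails)
    return tails[:-1] - tails[1:]       # P(exactly n) = P(>= n) - P(>= n+1)
```

As a sanity check: for a delay distribution concentrated at exactly one cycle, exactly T registrations occur within T cycles, so P3 is a single spike at n = T.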
In the above, TL is the test limit, N is the number of tests, the accumulated test time is the accumulated test time up to the current state, and TD is the time delay.

The following analogy might help to understand the origin of the statistical transformation. It is known that the binomial distribution can be derived from the following elementary probability distribution:
The probability for the occurrence of an event is p (p = 1/6 to toss a "1" with a fair die). The complementary probability is q = 1 − p (q = 5/6 not to toss a "1" with a fair die). P(ns, p) = p · q^(ns−1) describes the differential distribution of the number of tosses to find the first event (number of tosses ns to the first "1"); ns is the variable, p is constant. To find statistically the number of tosses until the ne-th event, the ne−1-th self convolution of this distribution is formed. Swapping parameter (ne) and variable (ns) and introducing some offset generates exactly a set of binomial distributions.
With the same method the exponential distribution can be transformed exactly into a set of Poisson distributions. This analogy can be exploited to transform the non-classical delay distribution P1 in the same way.
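The die analogy can be checked numerically: the distribution of the toss index of the ne-th "1" is the ne−1-th self convolution of the elementary (geometric) distribution, i.e. a Pascal (negative binomial) distribution, and swapping parameter and variable recovers a binomial tail. Function names here are illustrative.

```python
from math import comb

p = 1 / 6                     # probability to toss a "1" with a fair die
q = 1 - p                     # complementary probability

def pascal(ns, ne):
    """Probability that the ne-th event occurs exactly on toss ns (Pascal)."""
    return comb(ns - 1, ne - 1) * p**ne * q**(ns - ne)

def binomial(ne, ns):
    """Probability of exactly ne events in ns tosses (binomial)."""
    return comb(ns, ne) * p**ne * q**(ns - ne)

# Swapping parameter and variable: "ne-th event no later than toss ns" and
# "at least ne events in ns tosses" describe the same outcome.
ns, ne = 10, 2
lhs = sum(pascal(k, ne) for k in range(ne, ns + 1))
rhs = sum(binomial(k, ns) for k in range(ne, ns + 1))
assert abs(lhs - rhs) < 1e-12
```

The assertion mirrors the swap described in the text: the tail of the Pascal distribution in ns equals the tail of the binomial distribution in ne.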
However, in contrast to formula (4), the delay distribution here is not a classical one.

The measurement strategy is as follows:
- 1) Performing a minimum number of delay tests, e.g. 25.
- 2) Grouping the time delays TD obtained from the delay tests: all the individual delay tests are grouped into classes CL1 … CL8 as indicated in FIG. 10. The result is normalised such that the area is 1.
- 3) The error model is fitted towards the measurement using the free parameters. The best-fit criterion is the minimum RMS (Root Mean Square) difference.
- 4) Shifting the last modelled distribution P1 (mean value → e.g. 90/10% value).
- 5) The latest model is transformed according to the statistical transformation, i.e. the probability distribution P2 shown in FIG. 7 is generated from the probability distribution P1 shown in FIG. 6 by several self convolutions. The probability distribution P3 shown in FIG. 8 is generated from the probability distribution P2 shown in FIG. 7 by swapping constant and variable.
- 6) If more than a certain percentage, e.g. 95%, of the area of the probability distribution P3 shown in FIG. 8 is on the good side GS of the test limit TL2, with or without a relaxing correction to the test limit TL2, the test is stopped and passed.
- 7) If more than a certain percentage, e.g. 95%, of the area of the probability distribution P3 shown in FIG. 8 is on the bad side BS of the test limit TL2, without correction, the test is stopped and failed.
- 8) Otherwise the next delay test is performed and the procedure is continued with step 2).
This measurement strategy contains the fit process. Fitting the model towards the measurement is a large computational effort following each delay measurement. However, as indicated, this introduces a priori knowledge into the test procedure, which need not be gained by a large number of measurement repetitions. Consequently this saves test time. As already indicated, the distribution is implementation dependent. Fitting the model towards the measurement accommodates, in a limited scope, individual implementations within a standardised test procedure. However, the procedure also works successfully just by using the measured distribution. It then reads as follows:
- 1) Performing a minimum number of delay tests, e.g. 25.
- 2) Grouping the time delays TD obtained from the delay tests: all the individual delay tests are grouped into classes CL1 … CL8 as indicated in FIG. 10. The result is normalised such that the area is 1.
- 3) Shifting the latest measured distribution (mean value → e.g. 90/10% value).
- 4) The latest measured distribution is transformed according to the statistical transformation, i.e. the probability distribution P2 shown in FIG. 7 is generated from the probability distribution P1 shown in FIG. 6 by several self convolutions in the example. The probability distribution P3 shown in FIG. 8 is generated from the probability distribution P2 shown in FIG. 7 by swapping constant and variable.
- 5) If more than a certain percentage, e.g. 95%, of the area of the probability distribution P3 shown in FIG. 8 is on the good side GS of the test limit TL2, with or without a relaxing correction to the test limit TL2, the test is stopped and passed.
- 6) If more than a certain percentage, e.g. 95%, of the area of the probability distribution P3 shown in FIG. 8 is on the bad side BS of the test limit TL2, without correction, the test is stopped and failed.
- 7) Otherwise the next delay test is performed and the procedure is continued with step 2).
Note that the optional relaxing correction accounts for the fact that, with an increasing number of measurements, the probability distribution P3 converges to its final shape.

The pass/fail decision for the UE based on the exceed-8 s ratio = 10% is made after the minimum possible number of repetitions of delay measurements. The decision quality is restricted by two components: the wrong decision probability based on the latest distribution, and the uncertainty about the fidelity of that distribution. A User Equipment UE near the test limit needs the longest test time; however, it is finite. A very good User Equipment UE is passed very early. A very bad User Equipment UE is failed very early.

While illustrative embodiments have been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.