# Types of Test Statistic (TS)

There are four main test statistics you can use in a hypothesis test. Which one you use depends on which statistical test you run.

Please refer to our earlier post to understand the terminology of Hypothesis Testing.

The calculated test statistic (depending on which comparison test is being used) is compared to the critical statistic of that same distribution (such as Z, t, F) to decide whether to “reject” or “fail to reject” the null hypothesis.
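As a minimal sketch of this decision rule (assuming a two-tailed test whose critical value has already been looked up for the chosen distribution and significance level; 1.96 below is the standard two-tailed z cutoff at α = 0.05):

```python
# Generic decision rule: compare the calculated statistic with the
# critical statistic of the same distribution (Z, t, F, ...).
def decide(test_statistic: float, critical_value: float) -> str:
    """Two-tailed comparison of a calculated statistic vs. a critical value."""
    if abs(test_statistic) > critical_value:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(2.5, 1.96))  # statistic beyond the critical value
print(decide(1.2, 1.96))  # statistic within the acceptance region
```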

## 1) Z-Score and Z-Test

### Z Score

A Z-score is a numerical measurement that describes a value’s relationship to the mean (average) of a group of values, measured in terms of standard deviations from the mean.

If a Z-score is 0, it indicates that the data point’s score is identical to the mean score.

The z-score is positive if the value lies above the mean, and negative if it lies below the mean.

The Z-score requires population parameters (mean μ and standard deviation σ): z = (x − μ) / σ.
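A quick sketch of the z-score calculation (the mean and standard deviation below are made-up numbers for illustration):

```python
# z-score: how many standard deviations a value lies from the population mean.
def z_score(x: float, mu: float, sigma: float) -> float:
    """Standard deviations between a value x and the population mean mu."""
    return (x - mu) / sigma

print(z_score(650, 600, 100))  # 0.5  -> half an SD above the mean
print(z_score(600, 600, 100))  # 0.0  -> identical to the mean
print(z_score(550, 600, 100))  # -0.5 -> below the mean
```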

### Z-Test

Z-tests are a statistical way of testing a hypothesis when either:

• We know the population variance, or
• We do not know the population variance but our sample size is large (n ≥ 30)

Z-test works on sample parameters (mean, standard deviation, standard error).

### One-Sample Z-Test

We perform the One-Sample Z test when we want to compare a sample mean with the population mean.

Here’s an Example to Understand a One-Sample Z Test

Let’s say we need to determine if girls on average score higher than 600 in the exam. We have the information that the standard deviation for girls’ scores is 100. So, we collect the data of 20 girls by using random samples and record their marks. Finally, we also set our ⍺ value (significance level) to be 0.05.

In this example:

• Mean Score for Girls is 641
• The size of the sample is 20
• The population mean is 600
• Standard Deviation for Population is 100

The calculated z-statistic is (641 − 600) / (100/√20) ≈ 1.83, giving a one-tailed P-value of about 0.033. Since the P-value is less than 0.05, we reject the null hypothesis and conclude based on our result that girls on average scored higher than 600.
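The z-test above can be reproduced with Python's standard library, using the example's numbers (sample mean 641, n = 20, population mean 600, population SD 100):

```python
from math import sqrt
from statistics import NormalDist

# One-sample z-test with the numbers from the girls'-scores example.
sample_mean, n = 641, 20
pop_mean, pop_sd = 600, 100

z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))  # test statistic
p = 1 - NormalDist().cdf(z)                        # one-tailed P-value

print(f"z = {z:.3f}, p = {p:.4f}")  # z ≈ 1.834, p ≈ 0.033 < 0.05 -> reject H0
```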

## 2) t-Score and T-Test

T-Score: The t-score formula lets you take an individual score and transform it into a standardized form, one that helps you compare scores.

The formula is t = (x̄ − μ) / (s / √n), where x̄ is the sample mean, μ is again the population mean, s is the sample SD, and n is the number of data points.

The t-score is the ratio of the difference between two groups to the variability within the groups. The greater the t-score, the greater the difference between the groups; the smaller the t-score, the more similar the groups are.

For example, a t-score of 2 indicates that the difference between the groups is twice as large as the variability within the groups.
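The "difference between groups versus variability within groups" idea can be sketched as a two-sample t-score (Welch form); the two groups below are made-up data:

```python
from math import sqrt
from statistics import mean, stdev

# Two made-up groups of scores for illustration.
group_a = [88, 92, 94, 91, 89, 90]
group_b = [84, 85, 88, 82, 86, 85]

m1, m2 = mean(group_a), mean(group_b)
# Standard error combines the within-group spread of both groups (Welch form).
se = sqrt(stdev(group_a) ** 2 / len(group_a) + stdev(group_b) ** 2 / len(group_b))

t = (m1 - m2) / se  # signal (mean difference) over noise (within-group spread)
print(f"t = {t:.2f}")
```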

T-Test: t-tests are a statistical way of testing a hypothesis when:

• We do not know the population variance
• Our sample size is small, n < 30

In order to know how significant the difference between two groups is, a t-test is used; essentially, it tells you how likely it is that the difference (measured in means) between two separate groups could have occurred by chance.

### One-Sample t-Test

We perform a One-Sample t-test when we want to compare a sample mean with the population mean. The difference from the Z Test is that we do not have the information on Population Variance here. We use the sample standard deviation instead of the population standard deviation in this case.

Here’s an Example to Understand a One-Sample t-Test

Let’s say we want to determine if on average girls score more than 600 in the exam. We do not have the information related to variance (or standard deviation) for girls’ scores. To perform a t-test, we randomly collect the data of 10 girls with their marks and choose our ⍺ value (significance level) to be 0.05 for Hypothesis Testing.

In this example:

• Mean Score for Girls is 606.8
• The size of the sample is 10
• The population mean is 600
• Standard Deviation for the sample is 13.14

The calculated t-statistic is (606.8 − 600) / (13.14/√10) ≈ 1.64, below the critical value of 1.833 (df = 9, one-tailed ⍺ = 0.05), so our P-value is greater than 0.05. We therefore fail to reject the null hypothesis and don’t have enough evidence to support the hypothesis that on average, girls score more than 600 in the exam.
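The t-test above can be reproduced with Python's standard library; since the standard library has no t-distribution, this sketch compares the statistic with the table value 1.833 (df = 9, one-tailed ⍺ = 0.05 from a standard t-table):

```python
from math import sqrt

# One-sample t-test with the numbers from the example above
# (sample mean 606.8, n = 10, population mean 600, sample SD 13.14).
sample_mean, n = 606.8, 10
pop_mean, sample_sd = 600, 13.14

t = (sample_mean - pop_mean) / (sample_sd / sqrt(n))

# Critical t for df = 9, one-tailed alpha = 0.05 (from a standard t-table).
t_critical = 1.833
print(f"t = {t:.2f}")  # t ≈ 1.64 < 1.833 -> fail to reject H0
```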

## 3) ANOVA and F-Statistics

ANOVA, an acronym for ANalysis Of VAriance, was developed by statistician and evolutionary biologist Ronald Fisher. The F value is used in ANOVA; it is the ratio of two mean squares and is used to determine whether two or more population means are equal.

To perform an ANOVA, you must have a continuous response variable and at least one categorical factor with two or more levels.

The procedure works by comparing the variance between group means versus the variance within groups (error or noise in the experiment) as a way of determining whether the groups are all part of one larger population, or are separate (statistically different) populations with different characteristics.

The test uses the F-distribution (probability distribution) function and information about the variances of each population (within) and grouping of populations (between) to help decide if variability between and within each population are significantly different.

Example: To study the effectiveness of different diabetes medications, scientists design an experiment to explore the relationship between the type of medicine and the resulting blood sugar level. The sample population is a set of people. We divide the sample population into multiple groups, and each group receives a particular medicine for a trial period. At the end of the trial period, blood sugar levels are measured for each of the individual participants. Then for each group, the mean blood sugar level is calculated. ANOVA helps to compare these group means to find out if they are statistically different or if they are similar.

The outcome of ANOVA is the ‘F statistic’.
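A hand-rolled sketch of the F statistic for three hypothetical medicine groups (the blood sugar readings below are made-up data), following the between/within mean-square ratio described above:

```python
from statistics import mean

# Made-up blood sugar readings for three medicine groups.
groups = [
    [120, 125, 130, 128],  # medicine A
    [140, 138, 135, 142],  # medicine B
    [118, 121, 119, 122],  # medicine C
]

k = len(groups)                  # number of groups
N = sum(len(g) for g in groups)  # total observations
grand_mean = mean(x for g in groups for x in g)

# Sum of squares between group means, and within groups (error/noise).
ssb = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)

msb = ssb / (k - 1)  # mean square between groups
msw = ssw / (N - k)  # mean square within groups
f_stat = msb / msw   # the F statistic: between-group vs. within-group variance

print(f"F = {f_stat:.2f}")
```

A large F means the group means differ far more than the within-group noise would explain; the value would then be compared to the critical F for (k − 1, N − k) degrees of freedom.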