Parametric Tests: Definition and Characteristics
Parametric tests are statistical significance tests that quantify the association or independence between a quantitative variable and a categorical variable (1). Remember that a categorical variable is one that divides individuals into groups. However, this type of test requires certain prerequisites before it can be applied. What are they?
For example, let’s say that you want to compare two groups. To check whether you can apply parametric tests, you’ll first have to check whether the quantitative variable is normally distributed within each group.

In addition, you’ll have to check the homogeneity of variances in the populations from which the groups are drawn. Finally, the number of subjects, called “n” in statistics, should be greater than 30 per group. The results of hypothesis testing are also more trustworthy when the groups are balanced, that is, similar in size.
If these requirements aren’t met, you resort to non-parametric tests. If they’re met, then you can use parametric tests: the t-test for one sample or for two related or independent samples, and the ANOVA test for more than two independent samples.
Conditions for their application
Many investigations need to determine how things are related; in other words, they need to know whether their variables are associated with each other or not. In any case, a few conditions must hold before applying these tests. The requirements for using parametric tests are:
The study variable has to be numerical for parametric tests
That is, the dependent variable must be measured on a scale that’s at least interval. It’s even better if it’s a ratio scale.
Basically, the values of the dependent variable should follow a normal distribution, at least in the population from which the sample is drawn.
The normal or Gaussian distribution (named for its bell-shaped curve) is the best-studied theoretical distribution. It owes its importance mainly to the frequency with which variables associated with natural and everyday phenomena approximately follow it. Weight, or psychological traits such as IQ, are variables that researchers typically assume follow a normal distribution.
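As a minimal sketch of how the normality requirement can be checked in practice, the following uses Python’s SciPy library and the Shapiro-Wilk test on simulated IQ scores (the data and threshold of 0.05 are illustrative assumptions, not part of the original article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated IQ scores: drawn from a normal distribution, mean 100, SD 15
iq_scores = rng.normal(loc=100, scale=15, size=40)

# Shapiro-Wilk test: H0 = the sample comes from a normal distribution
stat, p_value = stats.shapiro(iq_scores)
print(f"W = {stat:.3f}, p = {p_value:.3f}")

# A p-value above the conventional 0.05 threshold means we cannot reject normality
if p_value > 0.05:
    print("No evidence against normality; parametric tests may be appropriate")
else:
    print("Normality is doubtful; consider a non-parametric test")
```

Other checks, such as Q-Q plots or the Kolmogorov-Smirnov test, serve the same purpose; Shapiro-Wilk is simply a common choice for moderate sample sizes.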
Homoscedasticity (homogeneity of variances) between the two compared groups
The variances of the dependent variable in the groups being compared should be more or less equal. Therefore, it’s necessary to know whether this homogeneity of variances holds, since the formula used in the comparison of means depends on it. Some tests that allow you to check this homogeneity of variances are:
- Levene’s test
- Fisher’s F test
- Hartley’s Fmax test
- Bartlett’s test
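Two of the tests above are available directly in SciPy. The following sketch compares Levene’s and Bartlett’s tests on two simulated groups with similar spread (group sizes, means, and standard deviations are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Two hypothetical groups with the same true standard deviation
group_a = rng.normal(loc=50, scale=10, size=35)
group_b = rng.normal(loc=55, scale=10, size=35)

# Levene's test: robust to departures from normality
lev_stat, lev_p = stats.levene(group_a, group_b)
# Bartlett's test: more powerful, but assumes the data are normal
bar_stat, bar_p = stats.bartlett(group_a, group_b)

print(f"Levene:   p = {lev_p:.3f}")
print(f"Bartlett: p = {bar_p:.3f}")
# p > 0.05 in both cases would mean no evidence against equal variances
```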
The sample “n”
The “n” is the sample size. In this case, it can’t be less than 30 per group, and the estimate improves the closer the sample gets to the size of the whole population.
Therefore, the larger the sample, the more accurate the estimate will be. Conversely, the smaller the sample, the more distorted the sample mean will be by extreme outliers.
Types of parametric tests
Depending on the type of contrast, different tests apply:

- One sample: one-sample t-test
- Two independent samples: t-test for two independent samples
- Two related samples: t-test for related data
- More than two independent samples: ANOVA
One sample t-test
The one-sample t-test checks whether the mean of a population differs significantly from a given known or hypothesized value. The procedure typically reports descriptive statistics for the test variable alongside the t statistic itself.
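A minimal sketch of a one-sample t-test with SciPy, using simulated reaction times tested against a hypothesized mean of 300 ms (all values here are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical reaction times (ms); is the population mean different from 300 ms?
sample = rng.normal(loc=310, scale=20, size=30)

t_stat, p_value = stats.ttest_1samp(sample, popmean=300)
print(f"mean = {sample.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the population mean differs from the hypothesized 300 ms
```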
T-test for two independent samples in parametric tests
Researchers use this test when the comparison is between the means of two independent populations. That is, the individuals of one of the populations are different from the individuals of the other. An example of this is a comparison between males and females.
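A sketch of this comparison in SciPy, on simulated scores for two independent groups (group labels, sizes, and score distributions are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical test scores for two independent groups
males = rng.normal(loc=72, scale=8, size=32)
females = rng.normal(loc=75, scale=8, size=34)

# equal_var=True assumes homogeneity of variances was checked beforehand;
# use equal_var=False (Welch's t-test) when the variances differ
t_stat, p_value = stats.ttest_ind(males, females, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Note that the two groups don’t need to be the same size, although balanced groups make the test more robust.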
T-test for two related samples
This test is another alternative for contrasting two means. It mainly refers to the case in which the two populations aren’t independent. In this case, you’re dealing with populations that are related to each other. This situation occurs, for example, when experts observe a group of individuals before and after a given intervention.
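The before/after situation above can be sketched with SciPy’s paired t-test on simulated scores for the same group of people (the scores and the size of the simulated improvement are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical anxiety scores for the same 30 people, before and after an intervention
before = rng.normal(loc=60, scale=10, size=30)
after = before - rng.normal(loc=5, scale=4, size=30)  # simulated improvement

# The paired t-test operates on the within-subject differences
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because each “after” value is paired with its own “before” value, the two arrays must have the same length and order.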
The ANOVA test for more than two independent samples
In the case of comparing more than two samples, we’ll have to resort to analysis of variance or ANOVA. This is a statistical test that simultaneously compares the means of more than two populations.
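A minimal sketch of a one-way ANOVA with SciPy, comparing three simulated independent groups (the group sizes and means are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Three hypothetical independent groups (e.g. three teaching methods)
g1 = rng.normal(loc=70, scale=9, size=31)
g2 = rng.normal(loc=74, scale=9, size=33)
g3 = rng.normal(loc=78, scale=9, size=32)

# One-way ANOVA: H0 = all group means are equal
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, at least one mean differs; a post-hoc test (e.g. Tukey's HSD)
# would be needed to identify which groups differ
```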
These tests are very common in psychology research, and they’re often misused. Therefore, you must always keep their prerequisites in mind: they tell you whether you can use parametric tests or whether you must resort to non-parametric ones.
- Hurtado, M. J. R., & Silvente, V. B. (2012). Cómo aplicar las pruebas paramétricas bivariadas t de Student y ANOVA en SPSS. Caso práctico. REIRE, 5(2).
- Ferrán Aranaz, M. (2002) Curso de SPSS para Windows. Madrid: McGraw-Hill.
- Pérez Juste, R., García Llamas, J.L., Gil Pascual, J.A. y Galán González, A. (2009) Estadística aplicada a la Educación. Madrid: UNED – Pearson.