Important Questions for IGNOU MAPC MPC006 Exam with Main Points for Answer - Block 1 Unit 1 Parametric and Nonparametric Statistics
Have you prepared these very important questions from Block 1 Unit 1 for IGNOU MAPC MPC006 Exam? Don't miss this chance to score good marks - get started!
Block 1 Unit 1 Parametric and Nonparametric Statistics
1) Define parametric statistics.
Parametric statistical tests, like the t-test and F-test, are used to analyse data that meet certain conditions. These tests are considered more powerful for determining the significance of sample statistics but rely on assumptions about the population and measurement scale used. One key assumption is that the data comes from a normally distributed population. This means the frequency distribution of the data follows a bell-shaped curve that is symmetrical and infinite at both ends. Additionally, parametric tests assume that variables are measured on an interval or ratio scale. Interval scales have equal distances between values, while ratio scales also have a true zero point. If these assumptions are not met, the results of the parametric tests may not be reliable.
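A parametric test such as the t-test can be sketched as follows. This is an illustrative example only, using hypothetical interval-scale scores and the widely used `scipy.stats` library; the group names and values are assumptions, not from the unit itself.

```python
import numpy as np
from scipy import stats

# Hypothetical interval-scale scores for two independent groups
rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=10, size=30)
group_b = rng.normal(loc=55, scale=10, size=30)

# Independent-samples t-test: assumes normally distributed populations
# and (by default) equal variances, as described above
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

If the resulting p-value is below the chosen significance level, the difference between the two sample means is taken as significant, provided the assumptions above actually hold.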
2) Discuss non-parametric statistics.
Non-parametric statistics are a class of statistical methods that do not require the data to fit any particular distribution or make stringent assumptions about the population. They are also known as "distribution-free" tests. Unlike parametric tests, which usually analyse data expressed in absolute numbers or values, non-parametric tests often work with data that are ranked or grouped. For example, the Spearman's rank correlation coefficient and the Chi-square test are non-parametric tests.
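As a minimal sketch of a non-parametric test on ranked data, Spearman's rank correlation can be computed with `scipy.stats.spearmanr`. The two judges and their rankings are hypothetical, chosen only to show that the test works directly on ranks.

```python
from scipy import stats

# Hypothetical ranks assigned by two judges to six contestants
judge1 = [1, 2, 3, 4, 5, 6]
judge2 = [2, 1, 4, 3, 6, 5]

# Spearman's rho operates on the ranks themselves; no assumption
# about the underlying distribution is needed
rho, p = stats.spearmanr(judge1, judge2)
print(round(rho, 3))
```

With no tied ranks this matches the textbook formula rho = 1 - 6Σd²/(n(n² - 1)).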
3) State the assumptions of parametric statistics. / List the assumptions on which the use of parametric tests is based.
Parametric tests, such as the t and F tests, may be used for analysing data that satisfy the following conditions:
- The population from which the samples have been drawn should be normally distributed. A normal distribution is a frequency distribution following the normal curve, which is symmetrical and extends infinitely at both ends.
- The variables involved must have been measured on an interval or ratio scale.
- The observations must be independent: the selection of one case in the sample is not dependent upon the selection of any other case, and the inclusion or exclusion of any case should not unduly affect the results of the study.
- The populations must have the same variance or, in special cases, a known ratio of variances. This is called homoscedasticity.
- The samples have equal or nearly equal variances. This condition, known as equality or homogeneity of variances, is particularly important to verify when the samples are small.
4) What are the advantages of non-parametric statistics?
- If the sample size is very small, there may be no alternative except to use a nonparametric statistical test.
- Non-parametric tests typically make fewer assumptions about the data, so they are more likely to be applicable in a particular situation.
- The hypothesis tested by the non-parametric test may be more appropriate for research investigation.
- Non-parametric statistical tests are available to analyse data which are inherently in ranks as well as data whose seemingly numerical scores have the strength of ranks.
- For example, in studying a variable such as anxiety, we may be able to state that subject A is more anxious than subject B without knowing at all exactly how much more anxious A is. Thus if the data are inherently in ranks, or even if they can be categorised only as plus or minus (more or less, better or worse), they can be treated by non-parametric methods.
- Non-parametric methods are available to treat data which are simply classificatory and categorical, i.e., are measured in nominal scale.
- Samples made up of observations from several different populations sometimes cannot be handled by parametric tests, whereas non-parametric tests can often accommodate them.
- Non-parametric statistical tests typically are much easier to learn and to apply than are parametric tests. In addition, their interpretation often is more direct than the interpretation of parametric tests.
5) Differentiate between parametric and non-parametric statistics.
Here are some key differences between parametric and non-parametric statistics:
- Assumptions: Parametric tests make more assumptions about the population distribution and data measurement scale than non-parametric tests. Non-parametric tests are more flexible and make fewer assumptions, making them applicable to a wider range of data types.
- Data Type: Parametric tests typically require interval or ratio data, while non-parametric tests can be used with nominal, ordinal, interval, or ratio data.
- Sample Size: Non-parametric tests are often suitable for small sample sizes, where the assumption of normality may not hold. Parametric tests generally perform better with larger sample sizes.
- Power: Parametric tests are more powerful for detecting significant effects when their assumptions are met. This means they have a higher probability of correctly rejecting a false null hypothesis. However, when the assumptions of parametric tests are not met, non-parametric tests can be equally or more powerful.
- Interpretation: The interpretation of parametric tests often involves estimating population parameters, like the mean or standard deviation. Non-parametric tests often focus on the rank ordering or frequencies of data, making their interpretations more focused on the relationships or differences between groups.
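The contrast above can be seen side by side by running a parametric test and its non-parametric counterpart on the same skewed data. The data here are hypothetical; the Mann-Whitney U test is used as a standard rank-based alternative to the independent-samples t-test.

```python
import numpy as np
from scipy import stats

# Skewed (exponential) data, so the normality assumption is doubtful
rng = np.random.default_rng(7)
a = rng.exponential(scale=1.0, size=20)
b = rng.exponential(scale=2.0, size=20)

t_p = stats.ttest_ind(a, b).pvalue        # parametric: compares means
u_p = stats.mannwhitneyu(a, b).pvalue     # non-parametric: compares rank orderings
print(f"t-test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}")
```

With small, skewed samples like these, the rank-based test's assumptions are satisfied while the t-test's are not, which is exactly the situation described in the comparison above.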
6) Describe the characteristics of Central Limit Theorem.
The Central Limit Theorem (CLT) is a fundamental concept in statistics that describes the behaviour of sample means, particularly when the sample size is large. Here are its key characteristics:
- The distribution of sample means will be normally distributed, even if the original population is not normally distributed. This means that if we repeatedly draw samples of equal size from any population and calculate the mean of each sample, the frequency distribution of these means will tend to follow a bell-shaped curve.
- The average value of the sample means (the mean of the sampling distribution) will be the same as the mean of the population. This implies that the sample mean is a reliable estimator of the population mean, especially as the sample size increases.
- The distribution of sample means will have its own standard deviation, known as the standard error of the mean (SEM). The SEM quantifies the variability of sample means and indicates how much the sample means are likely to deviate from the population mean. The formula for the SEM in large samples (N > 30) is SEM = σ / √N, where σ is the standard deviation of the population and N is the sample size.
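The three characteristics above can be demonstrated with a short simulation. The population here is deliberately non-normal (exponential, hence right-skewed); repeatedly drawing samples of equal size and averaging them still produces means that cluster around the population mean with spread close to σ/√N:

```python
import numpy as np

rng = np.random.default_rng(1)
# A clearly non-normal (right-skewed) population
population = rng.exponential(scale=2.0, size=100_000)

n = 50  # size of each sample (N > 30, so the large-sample SEM formula applies)
means = np.array([rng.choice(population, size=n).mean() for _ in range(2000)])

print(f"population mean     = {population.mean():.3f}")
print(f"mean of sample means = {means.mean():.3f}")        # ~ population mean
print(f"SD of sample means   = {means.std():.3f}")         # ~ sigma / sqrt(n)
print(f"sigma / sqrt(n)      = {population.std() / np.sqrt(n):.3f}")
```

Plotting `means` as a histogram would show the familiar bell shape even though the population itself is skewed.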
7) Define the standard error of mean.
The distribution of sample means will have its own standard deviation. This standard deviation is known as the ‘standard error of the mean’, denoted as SEM or σM.
It gives us a clue as to how far such sample means may be expected to deviate from the population mean. The standard error of a mean tells us how large the errors are in any particular sampling situation.
The formula for the standard error of the mean in a large sample is:
SEM or σM = σ / √N
Where: σ = the standard deviation of the population and N = the size of the sample
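As a worked example of this formula (the numbers are hypothetical, loosely modelled on IQ scores with σ = 15):

```python
import math

sigma = 15.0   # hypothetical population standard deviation
N = 100        # sample size (large, N > 30)

# Standard error of the mean: SEM = sigma / sqrt(N)
sem = sigma / math.sqrt(N)
print(sem)  # 1.5
```

So for samples of 100 cases, sample means are expected to deviate from the population mean by about 1.5 points on average; quadrupling N to 400 would halve the SEM to 0.75.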