
Variability

Variability (as the name suggests) shows how much values differ from one another. When all values are very close to the mean, we have little variability or diversity.

The most commonly used measure of variability is the variance (or the standard deviation, which is the positive square root of the variance). Besides these, the range can also indicate the amount of variability within a group of data.


I guess the formula for the sample variance is well known, but let's show it again:

s^2 = Σ(x_i - x̄)^2 / (n - 1)

where x̄ is the sample mean and n is the number of observations.

Suppose we had the following measurement data: {5.2, 5.3, 4.95, 5.17, 5.22}.
The variability among these values looks small by inspection, and indeed the calculated variance s^2 = 0.01717 is clearly a small value.

But let's look at this group of data: {30, 45, 39, 28, 42}.
Here the variability looks large by inspection, and the calculated variance s^2 = 55.7 is clearly a large value.
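As a quick check, here is a small Python sketch that reproduces both variance values using the standard library's `statistics.variance`, which applies the sample variance formula with the n - 1 denominator:

```python
# Verify the two variance values above with the sample variance
# formula s^2 = sum((x_i - mean)^2) / (n - 1).
from statistics import variance  # sample variance (n - 1 denominator)

small_spread = [5.2, 5.3, 4.95, 5.17, 5.22]
large_spread = [30, 45, 39, 28, 42]

print(round(variance(small_spread), 5))  # small value: 0.01717
print(round(variance(large_spread), 1))  # large value: 55.7
```

Note that `statistics.pvariance` instead divides by n (the population variance); the values in this post use the sample version.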
