
Posts

Showing posts from 2014

Confidence Level and Confidence Interval

Being confident makes one feel more reassured. Briefly, the explanations below are for two-sided confidence levels/intervals, in order to simplify the idea. Saying "two-sided" gives the initial impression that there are two limits, and indeed there are: an upper and a lower limit, with the confidence interval lying in between. Example: let's look at the population of a specific mobile phone model. Suppose we are now interested in the 'weight' property. We found that the weight property follows a normal distribution with a mean value of 120 grams and a standard deviation of 1.4 grams: Weight ~ Normal (Mu, Sigma) = Normal (120, 1.4). This means that the majority of phones tested will weigh very close to 120 grams. Yes, there will be fluctuations above and below the mean value, but they still stay relatively close to it. Suppose a question: do you expect weights like 121, 119.5, 122.1, 118.9? Answer: yes, I surely expect such…
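The phone-weight example can be sketched with Python's standard library; a minimal sketch, assuming the Normal(120, 1.4) population from the text and a two-sided 95% confidence level:

```python
from statistics import NormalDist

# Hypothetical phone-weight population from the example: Normal(120, 1.4)
weight = NormalDist(mu=120, sigma=1.4)

# Two-sided 95% confidence level: the central interval holding 95% of the
# probability mass, i.e. 2.5% is cut off in each tail.
z = NormalDist().inv_cdf(0.975)          # ~1.96, standard normal quantile
lower = weight.mean - z * weight.stdev   # lower limit
upper = weight.mean + z * weight.stdev   # upper limit
print(f"95% of weights fall in [{lower:.2f}, {upper:.2f}] grams")

# Probability of seeing a weight between 118.9 and 121 grams
p = weight.cdf(121) - weight.cdf(118.9)
print(f"P(118.9 <= weight <= 121) = {p:.3f}")
```

Weights like 121 or 118.9 grams fall well inside this interval, which is why we "surely expect" them.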

Conclusions of Hypothesis Testing

A general hypothesis is defined as follows (e.g. a hypothesis on the population mean): H0: Mu = Mu0, H1: Mu != Mu0. OK, whether we have a two-sided or a one-sided hypothesis, after performing the checks and statistical tests our conclusion should be one of the following: rejecting the null hypothesis (H0), or failing to reject the null hypothesis (H0). The following conclusion statements are not accurate: accepting the null hypothesis (H0), or accepting the alternative hypothesis (H1). But why? When we fail to reject H0, it does not mean we accept H0 as a fact, because we still could not prove it to be a fact. What happened is that we failed to prove it false. It goes like this: we suspected that new factors may have affected the population mean, so we gathered all possible evidence and ran the checks, but all the checks failed to support our suspicion. Likewise, rejecting H0 does not mean accepting H1 as a fact. What happens in this case is we p…
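The two valid conclusions can be captured in a tiny helper; a sketch only, where the p-value and the significance level alpha are hypothetical inputs, and the point is the wording of the two outcomes:

```python
def conclude(p_value: float, alpha: float = 0.05) -> str:
    """Return one of the only two statistically sound conclusions.

    Note we never say 'accept H0' or 'accept H1': failing to reject H0
    only means the evidence was not strong enough to prove it false.
    """
    if p_value < alpha:
        return "Reject H0 (evidence supports H1 at this significance level)"
    return "Fail to reject H0 (could not prove H0 false)"

print(conclude(0.012))  # small p-value: reject H0
print(conclude(0.40))   # large p-value: fail to reject H0
```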

Null and Alternative Hypotheses

Fine... Constructing a statistical hypothesis mainly means defining what are called the null and the alternative hypotheses. In academic life, students are mostly given the hypothesis to test. But in research or real experiments, constructing the hypotheses correctly is a vital step toward inference or a statistical decision. Let's focus on hypotheses for the population mean, for simplicity... The null hypothesis is mainly our initial information or primary belief about the population. Let's consider the production of 500 ml bottles, where 500 ml is the mean bottle capacity. Since this is the information we know from previous knowledge, it will be our null hypothesis. We write: H0: Mu = 500. Note: the null hypothesis always has the equality sign! A hypothesis test is usually done when we worry that some factors have affected the population, or that the population has changed for some reason. OK, so the null hypothesis is the information we know primarily…
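Testing H0: Mu = 500 could look like the following; a minimal sketch assuming a hypothetical sample of measured capacities and a known population standard deviation (a one-sample two-sided z-test):

```python
from statistics import NormalDist, mean

# Hypothetical sample of measured bottle capacities (ml)
sample = [499.2, 500.4, 498.8, 500.1, 499.5, 500.9, 499.0, 499.7]
mu0, sigma = 500.0, 1.0          # H0: Mu = 500; sigma assumed known

n = len(sample)
x_bar = mean(sample)
z = (x_bar - mu0) / (sigma / n ** 0.5)        # test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

print(f"x_bar = {x_bar:.3f}, z = {z:.3f}, p = {p_value:.3f}")
```

With such a large p-value we would fail to reject H0: the sample gives no evidence that the mean capacity has drifted from 500 ml.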

Understanding the distribution of sample mean (x_bar)

Cool, say now we have a huge population with characteristics (Mu, Sigma^2). When doing a study by sampling, we take a random sample (of size n items), perform the study on the sample, and carry the results back to the population. From the Central Limit Theorem, we know that the sample mean will (for a large enough sample) approximately follow a normal distribution, regardless of what the population distribution is, such that: x_bar ~ N (Mu, Sigma^2/n), or equivalently: Expected (x_bar) = Mu, Variance (x_bar) = Sigma^2/n. Well, let's see a simple illustrating example. Suppose we have a population with mean Mu = 100. Now, we take a sample and compute the sample mean, x_bar. We will mostly get an x_bar near 100 but not exactly 100. OK, let's take another 9 separate samples, and suppose these results: First sample --> x_bar = 99.8, Second sample --> x_bar = 100.1, ... , 10th sample --> x_bar = 100.3. What we see is that the sample mean is usually close to the real population mean, that…
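This behaviour is easy to check by simulation; a sketch with hypothetical parameters (Mu = 100, Sigma = 5, n = 25) that draws many samples and compares the mean and variance of x_bar to the theory:

```python
import random
from statistics import mean, variance

random.seed(42)  # for repeatability

# Hypothetical population parameters and sample size
mu, sigma, n = 100.0, 5.0, 25

# Draw 2000 samples of size n and record each sample mean x_bar
x_bars = [mean(random.gauss(mu, sigma) for _ in range(n))
          for _ in range(2000)]

print(f"mean of x_bar     = {mean(x_bars):.2f}  (theory: {mu})")
print(f"variance of x_bar = {variance(x_bars):.3f}  (theory: {sigma**2 / n})")
```

The simulated variance of x_bar comes out near Sigma^2/n = 25/25 = 1, much smaller than the population variance of 25: larger samples pin the sample mean closer to Mu.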

The Fact and the Hypothesis

A good fact to start from is that we can't easily know the exact true values/parameters of a population. Moreover, population parameters tend to change slightly over time and/or be affected by various surrounding factors. Example: a production line for 500 ml bottles is assumed to produce a population of bottles whose mean capacity is exactly 500 ml. Nice, but what happens in reality? In reality, several factors will affect the production: human factors, machine factors, environment temperature, etc. Also, each new bottle contributes to the population mean value. This means a continuous slight change, either up or down, of the mean capacity. Here comes the hypothesis! As you see, the ground-truth value of the population mean is difficult to determine exactly. However, we have general assumptions/expectations. OK, constructing a hypothesis should always be driven by our initial knowledge of and expectations about the population. Tes…

Standard Normal Distribution, what does Z mean?

You mostly know this: the Standard Normal Distribution is the special case of the Normal Distribution with mean Mu = 0.0 and variance Sigma^2 = 1.0. Cool. The family of normal distributions varies mainly in mean value and/or variance, and the standard one plays the role of the reference distribution. Well, we can convert any normal random variable to its corresponding interpretation in the standard form, and hence simplify many computations using only the standard normal distribution. OK, let's assume X is normally distributed with mean Mu and variance Sigma^2. We can convert it to a standard normally distributed random variable as follows: Z = (X - Mu) / Sigma. Here, Z follows the standard normal distribution. OK, but what does this mean? Any point of X (with Mu, Sigma) can be treated exactly as the converted point of Z with mean Mu = 0.0 and Sigma = 1.0. The numerator means: how much is the distance or difference between X and th…
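The conversion can be checked numerically; a sketch reusing the hypothetical phone-weight example, X ~ Normal(120, 1.4):

```python
from statistics import NormalDist

# Hypothetical example: X ~ Normal(120, 1.4), and one observed point x
mu, sigma = 120.0, 1.4
x = 122.1

# Standardize: how many standard deviations x lies from the mean
z = (x - mu) / sigma
print(f"z = {z:.2f}")

# The probabilities agree: P(X <= x) equals P(Z <= z)
p_x = NormalDist(mu, sigma).cdf(x)
p_z = NormalDist().cdf(z)
print(f"P(X <= {x}) = {p_x:.4f} = P(Z <= {z:.2f}) = {p_z:.4f}")
```

Here x = 122.1 sits z = 1.5 standard deviations above the mean, and both distributions assign it the same cumulative probability, which is exactly why one standard table suffices for the whole family.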

Normal Probability Distribution

Also called the Gaussian distribution. OK, many things in this world tend, and should tend, to be normally distributed. Any distribution is a representation of how the information or data is distributed. We mainly look for its central tendency (mean) and variability (variance). That's why the normal distribution is usually written as: X ~ N (Mu, Sigma^2). For example, the weight of most adult (still young) people will normally be centered around some value. Yes, you're right, there is diversity: some are slim and some are obese. We may expect the average weight for people aged 20 to 30 to be between 70 and 74 kg. OK, let's take it as 72 kg (this is the mean value). Let x represent the weight of a random person. Thus, Expected Value [x] = mean [x] = Mu = 72 kg. If we have a sample, we can compute the variance (Sigma^2) to indicate variability. We may think of it as follows: Variance = Sigma^2 = Expected Value [(x - Mu)^2]. Stan…
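The definition Variance = E[(x - Mu)^2] can be computed directly; a sketch on a hypothetical sample of adult weights around the assumed 72 kg mean:

```python
from statistics import mean, pvariance

# Hypothetical sample of adult weights (kg) around the assumed mean of 72
weights = [70.5, 73.2, 71.8, 74.0, 72.5, 70.9, 73.6, 71.5]

mu_hat = mean(weights)                              # estimate of Mu
# Variance as the expected (average) squared deviation from the mean:
var_hat = mean((w - mu_hat) ** 2 for w in weights)  # E[(x - Mu)^2]

print(f"mean = {mu_hat:.2f} kg, variance = {var_hat:.4f} kg^2")

# statistics.pvariance implements this same formula
assert abs(var_hat - pvariance(weights, mu_hat)) < 1e-9
```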

Variability

Variability (as the name implies) shows how much values/elements differ from each other. When all values are very close to the mean value, we have little variability or diversity. The most commonly used variability measure is the variance (or the standard deviation, which is the positive square root of the variance). Besides that, the range can also indicate the amount of variability within a group of data. I guess the formula for computing variability is known, but let's apply it again. Suppose we had the following measurement data: {5.2, 5.3, 4.95, 5.17, 5.22}. The diversity/variability within the values seems small by inspection; calculating the variance gives s^2 = 0.01717, which is clearly a small value. But let's see this group of data: {30, 45, 39, 28, 42}. The diversity/variability within the values seems large by inspection; calculating the variance gives s^2 = 55.7, which is clearly a large value.
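Both variances from the text can be reproduced with the standard library; `statistics.variance` computes the sample variance s^2 (dividing by n - 1):

```python
from statistics import variance

# The two data groups from the text
small_spread = [5.2, 5.3, 4.95, 5.17, 5.22]
large_spread = [30, 45, 39, 28, 42]

s2_small = variance(small_spread)  # sample variance, divides by n - 1
s2_large = variance(large_spread)

print(f"s^2 = {s2_small:.5f}  (little variability)")
print(f"s^2 = {s2_large:.1f}  (large variability)")
```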

Central Tendency

Any population is usually best described by its central tendency and variability measures. Well, these parameters give the best interpretation of the information on a specific property under study. Central tendency measures the value that most of the data/elements are centered around or grouped at. This means data values tend to be like/close to this value. The most common way to describe central tendency is the mean value, also called the expected value, or the average value (usually for samples). There are more methods to indicate central tendency, such as the mode and the median. The average (or mean) of a sample of data is just the sum divided by the data count. The mode, on the other hand, is the most repeated/frequent value within our data, while the median represents the middle value when the data group is sorted in ascending order. Example: water (or juice) bottle production is a population where we can think of the bottle capaci…
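All three measures are one-liners in Python; a sketch on a hypothetical sample of bottle capacities (ml):

```python
from statistics import mean, median, mode

# Hypothetical sample of bottle capacities (ml) from a production line
capacities = [499, 500, 500, 501, 500, 498, 502, 500, 499]

print(f"mean   = {mean(capacities):.2f}")  # sum divided by the count
print(f"mode   = {mode(capacities)}")      # most frequent value
print(f"median = {median(capacities)}")    # middle value after sorting
```

All three land at (or very near) 500 ml here, which is what we expect when the data is grouped tightly around one central value.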

Sample size and measurements accuracy

It seems obvious that a larger sample size gives results closer to what happens in reality. Cool, so if it's easy to select a large number of elements for the sample, then we should. A sample of 30 to 40 elements is usually great to have. We should increase the sample size whenever we: can easily handle/select sample items (randomly); can perform the measurements with little effort/time consumption. Notice that these depend heavily on the nature and circumstances of the problem or population. Another point to keep in mind is to measure the sample as accurately as possible. In conclusion, two important factors should be regarded: a good sample size, as large as possible, which guarantees more information diversity in our sample and thus comes closer to describing the whole population; and accurate measuring tools/equipment when measuring the required property of the sample, because these measurements will judge the next steps. This requirement becomes more…

How to be "random" in sample selection?

Well, there are no fixed criteria in statistics. Yes, there exist many theories and methodologies for creating random sequences or numbers. However, the choice depends on the population nature, circumstances, and surrounding environment. In general, we may think as follows: having many persons/machines share in the sample selection is better than having it done by only one, since diversity leads to better randomness. Selecting at different times/situations/places is better than doing it all at once, to get more randomness. Changing the methodology/medium of selection may help. Combining two or more random samples creates a better random sample. Anyway, since our goal is to get accurate inferences, we should try to be as random as possible in sample selection.
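When the population can be enumerated, simple random sampling is one concrete way to follow this advice; a sketch assuming a hypothetical population of 10,000 serial-numbered bottles:

```python
import random

# Hypothetical population: serial numbers of 10,000 produced bottles
population = list(range(1, 10_001))

# random.sample draws without replacement, giving every item an equal
# chance of selection - simple random sampling
sample = random.sample(population, k=35)
print(sorted(sample))

# Combining two independently drawn samples (duplicates removed) is
# another option mentioned above
combined = set(random.sample(population, 20)) | set(random.sample(population, 20))
print(len(combined))
```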

The "Sample"

Anytime you aim to perform a study on an entire population, you will surely find that the task is: very time- and/or effort-consuming, as populations are normally huge; impossible, if the population is infinite (such as ongoing production). Here comes the role of taking samples. Yes! We just take a sample from the whole population, perform the study on the chosen sample, and apply the results back to our population. This is the core of inferential statistics, because what we do is infer parameters/properties of the population using information from a small sample. Well, this does not mean we will obtain 100% exact, accurate estimations or inferences. But to come as close as possible, sample elements should be taken randomly! At the very least, random sample selection will tend to capture the diversity of information/facts within our population.

Population examples

Whenever a university wants to do a study on its entire student body, the entire set of students is the intended population. That does not depend on what we are going to study: students' ages, discipline trends, performance, etc. In industry, a company that produces pens of a specific type usually wants to re-evaluate its production. Here, its population is the entire production of pens, which is obviously infinite. The study may aim to examine major properties such as length, shape, weight, lifetime, etc. OK, let's look at an environmental example. The ministry of environment in some city regularly checks the pollutant concentration in a running river. Based on the results, it advises people whether or not to use the river's water, e.g. for swimming, washing, or irrigation. The whole amount of water is the population that should be studied here.

The "Population"

We are used to saying "population" to mean the whole count of people in a country. Yeah, for example, the population of Singapore is in the range of 5 to 6 million. Cool, what happens in statistics is that we also call the whole count or entire amount of things under study a "population". Example: for a company that produces pens, the whole amount of "pens" produced, and to be produced in the future, is considered our "population". You see? Yep, a "population" is usually a large count/amount of items. It could be infinite as well. So, can you think of several examples of "populations"?

Welcome to test of hypothesis

Hello all! This will be the blog for understanding statistical hypothesis testing! Course on Udemy: https://www.udemy.com/test-of-hypotheses/?couponCode=code-0-50 Now $0.0 (limited promotion). Thanks