
Type I and Type II Errors

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Roadmap 0:18
    • Roadmap
  • Errors and Relationship to HT and the Sample Statistic? 1:11
    • Errors and Relationship to HT and the Sample Statistic?
  • Instead of a Box…Distributions! 7:00
    • One Sample t-test: Friends on Facebook
    • Two Sample t-test: Friends on Facebook
  • Usually, Lots of Overlap between Null and Alternative Distributions 16:59
    • Overlap between Null and Alternative Distributions
  • How Distributions and 'Box' Fit Together 22:45
    • How Distributions and 'Box' Fit Together
  • Example 1: Types of Errors 25:54
  • Example 2: Types of Errors 27:30
  • Example 3: What is the Danger of the Type I Error? 29:38

Transcription: Type I and Type II Errors

Hi, and welcome to www.educator.com.

Today we are going to talk more in depth about Type I and Type II errors.

If you want to know more about power and effect size, it is good to go through this lesson, because it will help you understand some of the pictures that we are going to draw in the future.

Here is the roadmap for today.

We need to know about Type I and Type II errors, but we also need to know when we make those errors in relation to hypothesis testing. So far we have only used the t-test as our hypothesis test.

We have shown these errors and their relationship to hypothesis testing before as a box, but frequently in hypothesis testing we draw distributions, the SDOM (sampling distribution of the mean) to be more specific. I want to show you how the errors fit into this distribution picture.

We are also going to see how the box and the distributions fit together, because these two things are related to each other. They refer to the same concept; they are just two different ways of showing that same concept.

We go through hypothesis testing, but in the real world there is some reality: either the null hypothesis is true or the null hypothesis is false. We do not know this reality; all we know is the result of our hypothesis test.

There are two ways we can make errors. We can make an incorrect decision by false alarming: we reject the null, but we should not have rejected the null. That is called a false alarm, or a Type I error.

I used to get confused about which one is Type I and which is Type II; the labels are arbitrary. I like to think of the false alarm as the more serious error, because rejecting the null hypothesis is the more extreme thing you can do. A false alarm is actually more dangerous than a miss; a miss is not as much of an error as false alarming. That is how I remember it: the Type I error is the number one error you should look out for.

The Type I error rate is often also called the likelihood of false alarming. The probability of false alarming is referred to as alpha. If the reality that we do not know is that the null hypothesis is true, we have a probability of false alarming at the rate alpha. We also have the probability of failing to reject when we should not have rejected; that is a correct failure, and its probability is 1 − alpha. These two probabilities add up to 1: the probability of a false alarm plus the probability of a correct failure equals 1.

On the flip side, suppose the null hypothesis is false; it is not a true picture or model of the world. Then we really should reject it. If it is not true and we reject it, that is a correct decision, called a hit: we reject the null when we should have rejected it. That gives us the probability of a hit.

We could also be incorrect and fail to reject when we should have rejected; that is the other incorrect decision. That is the Type II error. It is a miss, and the probability of a miss is written as beta. Beta + (1 − beta) = 1: the probability of a miss plus the probability of a hit equals 1.
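
Written out as conditional probabilities, the four cells of that box are just a compact restatement of what was said above:

```latex
\begin{align*}
P(\text{reject } H_0 \mid H_0 \text{ true})          &= \alpha     && \text{(false alarm, Type I)}\\
P(\text{fail to reject } H_0 \mid H_0 \text{ true})  &= 1 - \alpha && \text{(correct failure)}\\
P(\text{fail to reject } H_0 \mid H_0 \text{ false}) &= \beta      && \text{(miss, Type II)}\\
P(\text{reject } H_0 \mid H_0 \text{ false})         &= 1 - \beta  && \text{(hit)}
\end{align*}
```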

In which of these boxes is the sample statistic statistically significant? In which of these boxes is our p-value less than .05, or whatever our alpha level is? Let us think about that.

When we reject the null hypothesis, that means our test statistic, in this case t, is extreme. Our p-value is significant, and remember, by significant we mean that it stands out; it is very weird. In this case, the two quadrants up here are the ones we need to worry about; those are the decisions in play when we reject the null hypothesis. The other possibility is that when we reject the null hypothesis and our p is significant, we have made a correct decision. Those are our two choices if we know that p is less than alpha, or that our test statistic is extreme.

Here, p is not significant. It is not too weird, and because of that we will fail to reject; we can be correct in failing to reject, or we could be wrong and make a Type II error.

Here is what I want you to know. Let us say we carry out hypothesis testing and I get a really low p-value, so I am going to reject my null hypothesis. Which error am I likely to make, a false alarm or a miss? Since I rejected my null, the only error I could possibly make is the one where I reject the null and am wrong: a false alarm.

Now let us say I go through my hypothesis testing and I get p = .4, so I do not reject my null. What mistake, what error, could I possibly have made? The only error I can make is a miss: here I fail to reject, and I could be wrong in doing so.
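
Here is a minimal sketch of that decision logic in Python; the helper name and the example p-values are made up for illustration:

```python
# A sketch of the decision rule discussed above: once you know your decision,
# only one kind of error remains possible.
def decision_and_possible_error(p_value, alpha=0.05):
    if p_value < alpha:
        # We reject the null, so the only way to be wrong is a false alarm.
        return "reject the null", "Type I error (false alarm)"
    # We fail to reject, so the only way to be wrong is a miss.
    return "fail to reject the null", "Type II error (miss)"

print(decision_and_possible_error(0.003))  # reject -> could only be a Type I error
print(decision_and_possible_error(0.4))    # fail to reject -> could only be a Type II error
```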

Let us talk about distributions and how errors fit in here. For a one-sample t-test, we set up some null population. This is our null hypothesis population, and our hypothesized mu might be 230. We do not know whether our sample is part of this population or part of some other population that is not the null population. We can hypothesize that maybe it comes from some other population, like this one.

When we set our alpha level and create the critical t and the zones of rejection and all of that, what we are doing is drawing this line. If our sample t is outside of it, then we are going to reject the null. So far we have only colored in this part, but we really mean this part as well as all of this part. That is our reject-the-null zone, this entire area.

In order to find out whether we should reject the null or not, we need to look past the raw score and look at it in terms of the critical t. The critical t might be something like −2-point-something. We also need to find the sample's t value, so I am just going to make one up; let us say this t value is 5.5, and if our t value is sufficiently extreme then we reject our null hypothesis. This would be our critical t, this is our sample x-bar, and this is our sample t.

And that is how it looks out here. Our possibility of making an error is this little gray spot that I have colored in red. There is some chance that my sample really does come from the null population, in which case I should not have rejected the null. Like flipping 50 heads in a row, it is very unlikely, but it is still possible. It is still possible that I got this x-bar even though this is the true population distribution.

This is my possibility of making a Type I error. We actually have to add this side to this side to get the total Type I error. We know that this is alpha = .05. This part is 1 − alpha, which is .95, and that is our probability of not rejecting given that the null hypothesis is true.
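
Here is a minimal sketch of that one-sample test in Python. The hypothesized mean of 230 friends comes from the example above; the sample values are invented, and scipy is assumed to be available:

```python
# Sketch of the one-sample t-test decision described above (Facebook friends example).
import numpy as np
from scipy import stats

friends = np.array([180, 210, 150, 240, 175, 160, 200, 190])  # hypothetical sample
mu_null = 230
alpha = 0.05

t_stat, p_value = stats.ttest_1samp(friends, popmean=mu_null)

df = len(friends) - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-tailed critical t

print(f"sample t = {t_stat:.2f}, critical t = ±{t_crit:.2f}, p = {p_value:.4f}")
if abs(t_stat) > t_crit:
    print("Reject the null: if the null is actually true, this is a Type I error.")
else:
    print("Fail to reject: if the null is actually false, this is a Type II error.")
```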

That is the example for one-sample hypothesis testing. This is the same picture as before; I have just written it more neatly for you by typing it out, and you can think of this test statistic as just t. I have written the generic words "test statistic", but think of this as the critical t and the sample t.

Here is the important thing to realize. This gray distribution represents an SDOM; that is why this is mu sub x-bar, and there is also an x-bar here from the sample. This SDOM represents the case where the null hypothesis is true, and the total probability under it equals 1. Remember, we talked about that before when we said that the area underneath a normal distribution equals 1. The other curve represents the possibility that this is not true and that there exists some other population our sample really came from; we just do not know what that population is. That is the case where the null hypothesis is false, and that distribution also has an area equal to 1.

What we can additionally find out is this: when we create the zones of rejection, we say that anything outside of this critical t gets rejected, and we color in this area here. What we are saying is that this is the probability of rejecting given that the null hypothesis is true. This other area is where we fail to reject, and that probability represents the conditional probability of failing to reject given that H-naught, the null hypothesis, is true. That equals 1 − alpha, because the first one equals alpha.

Those are the important things to remember. These are all conditional probabilities, as we learned about previously in the probability lessons.

Let us talk about the two-sample t-test. The idea behind the two-sample t-test is almost exactly the same; there are just a couple of differences now. Instead of a raw score we have a difference of scores, and we still have a test statistic. Here our hypothesized mean difference between our non-college sample and our college sample is going to be 0, because that means they are the same. Remember, these are SDODs (sampling distributions of differences between means). This is 0, and this might be our actual sample difference, x-bar minus y-bar, the actual difference between the samples.

Same thing down here: this is our critical test statistic, and this is our sample t. We want to know whether our sample t is way far out, more extreme than our critical t. Here this curve represents the case that the null hypothesis (that there is no difference) is true, and its area equals 1. Same thing here: this is the case where the null hypothesis is false and there is actually some other distribution; we just do not know what it is, so we draw it like a ghost, in blue. It is important to know that this mu is mu sub (x-bar minus y-bar), because we are talking about the SDOD; that is why it is a difference of means.

Once we know this, what we need to do is figure out what these probabilities mean. Let me draw the cutoff again: here we have our rejection zone and our fail-to-reject zone. Once again we can find those conditional probabilities. What is the probability of rejecting given that the null hypothesis is true, inside of this space where the null hypothesis is true? What is the probability of failing to reject given that the null hypothesis is true? Those are the conditions we are working under. It is still the same: here we see alpha, and here we see 1 − alpha.
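
A minimal sketch of the corresponding two-sample test in Python; the college and non-college groups come from the example, but the friend counts themselves are invented:

```python
# Sketch of the two-sample t-test decision described above.
import numpy as np
from scipy import stats

college     = np.array([620, 480, 700, 550, 610, 590, 640])
non_college = np.array([300, 280, 350, 260, 310, 330, 290])
alpha = 0.05

# Null hypothesis: the difference between the two population means is 0.
t_stat, p_value = stats.ttest_ind(college, non_college)

print(f"sample t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null: if the means really are equal, this is a Type I error.")
else:
    print("Fail to reject: if the means really differ, this is a Type II error.")
```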

Ideally, when there is a difference between distributions, what we would really like is very little overlap between the two: the null distribution and the "real" one that we do not know anything about. It would be nice if there was very little overlap, but in real life there is usually a lot of overlap. The real world is noisy, and the real population might be very similar to the null population. If that is the case, there is some overlap between their distributions, and there is some chance that we might get a score over here that could be part of the real population or part of the null population. If this is the case, we need a way to understand these conditional probabilities anyway.
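
To make the consequence of that overlap concrete, here is a minimal sketch that computes the miss rate (beta) when the "real" distribution sits close to the null distribution. The means, the standard error, and the use of a normal approximation are all assumptions made just for this illustration:

```python
# Sketch: how overlap between the null distribution and the "real" (not-null)
# distribution produces a Type II error rate (beta).
from scipy import stats

mu_null, mu_real = 0.0, 1.5  # hypothesized mean vs. the (unknown) real mean
se = 1.0                     # standard error of the sampling distribution
alpha = 0.05

# One-tailed cutoff: reject the null when the sample statistic lands above this value.
cutoff = stats.norm.ppf(1 - alpha, loc=mu_null, scale=se)

# Beta: the chance of landing below the cutoff (failing to reject) when mu_real is the truth.
beta = stats.norm.cdf(cutoff, loc=mu_real, scale=se)
print(f"cutoff = {cutoff:.2f}, beta (miss rate) = {beta:.2f}, hit rate = {1 - beta:.2f}")
```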

Get ready, here is the deal. Instead of writing "real population", I am going to say "not-null population", because we do not know what it is; it is just not the null population. I am going to take this picture, this gray curve, and draw it up here in two ways, splitting it into two parts. One part is going to be this blue part, the fail-to-reject region, and that is that whole part. Here I am also going to draw the red part; I draw them separated from each other so that you can see. Here we have this little part, and it is red because we have rejected.

This is the case where we are actually wrong, and this is the case where we are actually right. Here we are wrong because we rejected a null hypothesis that we should not have rejected. Here we are correct, because we failed to reject, and truly we should not have rejected.

Now, that is the case if the null hypothesis population is true. What happens in the case where it is not true, where the null hypothesis is false? What happens here? Here I am going to draw a different-looking picture, because I am going to draw this curve split up. On this side of the line I am going to draw just this little section. In that part we have failed to reject. That is wrong, so I am going to color it in red, because we should have rejected but we failed to reject. On the other side, I am going to draw the other part of this curve. It is this part, and I am going to color it in blue, because we rejected it and we should have rejected it. Here we rejected the null hypothesis and we are right; we should have rejected it because we are in this new, unknown population.

Let us look at the places where we are correct. We are correct here, and this is called a correct failure. Here we are also correct, and this is called a hit. Here we are incorrect, and that is called a false alarm. Here we are also incorrect, and this is called a miss. It is a miss because we have failed to reject; we failed to hit the target when we should have hit the target.
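
One compact way to keep those four labels straight is as a little lookup table keyed by reality and decision, just a sketch that mirrors the box described earlier:

```python
# The "box" as a lookup table: reality crossed with our decision.
outcomes = {
    ("null true",  "reject"):         "false alarm (Type I, rate alpha)",
    ("null true",  "fail to reject"): "correct failure (rate 1 - alpha)",
    ("null false", "reject"):         "hit (rate 1 - beta)",
    ("null false", "fail to reject"): "miss (Type II, rate beta)",
}
print(outcomes[("null false", "fail to reject")])  # miss (Type II, rate beta)
```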

Given that, let us see how the distributions and the box go together. The false alarm is really that place. Remember, this is when the null hypothesis is true; I am going to draw it in black. The correct decision is going to be this whole section where we fail to reject, and that is okay because we are in the fail-to-reject zone; you are good to go. Here is the other part of that picture: this is an error, because we have rejected when we should not have rejected, since the null is actually true. This is our false alarm.

Now, in the case of a correct decision where you actually get a hit, this means you rejected, and it is good that you rejected, because a different population is actually true, not this null population. That is going to be the area where you reject; all rejections are on the right side of this line. You should have rejected because you are in a different population, not the null population. This is a good thing for you; you should have rejected it.

The other part of that, the other piece, is down here, this little piece. Here it is incorrect, because although you are part of a different population, not the null population, you did not reject; you failed to reject.

I want you to notice something here. All of the fail-to-reject regions are on this side of the line, because these are values that are less extreme than the hypothesized mean, and the rejection regions are all on the other side of the line. I could also have drawn it two-tailed, shading the other side as well, but I am showing you one-tailed. The rejection region is all outside of the line, on the outer boundary, more extreme than the hypothesized mean; the fail-to-reject region is less extreme than the hypothesized mean. My hypothesized mean is somewhere here, and this is less extreme than that; it is all relative to the hypothesized mean.

That is how these four pictures fit together. When you see those two distributions drawn, do not get confused; you already know this. You just have to break it apart in slightly different ways.

Let us go on to some examples. Example 1: on the basis of results from a large sample of students from a university, a professor reports that the mean height from his sample is not significantly below 60. That means he did not reject; this is a fail to reject. If he had said "significantly", that would be rejecting the null. Which type of error should this professor worry about? He failed to reject; that is the important thing to know. What is the only error you can make if you fail to reject? Well, if you fail to reject but you should have rejected, because the null hypothesis is false, what kind of error is that? That is a miss, a Type II error.

Error rates are given by alpha and beta, and this particular error rate is actually beta, so the options giving other rates are wrong. Those are both correct rates rather than error rates, and the option saying that non-significant results are always statistical errors is nonsense; that is never the case. You are damned if you do and damned if you do not: there is always a way you can make an error, either Type I or Type II.

Example 2: a researcher worries about drawing an incorrect conclusion. The researcher plans to select a sample of size 20 and to use the .01 level of significance, so here alpha is .01. In a two-tailed test of the null hypothesis the critical t should be plus or minus, because it is a two-tailed test: ±2.86.

If he obtains a t of 2.8, which type of error would he be worried about, and why? Well, you definitely know that he is not going to reject; he fails to reject, because 2.8 is less extreme than the critical t. The only error you can make when you fail to reject is failing to reject when the null hypothesis is actually false. What kind of error is that? That is a miss, or Type II.

What if he obtains a t of 2.869? Which type of error would he be worried about? That is more extreme than the critical t, so in this case he would reject the null. When is he wrong when he rejects? When he should not have rejected, because the null hypothesis is actually true. What kind of error is that? That is a false alarm, or Type I error.
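
A quick sketch to check the numbers in this example (a sample of size 20 gives 19 degrees of freedom; scipy is assumed):

```python
# Quick check of Example 2: critical t for a two-tailed test with alpha = .01 and n = 20.
from scipy import stats

n, alpha = 20, 0.01
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
print(f"critical t = ±{t_crit:.3f}")  # roughly ±2.861

for t_obs in (2.8, 2.869):
    if abs(t_obs) > t_crit:
        print(f"t = {t_obs}: reject the null -> the worry is a Type I error (false alarm)")
    else:
        print(f"t = {t_obs}: fail to reject -> the worry is a Type II error (miss)")
```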

Example 3: what is the danger of the Type I error? This is a more conceptual question.

The first option is "mistakenly concluding that there is no significant difference between the obtained mean and the hypothetical population mean." When you make a Type I error you have rejected the null, but the null hypothesis is true. So "mistakenly concluding that there is no significant difference" cannot be right, because you concluded that there is a significant difference; that is why you rejected the null.

"Mistakenly concluding that there is a significant difference between the obtained mean and the hypothetical population mean": that is true. You mistakenly rejected the null and said there is a significant difference, but you should not have done that.

"Mistakenly being alarmed about a hypothesis when you should be calm": that is nonsense.

"Mistakenly calculating the wrong test score": Type I and Type II errors are not mistakes you can simply avoid. They are not errors made because we were sloppy; they are errors made because we do not know the real nature of the world. So this is not what we are talking about when we talk about Type I or Type II errors. "Mistakenly choosing the wrong population standard deviation to calculate the standard error" is not it either. Those last two are just regular old mistakes, errors in calculation; they are not the Type I and Type II errors of hypothesis testing.

That is it for Type I and Type II errors. Thank you for using www.educator.com.