Hypothesis Testing for the Difference of Two Independent Means

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Roadmap 0:06
    • Roadmap
  • The Goal of Hypothesis Testing 0:56
    • One Sample and Two Samples
  • Sampling Distribution of the Difference between Two Means (SDoD) 3:42
    • Sampling Distribution of the Difference between Two Means (SDoD)
  • Rules of the SDoD (Similar to CLT!) 6:46
    • Shape
    • Mean for the Null Hypothesis
    • Standard Error for Independent Samples (When Variance is Homogenous)
    • Standard Error for Independent Samples (When Variance is not Homogenous)
  • Same Conditions for HT as for CI 10:08
    • Three Conditions
  • Steps of Hypothesis Testing 11:04
    • Steps of Hypothesis Testing
  • Formulas that Go with Steps of Hypothesis Testing 13:21
    • Step 1
    • Step 2
    • Step 3
    • Step 4
  • Example 1: Hypothesis Testing for the Difference of Two Independent Means 18:47
  • Example 2: Hypothesis Testing for the Difference of Two Independent Means 33:55
  • Example 3: Hypothesis Testing for the Difference of Two Independent Means 44:22

Transcription: Hypothesis Testing for the Difference of Two Independent Means

Hi and welcome to www.educator.com.0000

We are going to be talking about hypothesis testing for the difference between two independent means.0001

We are going to go over the goal of hypothesis testing in general.0005

We have only looked at it for one mean so far, but we are going to look at 0012

how it changes just very subtly when we talk about two means.0015

We are going to re-talk about the sampling distribution of the difference between two means.0019

If you have just watched the confidence intervals for two means lesson, then you do not need to watch this part again.0025

You can skip that section.0032

We are going to talk about the same conditions for doing hypothesis testing as for confidence intervals.0034

You need to meet three conditions before you can do either of these two.0043

Then we will talk about the modified steps of hypothesis testing for two means and the formulas that go with those steps.0047

Let us talk about the goal of hypothesis testing.0055

In one sample what we wanted to do was reject the null if 0060

we got a sample that was significantly different from the hypothesized mu.0065

For instance, significantly lower or significantly higher.0073

Significant does not mean important like it does in our modern use of the word.0076

It actually means: does it stand out?0083

Is it weird enough?0086

Does it stand out from the hypothesized mu?0088

In those cases we reject the null.0091

Our goal is to reject the null. 0095

We can only say whether something is sufficiently weird; we cannot say whether it is sufficiently similar.0097

An experiment is traditionally considered a success if it rejects the null.0106

If it does not reject the null, it is considered a null experiment, or what we think of as uninformative, which is not actually true.0110

That is just how it is traditionally viewed.0118

This is the case where we only have one sample and we have a hypothesized population. 0123

Here we have two samples and in order to reject the null we need to get samples that are significantly different from each other.0130

They stand out from each other so x is different from y, y is different from x.0144

That is what we are really looking for.0151

Once again, just like the one sample, we cannot say whether they are sufficiently similar, 0154

but we can say whether they are sufficiently different.0159

It is okay if x is significantly lower than y or significantly higher.0163

We do not really care.0170

We just care about significantly different.0171

If you do not care about which direction, these are called two-tailed hypotheses.0173

Let us think: if x and y are different from each other, then x - y should not be 0.0179

But if x and y are exactly the same, x = y, then x - y = 0.0189

In that case you can think of x - y as x - x.0196

If you want to think about it algebraically, if you add y to each side of x - y = 0, you get exactly x = y.0201

If x and y were the same, we should expect their difference to be 0.0211
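
Restating the two-tailed hypotheses the lecture describes, in compact notation (this is just a summary of what was said above, not an addition to it):

```latex
% Null and alternative hypotheses for two independent means (two-tailed)
H_0:\ \mu_{\bar{x} - \bar{y}} = 0 \quad\text{(equivalently } \mu_x = \mu_y\text{)}
\qquad
H_1:\ \mu_{\bar{x} - \bar{y}} \neq 0
```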

Let us just review very briefly the sampling distribution of the difference between two means.0218

This is the case where we do not know what the population is like, 0228

but because of the CLT we actually end up knowing quite a bit about the SDOM.0233

This is the population of x and this is the population of y.0242

This is the SDOM of x bar, so the whole bunch of x bars and this is the SDOM for y which is a whole bunch of y bars.0247

We know some things about these guys and we also know we can figure out the standard error from the sample.0258

What is nice about this is that we do not need to know anything about the population.0280

All we have to do is know the standard deviation of the sample which we could easily calculate 0284

in order to estimate the standard error of these two populations. 0288

Once we have that now we can start talking about the SDOD (the sampling distribution of the difference between means).0294

What we want to do is instead of finding mu sub x or mu sub y, we want to know mu sub x bar – y bar.0306

Here you have to think of pulling out one sample from here and one sample from here getting the difference and plotting it.0322

If these guys are normal, we can assume this one to be normal.0332

Not only that, but we can figure out the standard error of this guy as well, just from knowing these, 0336

because the standard error is going to be the square root of (the variance of x ÷ n sub x + the variance of y ÷ n sub y).0342

These are all things that we have.0366

We do not need anything special.0368

We do not need sigma or anything like that.0370

We just need samples in order to calculate this.0372
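
As a concrete sketch of that standard-error computation (the data below is made up for illustration; the lecture itself works from its own example files):

```python
# A minimal sketch of the standard error of the difference between two
# independent sample means: sqrt(s_x^2/n_x + s_y^2/n_y).
import numpy as np

x = np.array([68, 72, 75, 70, 66, 74, 71, 69])  # hypothetical sample from population x
y = np.array([80, 77, 82, 79, 85, 78, 81, 83])  # hypothetical sample from population y

var_x_over_n = x.var(ddof=1) / len(x)   # ddof=1 gives the sample variance (divide by n - 1)
var_y_over_n = y.var(ddof=1) / len(y)

se_diff = np.sqrt(var_x_over_n + var_y_over_n)  # standard error of the SDoD
print(se_diff)
```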

If these two population distributions give us a reason to suspect that they have homogeneous variance, 0374

that is, if their variances are the same, then instead of s sub x squared and s sub y squared, 0389

we can actually use s pooled squared. We will not be doing that in this lesson, but you can.0395

Remember the rules of the SDOD are very similar to the CLT and if the SDOM for x is normal 0405

and SDOM for y is normal then SDOD is normal too.0415

There are two ways that this could be true.0419

The first way is if populations are normal.0421

If population of x and y are normal then we could assume SDOM for x and y are normal.0428

The other possibility is if n is large enough.0435

We want to talk about the mean for the null hypothesis.0443

The null hypothesis is saying that for the population of x and the population of y, 0450

the difference between them is going to be 0 because they are similar.0457

The null hypothesis is saying both are similar, which means that the means of 0461

the sampling distributions of the means, the SDOM means, are going to be similar.0467

Which means that subtracting them will give us 0.0474

The null hypothesis says the mean of these differences of means is going to be 0.0478

That is the null hypothesis, and it is really saying that the SDOM for x and the SDOM for y are very similar.0486

Let us talk about standard error for independent samples.0497

Remember, we are still talking just about independent samples.0502

When variance is homogeneous, that is when we use the s pooled idea.0506

That means that s sub x bar - y bar is going to be equal to the regular formula where you are dividing by n sub x and n sub y, 0511

but instead of using the variance from x and the variance from y, we are going to use that pooled variance idea.0529

That is going to be s pooled.0536

Some people think: why not just put that on top once and put n sub x and n sub y together at the bottom?0547

That would be algebraically wrong because, remember, these are denominators; we would have 0554

to have common denominators in order to put these together, and we do not have common denominators yet.0559
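
For completeness, here is a small sketch of that pooled standard error (an illustration only; the lecture mentions the pooled approach but does not use it in its examples):

```python
# A minimal sketch of the pooled standard error, used only when the two
# populations can be assumed to have homogeneous (equal) variance.
import numpy as np

def pooled_se(x, y):
    nx, ny = len(x), len(y)
    # pooled variance: a weighted average of the two sample variances
    s2_pooled = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    # the pooled variance replaces each sample variance in the usual formula
    return np.sqrt(s2_pooled / nx + s2_pooled / ny)
```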

What about the case where variance is not homogeneous? This is the vast majority of the time, and when in doubt, 0565

when you do not know anything about the variance of the populations, go with this one.0576

It is just a safer option. 0582

This is going to mean that this standard error is the variance of x ÷ n sub x + the variance of y ÷ n sub y.0584

Add these together and take the square root of the whole thing.0602

Just to recap, same conditions must be met in order to do hypothesis testing 0605

for two means as the conditions for doing a confidence interval for two means.0616

It is that the two samples were randomly and independently selected from two different populations, 0622

it is reasonable to assume that both populations that the sample come from are 0632

normally distributed or the sample sizes are sufficiently large. 0636

This was to ensure the normality of the SDOM.0641

Also, in the case of sample surveys, the population size should be at least 10 times larger than the sample size for each sample.0643

That is just so that we can assume sampling with replacement, because the probability calculations change when you cannot assume replacement.0651

Let us go into the steps of hypothesis testing.0663

These are the same steps as when you had one mean, except now we are subtly changing a few things.0669

I'm going to highlight those changes as we go through this.0677

First we need to state our hypotheses, and remember, now instead of having just the hypothesis that 0679

the mean of the population equals this, what we are saying is that the mean of the 0686

population of x and the mean of the population of y are the same.0696

Mu sub x minus mu sub y will be 0.0701

You can also write it as mu sub x = mu sub y.0707

The alternative is that they are different from each other in some way.0712

Then we pick a significance level. 0718

How different do these two populations have to be for us to say they are different?0721

We set a decision stage, but instead of drawing the SDOM now we draw the SDOD.0726

Because now we are looking at the differences between these two means.0734

We identify critical limits and rejection regions. 0739

We also find the critical test statistic, the boundaries.0743

In order to do this we have to find the degrees of freedom for the difference.0747

We cannot just use the degrees of freedom for one or the other; we actually add them together.0753

And then use the samples and the SDOD to compute the mean difference.0759

We are not just computing a mean, but a mean difference test statistic, as well as the p value.0764

And then we compare the sample to the hypothesized population.0773

We either reject the null or not.0779

We reject the null if our test statistic and p value lie in those zones of rejection.0781

It is like these are the weirdo zone.0792

That is how we know that our sample is really different from this population.0794

Let us talk about the different formulas that go along with these steps.0799

Remember, the first step is going to be: what are the hypotheses, the null hypothesis as well as the alternative.0806

This is not really a formula, but it is helpful to remember what we really mean: mu sub x bar - y bar equals 0 versus mu sub x bar - y bar does not equal 0.0817

This is often what is going to be the case and you can rewrite this as mu sub x bar – mu sub y bar sometimes, 0836

but there are some mathematical ideas that you have to learn before you can write that.0846

I will leave that aside for now. 0857

Second thing is significance level.0859

Here there are no formulas but you should know that when we say alpha= .05 we are talking about that false alarm rate.0862

This is the rate of rejecting the null when the null is actually true.0873

This is a very low rate of false alarms.0877

When we say alpha = .05 it is not that we calculated it but it is just that 0881

by convention science tends to say this is the reasonable level of significance.0887

Sometimes people are more conservative and use .01 or .001.0895

Number 3, we need to set that decision stage.0900

It is helpful to draw the SDOD and it is helpful to have our hypothesized population here. 0905

Mu sub x bar - y bar = 0.0924

We assume that this point is 0.0930

One thing you probably also want to know about the SDOD is the formula for standard error. 0932

The formula for the standard error of the SDOD, which we have written a lot of times, 0941

is the square root of (the variance of x ÷ n sub x + the variance of y ÷ n sub y).0951

Another thing you probably want to know is that we need to find the critical t values.0959

We need to find the t values here and in order to find that you will need to know 0965

the degrees of freedom for the difference and it is pretty easy. 0973

It is the degrees of freedom for x + the degrees of freedom for y.0979

To find this, it is n sub x -1.0983

To find that it is n sub y -1.0988

We could write this as n sub x -1 + n sub y -1.0990

You could write it like that and then I think that is all you need to know for the decision stage.1002
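
A small sketch of that decision-stage arithmetic in Python (scipy's t.ppf plays the role of the TINV spreadsheet function used later; n sub x = n sub y = 8 is just an illustrative choice):

```python
# A minimal sketch of the decision stage: degrees of freedom for the
# difference and the two-tailed critical t values.
from scipy import stats

n_x, n_y = 8, 8
df_diff = (n_x - 1) + (n_y - 1)   # df_x + df_y = 14

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df_diff)   # upper critical value
print(df_diff, -t_crit, t_crit)                # 14, about -2.14 and +2.14
```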

Step 4: you have to compute the sample's mean difference, and you need to calculate its test statistic as well as its p value.1011

Remember we are going to be using t from here on out because obviously we are using s instead of sigma.1039

Let us talk about how to come to the sample t.1046

Let me write this as sample t.1050

The sample t is really the distance between our sample difference and the hypothesized difference.1058

We do not want it just in terms of that raw distance, we want in terms of the standard error.1069

It is going to be our actual sample difference, x bar - y bar, minus 0 1075

(our hypothesized difference), divided by the standard error s sub x bar - y bar.1085

That will give you how many standard errors away our actual mean difference is from 0.1097

Once you have this t value and you have the degrees of freedom, 1104

then you can find the p value, and then you can reject or accept the null hypothesis.1113

Reject or do not reject, that is really the technical idea there.1121
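
Putting step 4 together as a small sketch (again illustrative; scipy's t.sf gives the upper-tail probability, doubled for a two-tailed test):

```python
# A minimal sketch of step 4: the sample t statistic and its two-tailed p value
# for the difference of two independent means (variances not pooled).
import numpy as np
from scipy import stats

def two_sample_t(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    se_diff = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    df_diff = (len(x) - 1) + (len(y) - 1)        # df as defined in this lecture
    t = ((x.mean() - y.mean()) - 0) / se_diff    # hypothesized difference is 0
    p = 2 * stats.t.sf(abs(t), df_diff)          # two-tailed p value
    return t, p
```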

Let us go onto some examples.1126

The Cheesy Cheesy cookies company wanted to know whether they should have a coarse or fine texture in their cheesy cookies.1131

They assembled a series of taste testing panels that tasted either the coarse 1140

or fine textured cookies and gave it a palatability score.1143

The higher the score, the better.1153

Is there a statistical difference in the mean palatability score between the two texture levels?1154

If you download the examples below and you look under the example 1, you should see a data set that looks like this.1162

This is the palatability score and this is the texture.1174

I believe that 0 = coarse and 1= fine, just so that we can make some sort of recommendation at the end.1177

Here we go, we have these different sets of scores, so this is the score that 1200

one panel came up with and that panel tasted coarse textured cheesy cookies. 1209

This panel also tasted coarse and that is the score it gave it.1214

Let us go up to fine.1221

They tasted fine texture and they give it that score. 1223

They also tasted fine and they give it that score.1227

You could go and see what the different scores are and what texture they had.1231

First, let us think: what are our x and y?1240

What are our two independent samples?1245

The two independent samples here seem to come from the two different textures.1247

One group of scores they all tasted coarse texture cheesy cookies.1251

The other group of scores tasted fine textured cheesy cookies.1260

It might be helpful to us to sort this data by texture.1264

I am going to take this data and ask Excel to sort it.1270

It would help if I move the score column over.1281

Then what I am going to do is just hit sort.1291

Here are all our coarse cheesy cookie palatability scores, and here are my fine cheesy cookie palatability scores.1296

Let us think about how we want to approach this problem.1311

First thing we want to do is create some sort of hypothesized population.1315

Our hypothesized population is really going to say that between the coarse and 1322

fine textured cheesy cookies there is really no difference.1327

They are the same.1330

The mu sub x bar - y bar should equal 0.1332

The alternative is that they are different from each other in some way. 1337

We do not know which one taste better.1346

Let us just be neutral and say we do not know whether the coarse cheesy cookies 1352

are better than the fine or to fine cheesy cookies are better than the coarse.1358

We want to know whether these palatability scores are different or the same.1364

Let us set a significance level for how different they have to be.1370

Our significance level could be alpha= .05.1377

Finally let us set a decision stage.1386

Here I am going to draw the SDOD; can we assume normality?1390

Well, they are different and let us look here.1398

We have 8 scores and 8 scores, the n is low.1405

Technically, we might not be able to do hypothesis testing.1416

Let us say for some reason that your teacher wants you to do it anyway.1424

But one of the things that should come up when you see low n like this is that you should question 1430

whether hypothesis testing is the right way to go because it may not reflect the conditions 1436

that we need to have set before we can assume all the stuff.1446

Just for the problem solving and practice here, let us go with that.1449

But if you wanted to be stricter, you could tell your instructor that the conditions are not met for hypothesis testing.1454

Here we set our little regions of rejection, and we will just go ahead and put in our mu here.1466

It is going to be 0, and it will be helpful to find the t values out here.1478

Let us go ahead and do that. 1483

What are our critical t?1486

Critical t or the boundaries.1491

In order to find the critical t, we are going to have to find the degrees of freedom, DF of differences. 1494

For n sub x, we will call x the coarse group.1503

X will be coarse cheesy cookies and y will be fine.1512

You can use c and f if you want to.1521

This is going to be 8 and this is also 8.1524

The degrees of freedom for each of these is 7 so this is going to be 14.1528

That is a pretty low degrees of freedom.1534

That is, assuming we can assume normality here.1537

Let us find the critical t.1540

In order to find that we would use t inverse because we have the two tailed probability .05 and we have the degrees of freedom.1545

This gives us a positive version.1562

The negative version would just be the negative of that number because they are perfectly symmetrical. 1565

2.14 the critical t is + or -2.14.1573

Now that we have that, then we could go ahead and look at the actual samples themselves. 1581

Step 4 is we need to find the sample's mean difference.1589

We need to find x bar - y bar, but we also need to find this mean difference's t.1598

The t sub x bar - y bar.1606

We need to find that as well as the p value. 1610

Let us go ahead and do that.1613

We just finished step 3, and step 4 is really the mean difference, and that is just the average of these guys minus the average of those guys.1618

That is their average difference. 1656

This is saying that the coarse scores tend to be on average lower than 1662

the fine scores, because we do coarse score - fine score.1668

We get a negative number.1671

The coarse score average must have been smaller.1672

Actually before we go on, it might be helpful to find the standard error of this situation.1677

In order to find the standard error of the difference we need to find 1690

the square roots of the variance of x ÷ n sub x + the variance of y ÷ n sub y.1699

This is going to be our standard error that we need.1717

In order to find that it would be helpful to find each of these pieces by themselves.1724

I guess we could find the whole thing, the variance of x ÷ n sub x and the variance of y ÷ n sub y.1731

I will put each of these on different lines like we can do all of it together.1750

We could just add them all up here.1754

Let us find that.1757

The variance, thankfully Excel has all these functions.1763

Let us check and make sure that this variance function divides by n - 1.1771

The variance of x ÷ 8 and the variance of all my fine cheesy cookie values ÷ 8.1778

We have these two variances and when we divide by n sub x we are getting the variance of the SDOM.1799

If we add those together then get the square root, then we get the standard error of the difference.1811

The square root of these two guys added together and that is 11.16.1820

Here I will just add this information so the standard error of the difference =11.16.1830

In order to find this t, we need to have this difference between the means -0 / the standard error of the difference. 1851

We can easily do that now. 1866

Here in order to find the sample t we could put the mean difference -0.1871

If you do not want to keep it technical, you do not need that - 0; then divide by the standard error of the difference.1891

Our sample t says the difference is not at 0; it is actually way down here.1901

Is it significantly different?1914

Well, one thing we could do is just compare this number to this number right here.1917

This boundary here is -2.14.1923

-4.73 is like out here so we definitely know it is way significant.1928

It is way standing out from the expected mean but we can also find the p value. 1935

Now remember in Excel one of the things it needs a positive t value. 1944

If you have a negative t value you have to turn it into a positive one, but it is okay because it is perfectly symmetrical. 1951

The degrees of freedom that we are talking about are going to be this 1959

new combined degrees of freedom, because we are talking about the SDOD now.1963

This is the degrees of freedom for this SDOD and that is 14 and it is a two-tailed hypothesis.1969

Our p value is .0003.1976

I will not write the last step up here, but we can just talk about it.1981

The last step would be we reject or do not reject the null. 1991

Well, we reject the null here because our p value is much lower than our significance level.1997

Our t value, our sample t is more extreme than our critical t.2003

Here what we would say is that there is a statistical difference between the two texture levels.2010

One that is very unlikely to be attributable to chance, because that is what this p value tells us.2018

If it was by chance it would have .03% probability.2026

It is pretty low.2033
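
If you want to check this kind of computation in Python rather than Excel, a sketch like the one below works. The scores here are placeholders, not the actual Example 1 data, and scipy's Welch option computes its degrees of freedom by the Welch-Satterthwaite approximation rather than by adding df sub x and df sub y, so its p value can differ slightly from the hand computation:

```python
# A way to sanity-check a two-independent-means test; equal_var=False means
# the variances are not pooled, matching the approach used in this lecture.
from scipy import stats

coarse = [55, 48, 52, 60, 45, 50, 58, 47]   # hypothetical coarse-texture palatability scores
fine   = [85, 92, 78, 88, 95, 82, 90, 86]   # hypothetical fine-texture palatability scores

t_stat, p_value = stats.ttest_ind(coarse, fine, equal_var=False)
print(t_stat, p_value)   # reject the null if p_value is below alpha = .05
```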

Example 2, scientists have found certain tree resins that are deadly to termites.2035

To test the protective power of resin protecting the tree, a lab prepared 16 dishes with 25 termites in each.2042

Each dish was randomly assigned to be treated with 5 mg or 10 mg of resin.2050

At the end of 15 days, the number of surviving termites was counted.2055

Assume that termite survival tends to be normally distributed within both dosage levels.2060

Is there a statistically significant difference in the mean number of survivors for those two doses?2066

Now here I think it is worth just discussing what will be our x and y.2072

Our x might be the 5 mg population and our y might be the 10 mg population.2077

For n sub x, some people might think there are 25 termites, but actually there are 25 termites in each of the 16 Petri dishes.2087

There are 8 Petri dishes that have been randomly treated with 5 mg and 8 that have been treated with 10 mg.2099

This is 8 and 8.2109

When I say 8, we mean the treated dishes; the dishes, not the individual termites, are the cases that we are interested in.2113

The termite count is the measurement.2124

You can get 25 termites surviving or you could get 0 surviving.2128

How many termites survived?2134

That is our dependent variable.2135

Okay, let us see. 2137

Well one thing we could do is start off with our hypotheses.2142

Our null hypothesis is that these two dosage levels are roughly the same.2146

We might say something like mu sub x bar - y bar equals 0; they are the same.2153

The alternative is that they are not the same. 2161

Maybe that one is more powerful than the other.2166

We do not know which one.2169

We could easily set our significance level to be .05.2173

Let us talk about the actual set up, the decision stage.2179

In the decision stage, let us see what we have here.2184

We have set up this .05 level of rejection, and we can just go ahead; this axis is x bar - y bar, but what would that t be?2195

The nice thing about this being 0 is that the t distribution as well as the x bar – y bar start off the same.2213

They are not going to have the same numbers out here. 2226

Okay, so that is why we do have to put them on different lines.2229

They are still talking about different things.2233

Let us talk about the t values.2235

Before we do, it might be helpful to figure out the new degrees of freedom.2240

The degrees of freedom of differences will be 7 + 7 =14.2247

Here we can do hypothesis testing and just jump right in, because we are given that 2255

termite survival tends to be normally distributed within these two dosage levels.2261

If you go to example 2, you will actually see the data here.2267

Here we see dosage and here is the 5 mg, as well as the 10 mg.2284

Here are the survival counts.2293

How many termites survived?2294

Notice that there is no survival count over 25.2296

25 is the maximum you can have, but even the highest gives me 16.2299

And the survival count cannot go below 0, because we cannot have negative termites surviving.2304

Here we have the survival count.2311

Let us see what we have here.2317

Can we figure out what the critical t is.2323

Can we figure out what the critical t is?2329

I think we can.2335

Let us see.2336

You can use the book but I am going to use Excel to find the critical t.2338

I am going to write for myself step 4.2344

I know the two-tailed probability that I need .05 and I know my degrees of freedom is 14.2347

I see that the critical t is the same as before, because we use 2362

the same two-tailed probability and the same degrees of freedom of differences.2367

Here we know that it is -2.14, as well as positive 2.14.2372

What we can do is now from here go on to looking at our actual sample.2384

This is actually step 3, it is a part of our decision stage. 2394

Step 4, is now actually talking about the sample. 2406

It will help to find the sample mean difference, so that is going to be the average of the x values minus the average of the y values.2410

We want to know: is this difference going to be significantly different from 0?2431

We cannot just look at the raw scores because we need to figure out how many standard errors away we are.2436

How shall we find the standard error for the difference?2443

That is equal to the square root of the variance of x/ n sub x + variance of y/ n sub y.2448

Let us find the variance of x/ n sub x over and variance of y/ n sub y.2458

Let us find the variance of x/8 and the variance of y /8.2468

We see that the variance for y is a lot different than the variance for x.2486

That is helpful for us to just look at briefly right now, because this will probably give us an idea: 2493

the variances of the samples are so different that we probably do not have a good reason to pool these two together.2500

We do not have a good reason to assume that the populations are similar.2507

When in doubt, go with non-homogeneous variances.2511

Just assume that they are different. 2518

Once we have that, then we can find the square root of the sum of these two terms, and we get 2.5.2520

Once we have all of that, then we can find the sample's mean difference t.2535

And that would be the samples mean difference -0 divided by the standard error of the SDOD.2548

What would that be?2572

That would be this guy (minus 0), divided by the standard error, and we get 2.15.2575

We are close but it is still more extreme than 2.14.2586

It could be extreme in either the negative end or the positive end.2595

This is extreme in the positive end.2603

It is just right outside our borders.2607

Let us find the p value. 2609

In order to find that p value we use the t distribution function, because we have the t value, 2611

we have the degrees of freedom, and we want a two-tailed p value.2620

It is going to add up this little chunk and this little chunk together, and that comes to .049.2625

Skipping to the last step: our p value = .049, and that is just a hair underneath our alpha of .05.2635

We would probably reject the null.2653
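
As a quick check of that borderline result (a sketch, assuming the sample t of about 2.15 and the 14 degrees of freedom computed above):

```python
# Two-tailed p value for t = 2.15 with df = 14; it lands just under alpha = .05.
from scipy import stats

p = 2 * stats.t.sf(2.15, 14)
print(p)   # just under .05, matching the lecture's borderline rejection
```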

Example 3, 2 months before smoking ban in bars, a random sample of bar employees were assessed on respiratory health.2657

Two months after the ban, another random sample of employees were assessed.2672

Researchers saw a statistically significant increase in the mean scores of health.2678

p = .049, two-tailed; we just had an example like that.2684

Which of the following is the best interpretation for this result?2689

The probability is only .049 that the mean score for all bar employees increased from before to after the ban.2693

Is that what this means?2706

For me it helps to draw that SDOD, and the null hypothesis would be 2708

that before and after are the same.2715

What they actually found is that there is some extreme value.2720

There is the increase in mean scores.2727

There is a positive difference from after – before.2735

There is the increase.2742

It is somewhere up here, that increase tells us that.2745

p = .049.2749

We can actually draw this carefully, it is just right above that cut off.2753

There is only a .049 probability that the mean score for all bar employees increased.2760

That is not what this means.2775

It is not saying that there is only a small chance that it increased.2778

It is actually saying there is a pretty good chance that it is not the same.2783

There is a pretty small chance that it is the same.2787

This one we can just rule out.2792

Another possibility is that the mean score for all bar employees increased by more than 4.9%.2796

Does this p value actually talk about the raw score on respiratory health?2805

It does not talk about that score at all, it is the probability of finding such a difference.2814

It does not have anything to do with actual scores. 2821

What about this one?2825

An observed difference in the sample means as large as or larger than this sample's is unlikely to occur 2828

if the mean score for all bar employees before and after the ban were the same.2835

This actually has something we can use.2839

This is about considering that the mean scores for before and after are the same.2842

That is important because that is what the SDOD actually represents.2851

That is what this p value is actually talking about: the idea that when we get the sample, 2854

we consider that they were just the same.2865

This is saying an observed difference in sample means as large as or larger than this one is very unlikely to occur.2867

It occurs with probability .049 if the true mean score for all bar employees before and after is actually the same.2876

This is a pretty good contender because the SDOD is talking about how .049 means very unlikely.2889

This I would leave as a definite contender. 2900

Maybe there is a better answer.2902

There is a 4.9% chance that the mean score of all bar employees after the ban is actually lower than before the ban.2905

That is describing a small chance for the opposite, directional hypothesis, and that is probably not the case.2915

It depends on what the null hypothesis was.2925

The null hypothesis in a two-mean hypothesis test is usually that they are the same, not that one is less than the other.2934

We do not usually do that.2953

Maybe there is a way and that could be true.2954

It is probably not true if we did hypothesis testing at all.2958

Only 4.9% of the bar employees had their score drop but the other 95% had their scores increase.2961

This would be a correct interpretation if we are not talking about the SDOD.2971

If this was not a reflection of the population then maybe that would be true.2977

This is not talking about population, it is talking about the SDOD.2982

This is a wrong interpretation.2987

The correct answer is c.2990

That is our last example for hypothesis testing with two independent means.2992

Thank you for joining us on www.educator.com.2998