Dr. Ji Son

Confidence Intervals for the Difference of Two Independent Means

Table of Contents

Section 1: Introduction
Descriptive Statistics vs. Inferential Statistics

25m 31s

Intro
0:00
Roadmap
0:10
Roadmap
0:11
Statistics
0:35
Statistics
0:36
Let's Think About High School Science
1:12
Measurement and Find Patterns (Mathematical Formula)
1:13
Statistics = Math of Distributions
4:58
Distributions
4:59
Problematic… but also GREAT
5:58
Statistics
7:33
How is It Different from Other Specializations in Mathematics?
7:34
Statistics is Fundamental in Natural and Social Sciences
7:53
Two Skills of Statistics
8:20
Description (Exploration)
8:21
Inference
9:13
Descriptive Statistics vs. Inferential Statistics: Apply to Distributions
9:58
Descriptive Statistics
9:59
Inferential Statistics
11:05
Populations vs. Samples
12:19
Populations vs. Samples: Is it the Truth?
12:20
Populations vs. Samples: Pros & Cons
13:36
Populations vs. Samples: Descriptive Values
16:12
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:10
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:11
Example 1: Descriptive Statistics vs. Inferential Statistics
19:09
Example 2: Descriptive Statistics vs. Inferential Statistics
20:47
Example 3: Sample, Parameter, Population, and Statistic
21:40
Example 4: Sample, Parameter, Population, and Statistic
23:28
Section 2: About Samples: Cases, Variables, Measurements
About Samples: Cases, Variables, Measurements

32m 14s

Intro
0:00
Data
0:09
Data, Cases, Variables, and Values
0:10
Rows, Columns, and Cells
2:03
Example: Aircraft
3:52
How Do We Get Data?
5:38
Research: Question and Hypothesis
5:39
Research Design
7:11
Measurement
7:29
Research Analysis
8:33
Research Conclusion
9:30
Types of Variables
10:03
Discrete Variables
10:04
Continuous Variables
12:07
Types of Measurements
14:17
Types of Measurements
14:18
Types of Measurements (Scales)
17:22
Nominal
17:23
Ordinal
19:11
Interval
21:33
Ratio
24:24
Example 1: Cases, Variables, Measurements
25:20
Example 2: Which Scale of Measurement is Used?
26:55
Example 3: What Kind of a Scale of Measurement is This?
27:26
Example 4: Discrete vs. Continuous Variables
30:31
Section 3: Visualizing Distributions
Introduction to Excel

8m 9s

Intro
0:00
Before Visualizing Distribution
0:10
Excel
0:11
Excel: Organization
0:45
Workbook
0:46
Column x Rows
1:50
Tools: Menu Bar, Standard Toolbar, and Formula Bar
3:00
Excel + Data
6:07
Excel and Data
6:08
Frequency Distributions in Excel

39m 10s

Intro
0:00
Roadmap
0:08
Data in Excel and Frequency Distributions
0:09
Raw Data to Frequency Tables
0:42
Raw Data to Frequency Tables
0:43
Frequency Tables: Using Formulas and Pivot Tables
1:28
Example 1: Number of Births
7:17
Example 2: Age Distribution
20:41
Example 3: Height Distribution
27:45
Example 4: Height Distribution of Males
32:19
Frequency Distributions and Features

25m 29s

Intro
0:00
Roadmap
0:10
Data in Excel, Frequency Distributions, and Features of Frequency Distributions
0:11
Example #1
1:35
Uniform
1:36
Example #2
2:58
Unimodal, Skewed Right, and Asymmetric
2:59
Example #3
6:29
Bimodal
6:30
Example #4a
8:29
Symmetric, Unimodal, and Normal
8:30
Point of Inflection and Standard Deviation
11:13
Example #4b
12:43
Normal Distribution
12:44
Summary
13:56
Uniform, Skewed, Bimodal, and Normal
13:57
Sketch Problem 1: Driver's License
17:34
Sketch Problem 2: Life Expectancy
20:01
Sketch Problem 3: Telephone Numbers
22:01
Sketch Problem 4: Length of Time Used to Complete a Final Exam
23:43
Dotplots and Histograms in Excel

42m 42s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Previously
1:02
Data, Frequency Table, and Visualization
1:03
Dotplots
1:22
Dotplots Excel Example
1:23
Dotplots: Pros and Cons
7:22
Pros and Cons of Dotplots
7:23
Dotplots Excel Example Cont.
9:07
Histograms
12:47
Histograms Overview
12:48
Example of Histograms
15:29
Histograms: Pros and Cons
31:39
Pros
31:40
Cons
32:31
Frequency vs. Relative Frequency
32:53
Frequency
32:54
Relative Frequency
33:36
Example 1: Dotplots vs. Histograms
34:36
Example 2: Age of Pennies Dotplot
36:21
Example 3: Histogram of Mammal Speeds
38:27
Example 4: Histogram of Life Expectancy
40:30
Stemplots

12m 23s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
What Sets Stemplots Apart?
0:46
Data Sets, Dotplots, Histograms, and Stemplots
0:47
Example 1: What Do Stemplots Look Like?
1:58
Example 2: Back-to-Back Stemplots
5:00
Example 3: Quiz Grade Stemplot
7:46
Example 4: Quiz Grade & Afterschool Tutoring Stemplot
9:56
Bar Graphs

22m 49s

Intro
0:00
Roadmap
0:05
Roadmap
0:08
Review of Frequency Distributions
0:44
Y-axis and X-axis
0:45
Types of Frequency Visualizations Covered so Far
2:16
Introduction to Bar Graphs
4:07
Example 1: Bar Graph
5:32
Example 1: Bar Graph
5:33
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:07
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:08
Example 2: Create a Frequency Visualization for Gender
14:02
Example 3: Cases, Variables, and Frequency Visualization
16:34
Example 4: What Kind of Graphs are Shown Below?
19:29
Section 4: Summarizing Distributions
Central Tendency: Mean, Median, Mode

38m 50s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
Central Tendency 1
0:56
Way to Summarize a Distribution of Scores
0:57
Mode
1:32
Median
2:02
Mean
2:36
Central Tendency 2
3:47
Mode
3:48
Median
4:20
Mean
5:25
Summation Symbol
6:11
Summation Symbol
6:12
Population vs. Sample
10:46
Population vs. Sample
10:47
Excel Examples
15:08
Finding Mode, Median, and Mean in Excel
15:09
Median vs. Mean
21:45
Effect of Outliers
21:46
Relationship Between Parameter and Statistic
22:44
Type of Measurements
24:00
Which Distributions to Use With
24:55
Example 1: Mean
25:30
Example 2: Using Summation Symbol
29:50
Example 3: Average Calorie Count
32:50
Example 4: Creating an Example Set
35:46
Variability

42m 40s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Variability (or Spread)
0:45
Variability (or Spread)
0:46
Things to Think About
5:45
Things to Think About
5:46
Range, Quartiles and Interquartile Range
6:37
Range
6:38
Interquartile Range
8:42
Interquartile Range Example
10:58
Interquartile Range Example
10:59
Variance and Standard Deviation
12:27
Deviations
12:28
Sum of Squares
14:35
Variance
16:55
Standard Deviation
17:44
Sum of Squares (SS)
18:34
Sum of Squares (SS)
18:35
Population vs. Sample SD
22:00
Population vs. Sample SD
22:01
Population vs. Sample
23:20
Mean
23:21
SD
23:51
Example 1: Find the Mean and Standard Deviation of the Variable Friends in the Excel File
27:21
Example 2: Find the Mean and Standard Deviation of the Tagged Photos in the Excel File
35:25
Example 3: Sum of Squares
38:58
Example 4: Standard Deviation
41:48
Five Number Summary & Boxplots

57m 15s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Summarizing Distributions
0:37
Shape, Center, and Spread
0:38
5 Number Summary
1:14
Boxplot: Visualizing 5 Number Summary
3:37
Boxplot: Visualizing 5 Number Summary
3:38
Boxplots on Excel
9:01
Using 'Stocks' and Using Stacked Columns
9:02
Boxplots on Excel Example
10:14
When are Boxplots Useful?
32:14
Pros
32:15
Cons
32:59
How to Determine Outlier Status
33:24
Rule of Thumb: Upper Limit
33:25
Rule of Thumb: Lower Limit
34:16
Signal Outliers in an Excel Data File Using Conditional Formatting
34:52
Modified Boxplot
48:38
Modified Boxplot
48:39
Example 1: Percentage Values & Lower and Upper Whisker
49:10
Example 2: Boxplot
50:10
Example 3: Estimating IQR From Boxplot
53:46
Example 4: Boxplot and Missing Whisker
54:35
Shape: Calculating Skewness & Kurtosis

41m 51s

Intro
0:00
Roadmap
0:16
Roadmap
0:17
Skewness Concept
1:09
Skewness Concept
1:10
Calculating Skewness
3:26
Calculating Skewness
3:27
Interpreting Skewness
7:36
Interpreting Skewness
7:37
Excel Example
8:49
Kurtosis Concept
20:29
Kurtosis Concept
20:30
Calculating Kurtosis
24:17
Calculating Kurtosis
24:18
Interpreting Kurtosis
29:01
Leptokurtic
29:35
Mesokurtic
30:10
Platykurtic
31:06
Excel Example
32:04
Example 1: Shape of Distribution
38:28
Example 2: Shape of Distribution
39:29
Example 3: Shape of Distribution
40:14
Example 4: Kurtosis
41:10
Normal Distribution

34m 33s

Intro
0:00
Roadmap
0:13
Roadmap
0:14
What is a Normal Distribution
0:44
The Normal Distribution As a Theoretical Model
0:45
Possible Range of Probabilities
3:05
Possible Range of Probabilities
3:06
What is a Normal Distribution
5:07
Can Be Described By
5:08
Properties
5:49
'Same' Shape: Illusion of Different Shape!
7:35
'Same' Shape: Illusion of Different Shape!
7:36
Types of Problems
13:45
Example: Distribution of SAT Scores
13:46
Shape Analogy
19:48
Shape Analogy
19:49
Example 1: The Standard Normal Distribution and Z-Scores
22:34
Example 2: The Standard Normal Distribution and Z-Scores
25:54
Example 3: Sketching a Normal Distribution
28:55
Example 4: Sketching a Normal Distribution
32:32
Standard Normal Distributions & Z-Scores

41m 44s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
A Family of Distributions
0:28
Infinite Set of Distributions
0:29
Transforming Normal Distributions to 'Standard' Normal Distribution
1:04
Normal Distribution vs. Standard Normal Distribution
2:58
Normal Distribution vs. Standard Normal Distribution
2:59
Z-Score, Raw Score, Mean, & SD
4:08
Z-Score, Raw Score, Mean, & SD
4:09
Weird Z-Scores
9:40
Weird Z-Scores
9:41
Excel
16:45
For Normal Distributions
16:46
For Standard Normal Distributions
19:11
Excel Example
20:24
Types of Problems
25:18
Percentage Problem: P(x)
25:19
Raw Score and Z-Score Problems
26:28
Standard Deviation Problems
27:01
Shape Analogy
27:44
Shape Analogy
27:45
Example 1: Deaths Due to Heart Disease vs. Deaths Due to Cancer
28:24
Example 2: Heights of Male College Students
33:15
Example 3: Mean and Standard Deviation
37:14
Example 4: Finding Percentage of Values in a Standard Normal Distribution
37:49
Normal Distribution: PDF vs. CDF

55m 44s

Intro
0:00
Roadmap
0:15
Roadmap
0:16
Frequency vs. Cumulative Frequency
0:56
Frequency vs. Cumulative Frequency
0:57
Frequency vs. Cumulative Frequency
4:32
Frequency vs. Cumulative Frequency Cont.
4:33
Calculus in Brief
6:21
Derivative-Integral Continuum
6:22
PDF
10:08
PDF for Standard Normal Distribution
10:09
PDF for Normal Distribution
14:32
Integral of PDF = CDF
21:27
Integral of PDF = CDF
21:28
Example 1: Cumulative Frequency Graph
23:31
Example 2: Mean, Standard Deviation, and Probability
24:43
Example 3: Mean and Standard Deviation
35:50
Example 4: Age of Cars
49:32
Section 5: Linear Regression
Scatterplots

47m 19s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
Previous Visualizations
0:30
Frequency Distributions
0:31
Compare & Contrast
2:26
Frequency Distributions Vs. Scatterplots
2:27
Summary Values
4:53
Shape
4:54
Center & Trend
6:41
Spread & Strength
8:22
Univariate & Bivariate
10:25
Example Scatterplot
10:48
Shape, Trend, and Strength
10:49
Positive and Negative Association
14:05
Positive and Negative Association
14:06
Linearity, Strength, and Consistency
18:30
Linearity
18:31
Strength
19:14
Consistency
20:40
Summarizing a Scatterplot
22:58
Summarizing a Scatterplot
22:59
Example 1: Gapminder.org, Income x Life Expectancy
26:32
Example 2: Gapminder.org, Income x Infant Mortality
36:12
Example 3: Trend and Strength of Variables
40:14
Example 4: Trend, Strength and Shape for Scatterplots
43:27
Regression

32m 2s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Linear Equations
0:34
Linear Equations: y = mx + b
0:35
Rough Line
5:16
Rough Line
5:17
Regression - A 'Center' Line
7:41
Reasons for Summarizing with a Regression Line
7:42
Predictor and Response Variable
10:04
Goal of Regression
12:29
Goal of Regression
12:30
Prediction
14:50
Example: Servings of Milk Per Year Shown By Age
14:51
Interpolation
17:06
Extrapolation
17:58
Error in Prediction
20:34
Prediction Error
20:35
Residual
21:40
Example 1: Residual
23:34
Example 2: Large and Negative Residual
26:30
Example 3: Positive Residual
28:13
Example 4: Interpret Regression Line & Extrapolate
29:40
Least Squares Regression

56m 36s

Intro
0:00
Roadmap
0:13
Roadmap
0:14
Best Fit
0:47
Best Fit
0:48
Sum of Squared Errors (SSE)
1:50
Sum of Squared Errors (SSE)
1:51
Why Squared?
3:38
Why Squared?
3:39
Quantitative Properties of Regression Line
4:51
Quantitative Properties of Regression Line
4:52
So How do we Find Such a Line?
6:49
SSEs of Different Line Equations & Lowest SSE
6:50
Carl Gauss' Method
8:01
How Do We Find Slope (b1)
11:00
How Do We Find Slope (b1)
11:01
How Do We Find Intercept
15:11
How Do We Find Intercept
15:12
Example 1: Which of These Equations Fit the Above Data Best?
17:18
Example 2: Find the Regression Line for These Data Points and Interpret It
26:31
Example 3: Summarize the Scatterplot and Find the Regression Line
34:31
Example 4: Examine the Mean of Residuals
43:52
Correlation

43m 58s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Summarizing a Scatterplot Quantitatively
0:47
Shape
0:48
Trend
1:11
Strength: Correlation (r)
1:45
Correlation Coefficient (r)
2:30
Correlation Coefficient (r)
2:31
Trees vs. Forest
11:59
Trees vs. Forest
12:00
Calculating r
15:07
Average Product of z-scores for x and y
15:08
Relationship between Correlation and Slope
21:10
Relationship between Correlation and Slope
21:11
Example 1: Find the Correlation between Grams of Fat and Cost
24:11
Example 2: Relationship between r and b1
30:24
Example 3: Find the Regression Line
33:35
Example 4: Find the Correlation Coefficient for this Set of Data
37:37
Correlation: r vs. r-squared

52m 52s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
R-squared
0:44
What is the Meaning of It? Why Squared?
0:45
Parsing Sum of Squares (Parsing Variability)
2:25
SST = SSR + SSE
2:26
What is SST and SSE?
7:46
What is SST and SSE?
7:47
r-squared
18:33
Coefficient of Determination
18:34
If the Correlation is Strong…
20:25
If the Correlation is Strong…
20:26
If the Correlation is Weak…
22:36
If the Correlation is Weak…
22:37
Example 1: Find r-squared for this Set of Data
23:56
Example 2: What Does it Mean that the Simple Linear Regression is a 'Model' of Variance?
33:54
Example 3: Why Does r-squared Only Range from 0 to 1
37:29
Example 4: Find the r-squared for This Set of Data
39:55
Transformations of Data

27m 8s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Why Transform?
0:26
Why Transform?
0:27
Shape-preserving vs. Shape-changing Transformations
5:14
Shape-preserving = Linear Transformations
5:15
Shape-changing Transformations = Non-linear Transformations
6:20
Common Shape-Preserving Transformations
7:08
Common Shape-Preserving Transformations
7:09
Common Shape-Changing Transformations
8:59
Powers
9:00
Logarithms
9:39
Change Just One Variable? Both?
10:38
Log-log Transformations
10:39
Log Transformations
14:38
Example 1: Create, Graph, and Transform the Data Set
15:19
Example 2: Create, Graph, and Transform the Data Set
20:08
Example 3: What Kind of Model would You Choose for this Data?
22:44
Example 4: Transformation of Data
25:46
Section 6: Collecting Data in an Experiment
Sampling & Bias

54m 44s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Descriptive vs. Inferential Statistics
1:04
Descriptive Statistics: Data Exploration
1:05
Example
2:03
To tackle Generalization…
4:31
Generalization
4:32
Sampling
6:06
'Good' Sample
6:40
Defining Samples and Populations
8:55
Population
8:56
Sample
11:16
Why Use Sampling?
13:09
Why Use Sampling?
13:10
Goal of Sampling: Avoiding Bias
15:04
What is Bias?
15:05
Where does Bias Come from: Sampling Bias
17:53
Where does Bias Come from: Response Bias
18:27
Sampling Bias: Bias from 'Bad' Sampling Methods
19:34
Size Bias
19:35
Voluntary Response Bias
21:13
Convenience Sample
22:22
Judgment Sample
23:58
Inadequate Sample Frame
25:40
Response Bias: Bias from 'Bad' Data Collection Methods
28:00
Nonresponse Bias
29:31
Questionnaire Bias
31:10
Incorrect Response or Measurement Bias
37:32
Example 1: What Kind of Biases?
40:29
Example 2: What Biases Might Arise?
44:46
Example 3: What Kind of Biases?
48:34
Example 4: What Kind of Biases?
51:43
Sampling Methods

14m 25s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Biased vs. Unbiased Sampling Methods
0:32
Biased Sampling
0:33
Unbiased Sampling
1:13
Probability Sampling Methods
2:31
Simple Random
2:54
Stratified Random Sampling
4:06
Cluster Sampling
5:24
Two-staged Sampling
6:22
Systematic Sampling
7:25
Example 1: Which Type(s) of Sampling was this?
8:33
Example 2: Describe How to Take a Two-Stage Sample from this Book
10:16
Example 3: Sampling Methods
11:58
Example 4: Cluster Sample Plan
12:48
Research Design

53m 54s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Descriptive vs. Inferential Statistics
0:51
Descriptive Statistics: Data Exploration
0:52
Inferential Statistics
1:02
Variables and Relationships
1:44
Variables
1:45
Relationships
2:49
Not Every Type of Study is an Experiment…
4:16
Category I - Descriptive Study
4:54
Category II - Correlational Study
5:50
Category III - Experimental, Quasi-experimental, Non-experimental
6:33
Category III
7:42
Experimental, Quasi-experimental, and Non-experimental
7:43
Why CAN'T the Other Strategies Determine Causation?
10:18
Third-variable Problem
10:19
Directionality Problem
15:49
What Makes Experiments Special?
17:54
Manipulation
17:55
Control (and Comparison)
21:58
Methods of Control
26:38
Holding Constant
26:39
Matching
29:11
Random Assignment
31:48
Experiment Terminology
34:09
'true' Experiment vs. Study
34:10
Independent Variable (IV)
35:16
Dependent Variable (DV)
35:45
Factors
36:07
Treatment Conditions
36:23
Levels
37:43
Confounds or Extraneous Variables
38:04
Blind
38:38
Blind Experiments
38:39
Double-blind Experiments
39:29
How Categories Relate to Statistics
41:35
Category I - Descriptive Study
41:36
Category II - Correlational Study
42:05
Category III - Experimental, Quasi-experimental, Non-experimental
42:43
Example 1: Research Design
43:50
Example 2: Research Design
47:37
Example 3: Research Design
50:12
Example 4: Research Design
52:00
Between and Within Treatment Variability

41m 31s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Experimental Designs
0:51
Experimental Designs: Manipulation & Control
0:52
Two Types of Variability
2:09
Between Treatment Variability
2:10
Within Treatment Variability
3:31
Updated Goal of Experimental Design
5:47
Updated Goal of Experimental Design
5:48
Example: Drugs and Driving
6:56
Example: Drugs and Driving
6:57
Different Types of Random Assignment
11:27
All Experiments
11:28
Completely Random Design
12:02
Randomized Block Design
13:19
Randomized Block Design
15:48
Matched Pairs Design
15:49
Repeated Measures Design
19:47
Between-subject Variable vs. Within-subject Variable
22:43
Completely Randomized Design
22:44
Repeated Measures Design
25:03
Example 1: Design a Completely Random, Matched Pair, and Repeated Measures Experiment
26:16
Example 2: Block Design
31:41
Example 3: Completely Randomized Designs
35:11
Example 4: Completely Random, Matched Pairs, or Repeated Measures Experiments?
39:01
Section 7: Review of Probability Axioms
Sample Spaces

37m 52s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
Why is Probability Involved in Statistics
0:48
Probability
0:49
Can People Tell the Difference between Cheap and Gourmet Coffee?
2:08
Taste Test with Coffee Drinkers
3:37
If No One can Actually Taste the Difference
3:38
If Everyone can Actually Taste the Difference
5:36
Creating a Probability Model
7:09
Creating a Probability Model
7:10
D'Alembert vs. Necker
9:41
D'Alembert vs. Necker
9:42
Problem with D'Alembert's Model
13:29
Problem with D'Alembert's Model
13:30
Covering Entire Sample Space
15:08
Fundamental Principle of Counting
15:09
Where Do Probabilities Come From?
22:54
Observed Data, Symmetry, and Subjective Estimates
22:55
Checking whether Model Matches Real World
24:27
Law of Large Numbers
24:28
Example 1: Law of Large Numbers
27:46
Example 2: Possible Outcomes
30:43
Example 3: Brands of Coffee and Taste
33:25
Example 4: How Many Different Treatments are there?
35:33
Addition Rule for Disjoint Events

20m 29s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Disjoint Events
0:41
Disjoint Events
0:42
Meaning of 'or'
2:39
In Regular Life
2:40
In Math/Statistics/Computer Science
3:10
Addition Rule for Disjoint Events
3:55
If A and B are Disjoint: P (A and B)
3:56
If A and B are Disjoint: P (A or B)
5:15
General Addition Rule
5:41
General Addition Rule
5:42
Generalized Addition Rule
8:31
If A and B are not Disjoint: P (A or B)
8:32
Example 1: Which of These are Mutually Exclusive?
10:50
Example 2: What is the Probability that You will Have a Combination of One Heads and Two Tails?
12:57
Example 3: Engagement Party
15:17
Example 4: Home Owner's Insurance
18:30
Conditional Probability

57m 19s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
'or' vs. 'and' vs. Conditional Probability
1:07
'or' vs. 'and' vs. Conditional Probability
1:08
'and' vs. Conditional Probability
5:57
P (M or L)
5:58
P (M and L)
8:41
P (M|L)
11:04
P (L|M)
12:24
Tree Diagram
15:02
Tree Diagram
15:03
Defining Conditional Probability
22:42
Defining Conditional Probability
22:43
Common Contexts for Conditional Probability
30:56
Medical Testing: Positive Predictive Value
30:57
Medical Testing: Sensitivity
33:03
Statistical Tests
34:27
Example 1: Drug and Disease
36:41
Example 2: Marbles and Conditional Probability
40:04
Example 3: Cards and Conditional Probability
45:59
Example 4: Votes and Conditional Probability
50:21
Independent Events

24m 27s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Independent Events & Conditional Probability
0:26
Non-independent Events
0:27
Independent Events
2:00
Non-independent and Independent Events
3:08
Non-independent and Independent Events
3:09
Defining Independent Events
5:52
Defining Independent Events
5:53
Multiplication Rule
7:29
Previously…
7:30
But with Independent Events
8:53
Example 1: Which of These Pairs of Events are Independent?
11:12
Example 2: Health Insurance and Probability
15:12
Example 3: Independent Events
17:42
Example 4: Independent Events
20:03
Section 8: Probability Distributions
Introduction to Probability Distributions

56m 45s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Sampling vs. Probability
0:57
Sampling
0:58
Missing
1:30
What is Missing?
3:06
Insight: Probability Distributions
5:26
Insight: Probability Distributions
5:27
What is a Probability Distribution?
7:29
From Sample Spaces to Probability Distributions
8:44
Sample Space
8:45
Probability Distribution of the Sum of Two Dice
11:16
The Random Variable
17:43
The Random Variable
17:44
Expected Value
21:52
Expected Value
21:53
Example 1: Probability Distributions
28:45
Example 2: Probability Distributions
35:30
Example 3: Probability Distributions
43:37
Example 4: Probability Distributions
47:20
Expected Value & Variance of Probability Distributions

53m 41s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Discrete vs. Continuous Random Variables
1:04
Discrete vs. Continuous Random Variables
1:05
Mean and Variance Review
4:44
Mean: Sample, Population, and Probability Distribution
4:45
Variance: Sample, Population, and Probability Distribution
9:12
Example Situation
14:10
Example Situation
14:11
Some Special Cases…
16:13
Some Special Cases…
16:14
Linear Transformations
19:22
Linear Transformations
19:23
What Happens to Mean and Variance of the Probability Distribution?
20:12
n Independent Values of X
25:38
n Independent Values of X
25:39
Compare These Two Situations
30:56
Compare These Two Situations
30:57
Two Random Variables, X and Y
32:02
Two Random Variables, X and Y
32:03
Example 1: Expected Value & Variance of Probability Distributions
35:35
Example 2: Expected Values & Standard Deviation
44:17
Example 3: Expected Winnings and Standard Deviation
48:18
Binomial Distribution

55m 15s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Discrete Probability Distributions
1:42
Discrete Probability Distributions
1:43
Binomial Distribution
2:36
Binomial Distribution
2:37
Multiplicative Rule Review
6:54
Multiplicative Rule Review
6:55
How Many Outcomes with k 'Successes'
10:23
Adults and Bachelor's Degree: Manual List of Outcomes
10:24
P (X=k)
19:37
Putting Together # of Outcomes with the Multiplicative Rule
19:38
Expected Value and Standard Deviation in a Binomial Distribution
25:22
Expected Value and Standard Deviation in a Binomial Distribution
25:23
Example 1: Coin Toss
33:42
Example 2: College Graduates
38:03
Example 3: Types of Blood and Probability
45:39
Example 4: Expected Number and Standard Deviation
51:11
Section 9: Sampling Distributions of Statistics
Introduction to Sampling Distributions

48m 17s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Probability Distributions vs. Sampling Distributions
0:55
Probability Distributions vs. Sampling Distributions
0:56
Same Logic
3:55
Logic of Probability Distribution
3:56
Example: Rolling Two Dice
6:56
Simulating Samples
9:53
To Come Up with Probability Distributions
9:54
In Sampling Distributions
11:12
Connecting Sampling and Research Methods with Sampling Distributions
12:11
Connecting Sampling and Research Methods with Sampling Distributions
12:12
Simulating a Sampling Distribution
14:14
Experimental Design: Regular Sleep vs. Less Sleep
14:15
Logic of Sampling Distributions
23:08
Logic of Sampling Distributions
23:09
General Method of Simulating Sampling Distributions
25:38
General Method of Simulating Sampling Distributions
25:39
Questions that Remain
28:45
Questions that Remain
28:46
Example 1: Mean and Standard Error of Sampling Distribution
30:57
Example 2: What is the Best Way to Describe Sampling Distributions?
37:12
Example 3: Matching Sampling Distributions
38:21
Example 4: Mean and Standard Error of Sampling Distribution
41:51
Sampling Distribution of the Mean

1h 8m 48s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Special Case of General Method for Simulating a Sampling Distribution
1:53
Special Case of General Method for Simulating a Sampling Distribution
1:54
Computer Simulation
3:43
Using Simulations to See Principles behind Shape of SDoM
15:50
Using Simulations to See Principles behind Shape of SDoM
15:51
Conditions
17:38
Using Simulations to See Principles behind Center (Mean) of SDoM
20:15
Using Simulations to See Principles behind Center (Mean) of SDoM
20:16
Conditions: Does n Matter?
21:31
Conditions: Does Number of Simulations Matter?
24:37
Using Simulations to See Principles behind Standard Deviation of SDoM
27:13
Using Simulations to See Principles behind Standard Deviation of SDoM
27:14
Conditions: Does n Matter?
34:45
Conditions: Does Number of Simulations Matter?
36:24
Central Limit Theorem
37:13
SHAPE
38:08
CENTER
39:34
SPREAD
39:52
Comparing Population, Sample, and SDoM
43:10
Comparing Population, Sample, and SDoM
43:11
Answering the 'Questions that Remain'
48:24
What Happens When We Don't Know What the Population Looks Like?
48:25
Can We Have Sampling Distributions for Summary Statistics Other than the Mean?
49:42
How Do We Know whether a Sample is Sufficiently Unlikely?
53:36
Do We Always Have to Simulate a Large Number of Samples in Order to get a Sampling Distribution?
54:40
Example 1: Mean Batting Average
55:25
Example 2: Mean Sampling Distribution and Standard Error
59:07
Example 3: Sampling Distribution of the Mean
1:01:04
Sampling Distribution of Sample Proportions

54m 37s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Intro to Sampling Distribution of Sample Proportions (SDoSP)
0:51
Categorical Data (Examples)
0:52
Wish to Estimate Proportion of Population from Sample…
2:00
Notation
3:34
Population Proportion and Sample Proportion Notations
3:35
What's the Difference?
9:19
SDoM vs. SDoSP: Type of Data
9:20
SDoM vs. SDoSP: Shape
11:24
SDoM vs. SDoSP: Center
12:30
SDoM vs. SDoSP: Spread
15:34
Binomial Distribution vs. Sampling Distribution of Sample Proportions
19:14
Binomial Distribution vs. SDoSP: Type of Data
19:17
Binomial Distribution vs. SDoSP: Shape
21:07
Binomial Distribution vs. SDoSP: Center
21:43
Binomial Distribution vs. SDoSP: Spread
24:08
Example 1: Sampling Distribution of Sample Proportions
26:07
Example 2: Sampling Distribution of Sample Proportions
37:58
Example 3: Sampling Distribution of Sample Proportions
44:42
Example 4: Sampling Distribution of Sample Proportions
45:57
Section 10: Inferential Statistics
Introduction to Confidence Intervals

42m 53s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Inferential Statistics
0:50
Inferential Statistics
0:51
Two Problems with This Picture…
3:20
Two Problems with This Picture…
3:21
Solution: Confidence Intervals (CI)
4:59
Solution: Hypothesis Testing (HT)
5:49
Which Parameters are Known?
6:45
Which Parameters are Known?
6:46
Confidence Interval - Goal
7:56
When We Don't Know μ but Know σ
7:57
When We Don't Know μ nor σ
18:27
When We Don't Know μ nor σ
18:28
Example 1: Confidence Intervals
26:18
Example 2: Confidence Intervals
29:46
Example 3: Confidence Intervals
32:18
Example 4: Confidence Intervals
38:31
t Distributions

1h 2m 6s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
When to Use z vs. t?
1:07
When to Use z vs. t?
1:08
What is z and t?
3:02
z-score and t-score: Commonality
3:03
z-score and t-score: Formulas
3:34
z-score and t-score: Difference
5:22
Why not z? (Why t?)
7:24
Why not z? (Why t?)
7:25
But Don't Worry!
15:13
Gossett and t-distributions
15:14
Rules of t Distributions
17:05
t-distributions are More Normal as n Gets Bigger
17:06
t-distributions are a Family of Distributions
18:55
Degrees of Freedom (df)
20:02
Degrees of Freedom (df)
20:03
t Family of Distributions
24:07
t Family of Distributions : df = 2 , 4, and 60
24:08
df = 60
29:16
df = 2
29:59
How to Find It?
31:01
'Student's t-distribution' or 't-distribution'
31:02
Excel Example
33:06
Example 1: Which Distribution Do You Use? Z or t?
45:26
Example 2: Friends on Facebook
47:41
Example 3: t Distributions
52:15
Example 4: t Distributions, Confidence Interval, and Mean
55:59
Introduction to Hypothesis Testing

1h 6m 33s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Issues to Overcome in Inferential Statistics
1:35
Issues to Overcome in Inferential Statistics
1:36
What Happens When We Don't Know What the Population Looks Like?
2:57
How Do We Know whether a Sample is Sufficiently Unlikely?
3:43
Hypothesizing a Population
6:44
Hypothesizing a Population
6:45
Null Hypothesis
8:07
Alternative Hypothesis
8:56
Hypotheses
11:58
Hypotheses
11:59
Errors in Hypothesis Testing
14:22
Errors in Hypothesis Testing
14:23
Steps of Hypothesis Testing
21:15
Steps of Hypothesis Testing
21:16
Single Sample HT (When Sigma Available)
26:08
Example: Average Facebook Friends
26:09
Step 1
27:08
Step 2
27:58
Step 3
28:17
Step 4
32:18
Single Sample HT (When Sigma Not Available)
36:33
Example: Average Facebook Friends
36:34
Step 1: Hypothesis Testing
36:58
Step 2: Significance Level
37:25
Step 3: Decision Stage
37:40
Step 4: Sample
41:36
Sigma and p-value
45:04
Sigma and p-value
45:05
One-tailed vs. Two-tailed Hypotheses
45:51
Example 1: Hypothesis Testing
48:37
Example 2: Heights of Women in the US
57:43
Example 3: Select the Best Way to Complete This Sentence
1:03:23
Confidence Intervals for the Difference of Two Independent Means

55m 14s

Intro
0:00
Roadmap
0:14
Roadmap
0:15
One Mean vs. Two Means
1:17
One Mean vs. Two Means
1:18
Notation
2:41
A Sample! A Set!
2:42
Mean of X, Mean of Y, and Difference of Two Means
3:56
SE of X
4:34
SE of Y
6:28
Sampling Distribution of the Difference between Two Means (SDoD)
7:48
Sampling Distribution of the Difference between Two Means (SDoD)
7:49
Rules of the SDoD (similar to CLT!)
15:00
Mean for the SDoD Null Hypothesis
15:01
Standard Error
17:39
When can We Construct a CI for the Difference between Two Means?
21:28
Three Conditions
21:29
Finding CI
23:56
One Mean CI
23:57
Two Means CI
25:45
Finding t
29:16
Finding t
29:17
Interpreting CI
30:25
Interpreting CI
30:26
Better Estimate of s (s pool)
34:15
Better Estimate of s (s pool)
34:16
Example 1: Confidence Intervals
42:32
Example 2: SE of the Difference
52:36
Hypothesis Testing for the Difference of Two Independent Means

50m

Intro
0:00
Roadmap
0:06
Roadmap
0:07
The Goal of Hypothesis Testing
0:56
One Sample and Two Samples
0:57
Sampling Distribution of the Difference between Two Means (SDoD)
3:42
Sampling Distribution of the Difference between Two Means (SDoD)
3:43
Rules of the SDoD (Similar to CLT!)
6:46
Shape
6:47
Mean for the Null Hypothesis
7:26
Standard Error for Independent Samples (When Variance is Homogenous)
8:18
Standard Error for Independent Samples (When Variance is not Homogenous)
9:25
Same Conditions for HT as for CI
10:08
Three Conditions
10:09
Steps of Hypothesis Testing
11:04
Steps of Hypothesis Testing
11:05
Formulas that Go with Steps of Hypothesis Testing
13:21
Step 1
13:25
Step 2
14:18
Step 3
15:00
Step 4
16:57
Example 1: Hypothesis Testing for the Difference of Two Independent Means
18:47
Example 2: Hypothesis Testing for the Difference of Two Independent Means
33:55
Example 3: Hypothesis Testing for the Difference of Two Independent Means
44:22
Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means

1h 14m 11s

Intro
0:00
Roadmap
0:09
Roadmap
0:10
The Goal of Hypothesis Testing
1:27
One Sample and Two Samples
1:28
Independent Samples vs. Paired Samples
3:16
Independent Samples vs. Paired Samples
3:17
Which is Which?
5:20
Independent SAMPLES vs. Independent VARIABLES
7:43
Independent SAMPLES vs. Independent VARIABLES
7:44
T-tests Always…
10:48
T-tests Always…
10:49
Notation for Paired Samples
12:59
Notation for Paired Samples
13:00
Steps of Hypothesis Testing for Paired Samples
16:13
Steps of Hypothesis Testing for Paired Samples
16:14
Rules of the SDoD (Adding on Paired Samples)
18:03
Shape
18:04
Mean for the Null Hypothesis
18:31
Standard Error for Independent Samples (When Variance is Homogenous)
19:25
Standard Error for Paired Samples
20:39
Formulas that go with Steps of Hypothesis Testing
22:59
Formulas that go with Steps of Hypothesis Testing
23:00
Confidence Intervals for Paired Samples
30:32
Confidence Intervals for Paired Samples
30:33
Example 1: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
32:28
Example 2: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
44:02
Example 3: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
52:23
Type I and Type II Errors

31m 27s

Intro
0:00
Roadmap
0:18
Roadmap
0:19
Errors and Relationship to HT and the Sample Statistic?
1:11
Errors and Relationship to HT and the Sample Statistic?
1:12
Instead of a Box…Distributions!
7:00
One Sample t-test: Friends on Facebook
7:01
Two Sample t-test: Friends on Facebook
13:46
Usually, Lots of Overlap between Null and Alternative Distributions
16:59
Overlap between Null and Alternative Distributions
17:00
How Distributions and 'Box' Fit Together
22:45
How Distributions and 'Box' Fit Together
22:46
Example 1: Types of Errors
25:54
Example 2: Types of Errors
27:30
Example 3: What is the Danger of the Type I Error?
29:38
Effect Size & Power

44m 41s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Distance between Distributions: Sample t
0:49
Distance between Distributions: Sample t
0:50
Problem with Distance in Terms of Standard Error
2:56
Problem with Distance in Terms of Standard Error
2:57
Test Statistic (t) vs. Effect Size (d or g)
4:38
Test Statistic (t) vs. Effect Size (d or g)
4:39
Rules of Effect Size
6:09
Rules of Effect Size
6:10
Why Do We Need Effect Size?
8:21
Tells You the Practical Significance
8:22
HT can be Deceiving…
10:25
Important Note
10:42
What is Power?
11:20
What is Power?
11:21
Why Do We Need Power?
14:19
Conditional Probability and Power
14:20
Power is:
16:27
Can We Calculate Power?
19:00
Can We Calculate Power?
19:01
How Does Alpha Affect Power?
20:36
How Does Alpha Affect Power?
20:37
How Does Effect Size Affect Power?
25:38
How Does Effect Size Affect Power?
25:39
How Do Variability and Sample Size Affect Power?
27:56
How Do Variability and Sample Size Affect Power?
27:57
How Do We Increase Power?
32:47
Increasing Power
32:48
Example 1: Effect Size & Power
35:40
Example 2: Effect Size & Power
37:38
Example 3: Effect Size & Power
40:55
Section 11: Analysis of Variance
F-distributions

24m 46s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
Z- & T-statistic and Their Distribution
0:34
Z- & T-statistic and Their Distribution
0:35
F-statistic
4:55
The F Ratio (the Variance Ratio)
4:56
F-distribution
12:29
F-distribution
12:30
s and p-value
15:00
s and p-value
15:01
Example 1: Why Does F-distribution Stop At 0 But Go On Until Infinity?
18:33
Example 2: F-distributions
19:29
Example 3: F-distributions and Heights
21:29
ANOVA with Independent Samples

1h 9m 25s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
The Limitations of t-tests
1:12
The Limitations of t-tests
1:13
Two Major Limitations of Many t-tests
3:26
Two Major Limitations of Many t-tests
3:27
Ronald Fisher's Solution… F-test! New Null Hypothesis
4:43
Ronald Fisher's Solution… F-test! New Null Hypothesis (Omnibus Test - One Test to Rule Them All!)
4:44
Analysis of Variance (ANOVA) Notation
7:47
Analysis of Variance (ANOVA) Notation
7:48
Partitioning (Analyzing) Variance
9:58
Total Variance
9:59
Within-group Variation
14:00
Between-group Variation
16:22
Time out: Review Variance & SS
17:05
Time out: Review Variance & SS
17:06
F-statistic
19:22
The F Ratio (the Variance Ratio)
19:23
S²bet = SSbet / dfbet
22:13
What is This?
22:14
How Many Means?
23:20
So What is the dfbet?
23:38
So What is SSbet?
24:15
S²w = SSw / dfw
26:05
What is This?
26:06
How Many Means?
27:20
So What is the dfw?
27:36
So What is SSw?
28:18
Chart of Independent Samples ANOVA
29:25
Chart of Independent Samples ANOVA
29:26
Example 1: Who Uploads More Photos: Unknown Ethnicity, Latino, Asian, Black, or White Facebook Users?
35:52
Hypotheses
35:53
Significance Level
39:40
Decision Stage
40:05
Calculate Samples' Statistic and p-Value
44:10
Reject or Fail to Reject H0
55:54
Example 2: ANOVA with Independent Samples
58:21
Repeated Measures ANOVA

1h 15m 13s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
The Limitations of t-tests
0:36
Who Uploads more Pictures and Which Photo-Type is Most Frequently Used on Facebook?
0:37
ANOVA (F-test) to the Rescue!
5:49
Omnibus Hypothesis
5:50
Analyze Variance
7:27
Independent Samples vs. Repeated Measures
9:12
Same Start
9:13
Independent Samples ANOVA
10:43
Repeated Measures ANOVA
12:00
Independent Samples ANOVA
16:00
Same Start: All the Variance Around Grand Mean
16:01
Independent Samples
16:23
Repeated Measures ANOVA
18:18
Same Start: All the Variance Around Grand Mean
18:19
Repeated Measures
18:33
Repeated Measures F-statistic
21:22
The F Ratio (The Variance Ratio)
21:23
S²bet = SSbet / dfbet
23:07
What is This?
23:08
How Many Means?
23:39
So What is the dfbet?
23:54
So What is SSbet?
24:32
S² resid = SS resid / df resid
25:46
What is This?
25:47
So What is SS resid?
26:44
So What is the df resid?
27:36
SS subj and df subj
28:11
What is This?
28:12
How Many Subject Means?
29:43
So What is df subj?
30:01
So What is SS subj?
30:09
SS total and df total
31:42
What is This?
31:43
What is the Total Number of Data Points?
32:02
So What is df total?
32:34
So What is SS total?
32:47
Chart of Repeated Measures ANOVA
33:19
Chart of Repeated Measures ANOVA: F and Between-samples Variability
33:20
Chart of Repeated Measures ANOVA: Total Variability, Within-subject (case) Variability, Residual Variability
35:50
Example 1: Which is More Prevalent on Facebook: Tagged, Uploaded, Mobile, or Profile Photos?
40:25
Hypotheses
40:26
Significance Level
41:46
Decision Stage
42:09
Calculate Samples' Statistic and p-Value
46:18
Reject or Fail to Reject H0
57:55
Example 2: Repeated Measures ANOVA
58:57
Example 3: What's the Problem with a Bunch of Tiny t-tests?
1:13:59
Section 12: Chi-square Test
Chi-Square Goodness-of-Fit Test

58m 23s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Where Does the Chi-Square Test Belong?
0:50
Where Does the Chi-Square Test Belong?
0:51
A New Twist on HT: Goodness-of-Fit
7:23
HT in General
7:24
Goodness-of-Fit HT
8:26
Hypotheses about Proportions
12:17
Null Hypothesis
12:18
Alternative Hypothesis
13:23
Example
14:38
Chi-Square Statistic
17:52
Chi-Square Statistic
17:53
Chi-Square Distributions
24:31
Chi-Square Distributions
24:32
Conditions for Chi-Square
28:58
Condition 1
28:59
Condition 2
30:20
Condition 3
30:32
Condition 4
31:47
Example 1: Chi-Square Goodness-of-Fit Test
32:23
Example 2: Chi-Square Goodness-of-Fit Test
44:34
Example 3: Which of These Statements Describe Properties of the Chi-Square Goodness-of-Fit Test?
56:06
Chi-Square Test of Homogeneity

51m 36s

Intro
0:00
Roadmap
0:09
Roadmap
0:10
Goodness-of-Fit vs. Homogeneity
1:13
Goodness-of-Fit HT
1:14
Homogeneity
2:00
Analogy
2:38
Hypotheses About Proportions
5:00
Null Hypothesis
5:01
Alternative Hypothesis
6:11
Example
6:33
Chi-Square Statistic
10:12
Same as Goodness-of-Fit Test
10:13
Set Up Data
12:28
Setting Up Data Example
12:29
Expected Frequency
16:53
Expected Frequency
16:54
Chi-Square Distributions & df
19:26
Chi-Square Distributions & df
19:27
Conditions for Test of Homogeneity
20:54
Condition 1
20:55
Condition 2
21:39
Condition 3
22:05
Condition 4
22:23
Example 1: Chi-Square Test of Homogeneity
22:52
Example 2: Chi-Square Test of Homogeneity
32:10
Section 13: Overview of Statistics
Overview of Statistics

18m 11s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
The Statistical Tests (HT) We've Covered
0:28
The Statistical Tests (HT) We've Covered
0:29
Organizing the Tests We've Covered…
1:08
One Sample: Continuous DV and Categorical DV
1:09
Two Samples: Continuous DV and Categorical DV
5:41
More Than Two Samples: Continuous DV and Categorical DV
8:21
The Following Data: OK Cupid
10:10
The Following Data: OK Cupid
10:11
Example 1: Weird-MySpace-Angle Profile Photo
10:38
Example 2: Geniuses
12:30
Example 3: Promiscuous iPhone Users
13:37
Example 4: Women, Aging, and Messaging
16:07

Lecture Comments (3)

0 answers

Post by Terry Kim on October 20, 2015

Why are we adding dfs and variances when we are actually calculating the DIFFERENCE? H(null): mu_(x-y) = 0; here it is 0 because it is the difference. But I don't get why we add the dfs and variances: if it's s_(x-y), shouldn't it be sqrt(s^2_x - s^2_y)?

0 answers

Post by Professor Son on November 12, 2014

Just for students who happen to have a class with me, I don't emphasize s-pool a lot because typically it's more conservative to assume that they are separate. If you take a more advanced statistics class, you could learn about hypothesis testing that allows us to infer whether we can pool standard deviations together.

0 answers

Post by Professor Son on November 12, 2014

In the section about s-pool, I accidentally refer to SE as "sample error" but what I meant to say was "standard error."

Transcription: Confidence Intervals for the Difference of Two Independent Means

Hi and welcome to www.educator.com. Today we are going to talk about confidence intervals for the difference of two independent means. It matters that these are independent means, because later we are going to move on to non-independent, or paired, means.

We have been talking about how to find confidence intervals and do hypothesis testing for one mean. Now we are going to talk about how we go about doing that for two means. We will talk about what "two means" really means, cover some new notation, and introduce the sampling distribution of the difference between two means. I am going to shorten that to SDoD; this is not official notation, just an abbreviation, because "sampling distribution of the difference between two means" is long to say. We will talk about the rules of the SDoD, and those are going to be very similar to the CLT (the central limit theorem), with just a few differences. Finally, we will set it all up so that we can find and interpret the confidence interval.

One mean versus two means. So far we have only looked at how to compare one mean against some population, but that is not usually how scientific studies go. Most scientific studies involve comparisons: comparisons between different kinds of water samples, or language acquisition for one group of babies versus another, or scores from the control group versus the experimental group. In science we are often comparing two different sets, two different samples. Two means really means two samples.

In the one-mean scenarios, we have one sample and we compare it to an idea in hypothesis testing, or we use that one sample to derive the potential population mean. But now we are going to be using two different means. What do we do with those two means? Do we just do the one-sample thing two times, or is there a different way? Actually, there is a different and more efficient way to go about this. Two means is a different story; a related, but different, story.

In order to talk about two means and two samples, we have to talk about some new notation. It is totally arbitrary that we use x and y; you could use j and k, or m and n, whatever you want. X and y are just the generic variables that we use, so feel free to use your favorite letters. One sample will be called x, and all of its members will be x sub 1, x sub 2, x sub 3, and so on. When we say x sub i, we are talking about all of those members. We do not call the other sample x as well, because we would get confused, and we cannot call it x2 because x sub 2 already has a meaning. What we call it is y, and y sub i now means all of its members. That way we keep them separate. In fact, this x and y is going to follow us from here on out. For instance, when we talk about the mean of x, we call it x bar. What would be the mean of y? Y bar, right. That makes sense. And if you called a sample b, its mean would be b bar. The notation just follows along.

When we talk about the difference between two means, we are always talking about this difference: x bar - y bar. You could also do y bar - x bar; it does not matter. But that is definitely what we mean by the difference between two means.

We can talk about the standard error of a whole bunch of x bars (the standard error of x) and the standard error of y, and likewise the variance of x and the variance of y. With all of these kinds of things, we need something to denote that they are a little different from each other. Take the standard error of x. When we say standard error, keep in mind that if we double-click on it, it means the standard deviation of a whole bunch of means, the standard deviation of a whole bunch of x bars. Sometimes we do not have sigma, so we cannot get that value directly; we have to estimate sigma from s, and that estimate is written s sub x bar. If we wanted to know how to compute it, we would take s sub x and divide by √n sub x. Notice that these are different: s sub x bar is the standard error, while s sub x is the actual standard deviation of your sample, and we divide by √n, not just any n but the n of your sample x. In this way we can precisely denote the standard error of x, the standard deviation of x, and the n of x.

You can do the same thing with y. For the standard error of y, if you had sigma, you could call it sigma sub y bar, because it is the standard deviation of a whole bunch of y bars. If you do not have sigma, you estimate it and use s sub y bar: instead of the standard deviation of x, we take the standard deviation of y and divide it by √n sub y. It makes everything look a little more complicated, because now I have to write sub x and sub y after everything, but it is not hard, because the formula, if you look, remains exactly the same. The only thing that is different is a little pointer to say whether we are talking about the standard deviation of our x sample or of our y sample. Even though it looks more complicated, deep down the structure is still: standard error equals the standard deviation of the sample divided by √n.
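
To make the notation concrete, here is a minimal sketch in Python (the lecture itself works in Excel; the data values below are invented purely for illustration):

    # Two samples x and y: their means, sample standard deviations,
    # and standard errors, following the notation above.
    import math

    x = [12, 15, 11, 14, 13, 16]   # invented sample x
    y = [10, 9, 12, 11, 10]        # invented sample y

    def mean(data):
        return sum(data) / len(data)

    def sample_sd(data):
        # s: square root of (sum of squared deviations / (n - 1))
        m = mean(data)
        return math.sqrt(sum((v - m) ** 2 for v in data) / (len(data) - 1))

    x_bar, y_bar = mean(x), mean(y)           # x bar and y bar
    se_x = sample_sd(x) / math.sqrt(len(x))   # s sub x bar = s_x / sqrt(n_x)
    se_y = sample_sd(y) / math.sqrt(len(y))   # s sub y bar = s_y / sqrt(n_y)
    print(x_bar - y_bar, se_x, se_y)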

Let us talk about what this means: the sampling distribution of the difference between two means. Let us first start at the population level. Right now we do not know anything about the populations; we do not know whether they are uniform, or what their means and standard deviations are. Let us call one population x and the other y. From the x population and the y population we are going to draw out samples and create a sampling distribution for each; that is the SDoM (the sampling distribution of the mean). Here is a whole bunch of x bars, and here is a whole bunch of y bars. Thanks to the central limit theorem, if we have a big enough n and so on, we know that we can assume normality. So here we know a little bit more than we know about the population. We know the standard error in each SDoM; I will write it with s from here on, because we are basically going to assume real-life examples where we do not have the population standard deviation (about the only time we get that is in problems given to you in a statistics textbook). We will call it s sub x bar, and that is the standard deviation of x divided by √n sub x. We also know the standard error of y, and that is going to be the standard deviation of y divided by √n sub y. Notice that you do not write s sub y bar again on the right-hand side; it would not make sense for the standard error to equal itself divided by something else. You want to keep this s sub y bar special and distinct, because the standard error is an entirely different idea from the standard deviation.

Now that we have two SDoMs, if we just decided to stop here, we would not need to know anything new about creating a confidence interval for two means. You would just create two separate confidence intervals: take that x bar, take that y bar, and construct a 95% confidence interval for each of them. You are done. But what we actually want is not two sampling distributions of means; we would like one sampling distribution of the difference between two means. That is what I am going to call the SDoD.

Here is what you have to imagine. In order to get the SDoM, we had to go to the population, draw out samples of size n, and plot the means, millions and millions of times. That is what we had to do for x, and we also have to do it for y: take the entire population of y, pull out samples, and plot the means until we get that distribution of means. Now imagine pulling out one mean from each distribution at random, finding the difference between those two means, and plotting that difference down below. Do that over and over again. You would start to get a distribution of the differences of these two means, a distribution of a whole bunch of x bar - y bar values, and that distribution looks normal. This is actually one of the principles of probability distributions that we have covered before; I think we covered it with binomial distributions. I know this is not a binomial distribution, but the same principle applies here: if you draw from two normally distributed populations and subtract one draw from the other, you get a normal distribution.
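
The pulling-out-and-differencing process just described is easy to simulate. Here is a minimal sketch in Python, not from the lecture; the population means, standard deviations, and sample sizes are made-up assumptions:

    # Simulate the SDoD: repeatedly draw one sample from each population,
    # take the difference of the two sample means, and collect the differences.
    import random
    import statistics

    random.seed(1)
    n_x, n_y = 30, 25   # assumed sample sizes

    def sample_mean(mu, sigma, n):
        # one sample of size n from a normal population; return its mean
        return statistics.fmean(random.gauss(mu, sigma) for _ in range(n))

    diffs = [sample_mean(100, 15, n_x) - sample_mean(90, 12, n_y)
             for _ in range(10_000)]

    # The differences come out roughly normal, centered near 100 - 90 = 10,
    # with spread near sqrt(15**2 / n_x + 12**2 / n_y).
    print(statistics.fmean(diffs), statistics.stdev(diffs))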

We have this distribution, and what we now want to find is not just mu sub x bar or mu sub y bar; that is not what we want. What we want to find is the mu of x bar - y bar, because this distribution is made up of x bar - y bar values, and we want the mu of that. Not only that, we also want to find the standard error of this thing. I think we can figure out what that might be, at least the notation for it. Standard errors always carry these x bar and y bar subscripts, so this one is notated as the standard deviation of x bar - y bar, and it is called the standard error of the difference; "the difference" is just a shortcut way of saying x bar - y bar. You can think of it as the standard error of a sampling distribution of a whole bunch of differences of means.

In order to find it, we again draw on probability principles, but let us go to variance first. The variance of this distribution is going to be the variance of x bar plus the variance of y bar; if you go back to your probability principles, you will see why. From this we can figure out the standard error by square-rooting both sides. We are just building on all the things we have learned so far: we know the population, we know how to do the SDoM, and we are going to use two SDoMs in order to create a sampling distribution of differences.
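
Written out in standard notation (this restates the rule above, with s in place of sigma since we are estimating from samples):

$$\operatorname{Var}(\bar{x}-\bar{y}) = \operatorname{Var}(\bar{x}) + \operatorname{Var}(\bar{y}) = \frac{s_x^2}{n_x} + \frac{s_y^2}{n_y}, \qquad s_{\bar{x}-\bar{y}} = \sqrt{\frac{s_x^2}{n_x} + \frac{s_y^2}{n_y}}$$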

Let us talk about the rules of the SDoD; these are going to be very, very similar to the CLT. The first rule is this: if the SDoM for x and the SDoM for y are both normal, then the SDoD is going to be normal too. Think about when those are normal. They are normal if the population is normal; that is one case. They are also normal when n is large. So in certain cases you can assume that each SDoM is normal, and if both of them have met those conditions, then you can assume that the SDoD is normal too. The conditions under which we can assume normality are not crazy; they are things we have already learned.

What about the mean? It is always shape, center, spread. The mean of the SDoD is characterized by mu sub x bar - y bar. Now let us consider the null hypothesis. Under the null hypothesis, the usual idea is that the two groups are not different; nothing stands out. Y does not stand out from x, and x does not stand out from y; we are saying they are very similar. If that is the case, then what we are saying is that when we take x bar - y bar over and over again, on average the difference should be 0. Sometimes the difference will be positive, sometimes negative, but if x and y are roughly the same, then we should get a difference of 0 on average. So for the null hypothesis, the mean is 0. What would the alternative hypothesis be? Something like: the mean of the SDoD is not 0. The null is the case where x and y are assumed to be the same; that is always what the null hypothesis says. They are assumed to be the same, not significantly different from each other. That is the mean of the SDoD.
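
In symbols (matching the notation used above), the hypotheses just described are:

$$H_0: \mu_{\bar{x}-\bar{y}} = 0 \qquad\qquad H_a: \mu_{\bar{x}-\bar{y}} \neq 0$$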

What about standard error? In order to calculate standard error, you have to know whether these are independent samples or not. Remember, going back to sampling, independent samples are where you know that the two samples come from different populations, and picking one does not change the probability of picking the other. As long as these are independent samples, then you can use these ideas for the standard error.

As we said before, it is easier to think about the variance of the SDOD first, because that rule is quite easy. The variance of the SDOD is going to be just the variance of one SDOM + the variance of the SDOM for the other guy. Notice that these are the x bars and the y bars: these are for the SDOMs, not for the populations nor the samples. From here you can just derive the standard error formula; we can square root both sides. If you wanted to just get standard error, then it would just be the square root of adding each of these variances together.

Let us say you double-click on this guy; what is inside of him? He is a stand-in for the more detailed idea of s sub x squared / n sub x. Remember, when we talk about standard error we are talking about standard error = s / √n, and the variance of the SDOM = s² / n; if you imagine squaring the standard error you would get s² / n, and it is the variances we need. We need to add the variances together before we square root. Here we also have the other piece, s sub y squared / n sub y.

You could write it either like this or like this; they mean the same thing. They are perfectly equivalent. You do have to remember that when you have all of this under the square root sign, the square root sign acts like parentheses, so you have to do all of it before you square root.

That is standard error. I know it looks a little complicated, but these are just all the principles we learned before; now we have to remember whether something comes from the x or the y distribution. That is one of the few things you have to ask yourself whenever we deal with two samples.
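As a minimal sketch of that recipe (the function name and example numbers are mine, not the lecture's), the standard error of the difference in code is just:

```python
import math

def se_of_difference(s_x, n_x, s_y, n_y):
    """Standard error of x-bar minus y-bar for independent samples:
    add the two SDOM variances (s squared over n for each), then square root."""
    var_x_bar = s_x ** 2 / n_x   # variance of the SDOM for x
    var_y_bar = s_y ** 2 / n_y   # variance of the SDOM for y
    return math.sqrt(var_x_bar + var_y_bar)

# Matches the simulated spread from earlier: about 2.85.
print(se_of_difference(10, 30, 12, 30))
```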

Now that we know the revised CLT for this sampling distribution of differences, we need to ask: when can we construct a confidence interval for the difference between two means? Actually, these conditions are very similar to the conditions that must be met when we construct an SDOM, with a couple of differences because we are dealing with two samples. Three conditions have to be met; all three of these have to be checked.

One is independence, the notion of independence. The first condition is this: the two samples were randomly and independently selected from two different populations. That is the first thing you have to meet before you can construct this confidence interval.

The second condition is the assumption of normality: how do we know that the SDOD is normal? It needs to be reasonable to assume that both populations that the samples come from are normal, or your sample sizes are sufficiently large. These are the same conditions that apply to the CLT. This is the case where we can assume normality for the SDOMs, but also for the SDOD.

Number 3: in the case of sample surveys, the population size should be at least 10 times larger than the sample size for each sample. The only reason for this is that we talked before about replacement, sampling with replacement versus sampling without replacement. Whenever you are doing a survey you are technically not sampling with replacement, but if your population is large enough, this condition makes it so that you can assume it works pretty much like sampling with replacement. If you have many people, then it does not matter. That is the replacement rule.

Finally, we can get to actually finding the confidence interval. Here is the deal: with confidence intervals, let us just review how we used to do it for one mean, the one-mean confidence interval.

Back in the day when we did one mean and life was nice, what we would often do is take the SDOM, assume that x bar, the sample mean, is at the center of it, and then construct something like a 95% confidence interval. The tails are .025 each, because if this is 95% and symmetrical there is 5% left over, and it needs to be divided between both sides. What we did was find these boundary values by using this idea: the middle, plus or minus however many standard errors away you are. We used either t or z; I am just going to use t from now on, because usually we are not given the standard deviation of the population, so we estimate the standard error from the sample. That was the basic idea from before, and it would give us this value as well as this value. We could say we have 95% confidence that the population mean falls in between these boundaries. That is for one mean.
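Written out, the one-mean interval being reviewed here is:

```latex
\bar{x} \;\pm\; t^{*}\,\frac{s}{\sqrt{n}}
```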

What about two means? In this case, we are not going to be calculating using the SDOM anymore; we are going to use the SDOD. Where the one-mean picture had x bar, the sample mean, in the middle, you can probably guess that the center here might be something as simple as the difference between the two sample means. That is what we assume to be the center of the SDOD.

Just like before, use whatever level of confidence you need. If it is 99%, you have 1% left over on the sides; you have to divide that 1% in half, so .5% for this side and .5% for that side. In this case, let us just keep the 95%.

What we need to do is find these borders, and we can just use the exact same idea again. We can use that exact same idea because we can find the standard error of this distribution; we know what that is. Let me write this out: we will write s sub x bar - y bar.

We can actually translate those one-mean ideas into something like this: take the center, then add or subtract however many jumps away you are, like the distance you are away. That would be like before, but instead of just having x bar in the middle, we have this difference in the middle. The plus or minus t remains the same, the t distribution, but we will have to talk about how to find degrees of freedom for this guy. The new SE is now the SE of the difference.

How do we write that? x bar - y bar, plus or minus t × s sub x bar - y bar. If we wanted to, we could expand that into the square root of the variance of the SDOM for x plus the variance of the SDOM for y. We could unpack all of this if we need to, but this is the basic idea of the confidence interval for two means.
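Putting those pieces together, the two-mean interval is:

```latex
(\bar{x}-\bar{y}) \;\pm\; t^{*}\, s_{\bar{x}-\bar{y}},
\qquad\text{where}\qquad
s_{\bar{x}-\bar{y}} = \sqrt{\frac{s_x^{2}}{n_x} + \frac{s_y^{2}}{n_y}}
```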

In order to do this, I want you to notice something. Here we need to find t, and because we need to find t, we need to find degrees of freedom; but not just any degrees of freedom, because right now we have two: degrees of freedom for x and degrees of freedom for y. We need a degrees of freedom for the difference. That is what we need, so let us figure out how to do that.

We need to find degrees of freedom. We know how to find degrees of freedom for x; that is straightforward: n sub x - 1. And degrees of freedom for y is just going to be n sub y - 1. Life is good; life is easy. How do we find the degrees of freedom for the difference between x and y? That is actually just going to be the degrees of freedom for x + the degrees of freedom for y. We just add them together. If you think about double-clicking on this to unpack it, you get (n sub x - 1) + (n sub y - 1). I am just putting in the parentheses so you can see the natural groupings, but obviously you could do them in any order, because it is all adding and subtracting straight across; they all have the same order of operations.

That is degrees of freedom, and once you have that, you can easily find the t. Look it up in the back of your book, or you can do it in Excel.
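The lecture does this lookup in Excel; as an alternative sketch (the helper name is my own, hypothetical choice), the same degrees-of-freedom rule and t lookup in Python:

```python
from scipy import stats

def t_critical(n_x, n_y, confidence=0.95):
    """Critical t for the difference of two means, using
    df = df_x + df_y = (n_x - 1) + (n_y - 1)."""
    df = (n_x - 1) + (n_y - 1)
    tail = (1 - confidence) / 2      # e.g., 0.025 per tail for 95%
    return stats.t.ppf(1 - tail, df), df
```

For instance, t_critical(1000, 800) returns roughly (1.961, 1798), the numbers used in Example 1 below.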

Let us interpret the confidence interval. We have the confidence interval; let us think about how to say what we have found. I am just going to briefly draw that picture again, because this picture anchors my thinking. Here is our difference of means; when you look at this, think of it as the difference of two means. I guess I could write DOTM, but that is really just a DOM.

Here is what we found: if we find something like a 95% confidence interval, that means we have found these boundaries. We say something like this: the actual difference of the two means of the real populations that x and y come from should be within this interval 95% of the time. Or: we have 95% confidence that the actual difference between the means of population x and population y is within this interval.

That comes from the notion that this is created from the SDOMs. Remember, for the SDOM, the CLT says that its mean is the mean of the population. The population means drop down to the SDOMs, and from the SDOMs we get this. Because of that, we can actually make a conclusion that goes back to the populations.

Let us think about what it means if 0 is not in between here. Remember, the null hypothesis when we think about two means is going to be something like this: mu sub x bar - y bar is equal to 0. This means that on average, when you subtract these two things, the average is going to be 0; there is going to be no difference on average. The alternative hypothesis should then be that the mean of these differences is not 0; they are different.

If 0 is not within this confidence interval, then we have very little reason to suspect that the null hypothesis is true. We could also say that if we do not find 0 in our confidence interval, we might, in hypothesis testing, be able to reject the null hypothesis. But we will get to that later. I just wanted to show you this because the confidence interval here is very tightly linked to the hypothesis testing part; they are like two sides of the same coin.

That is all fairly straightforward, but I feel like I need to cover one other thing, because it is emphasized in some books, and some teachers emphasize it more than others; so I am going to talk to you about s-pooled, because this will come up.

One of the things I hope you noticed was that in order to find the standard error of the SDOD, what we did was take the variance of one SDOM, add it to the variance of the other SDOM, and square root the whole thing. Let me just write that here: s sub x bar - y bar is the square root of one SDOM's variance + the variance of the other SDOM. What we did here was treat them separately and then combine them together.

Although this is an okay way of doing it, in doing this we are assuming that they might have different standard deviations; the two different populations might have two different standard deviations. Normally, that is a reasonable assumption to make. Very few populations have the exact same standard deviation. The vast majority of the time we just assume that if you come from two different populations, you probably have two different standard deviations. This is pretty reasonable to do, like 98% of the time.

But this is actually not as good an estimate of that standard error as if you had used a pooled version of the standard deviation, when pooling is justified.

Here is what I mean. Right now, we use only the x sample to create the standard deviation of x, and only the y sample to create the standard deviation of y. Let me make that explicit: I am going to write this out so that you can actually see the variance of x and the variance of y. We use x to create this guy and we use y to create that guy, and they remain separate.

This is going to take a little reasoning. Think back: if you have more data, then your estimate of the population standard deviation is better; more data, more accurate. Would it not be nice if we took all the guys from the x sample and all the guys from the y sample and put them together, and together estimated the standard deviation? Would that not be nice? Then we would have more data, and more data should give us a more accurate estimate of the population.

You can do that, but only in the case that you have reason to think that the population of x has a similar standard deviation to the population of y, and reason to think they are both normally distributed. Let us say it like this: if you have reason to believe that populations x and y have similar standard deviations, then you can pool the samples together to estimate the standard deviation. You can pool them together, and that estimate is going to be called s-pooled.

There are very few populations that you can do this for. One is something like the height of males and females; height tends to be normally distributed, and we know that. Maybe the height of Asians and Latinos or something; but there are not a lot of examples that come to mind where you can do this. That is why some teachers do not emphasize it, but I know that others do, and that is why I want to definitely go over it.

How do you get s-pooled, and where does it come in? Here is the thing: once we find s-pooled, what we would do is substitute s-pooled in for s sub x and s sub y. Instead of two separate estimates of standard deviation, we use s-pooled; we will actually be using s-pooled squared.

How do we find s-pooled squared? In order to find s-pooled squared, what you do is add up all of the sums of squares, the sum of squares of x and the sum of squares of y, and then divide by the sum of all the degrees of freedom. If I double-click on this, it means (the sum of squares of x + the sum of squares of y) ÷ (degrees of freedom for x + degrees of freedom for y). This is all you need to do in order to find s-pooled squared, and then you substitute it in for s sub x squared and s sub y squared. That is the deal.
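Here is a minimal sketch of that recipe (the function names are my own): add the two sums of squares, divide by the combined degrees of freedom, and then substitute s-pooled squared in for both sample variances in the standard error:

```python
def pooled_variance(xs, ys):
    """s-pooled squared: (SS_x + SS_y) / (df_x + df_y)."""
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    ss_x = sum((v - mean_x) ** 2 for v in xs)   # sum of squares for x
    ss_y = sum((v - mean_y) ** 2 for v in ys)   # sum of squares for y
    df = (len(xs) - 1) + (len(ys) - 1)
    return (ss_x + ss_y) / df

def se_pooled(xs, ys):
    """Standard error of x-bar minus y-bar, with s-pooled squared
    standing in for both s_x squared and s_y squared."""
    sp2 = pooled_variance(xs, ys)
    return (sp2 / len(xs) + sp2 / len(ys)) ** 0.5
```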

In the examples that follow, I am not going to use s-pooled, because there is usually very little reason to assume that we can use it. But a lot of times you might hear the phrase "assumption of homogeneity of variance." If you can assume that these guys have similar, homogeneous variance, then you can use s-pooled. For the most part, the vast majority of the time, you cannot assume homogeneous variance, and because of that we will often use the separate-variances version. However, I should say that some teachers do want you to be able to calculate both; that is the only caveat.

Finally, I should just say one thing: usually the separate-variances version works just about as well as pooling; it is just that sometimes we get more of a benefit from pooling. If worst comes to worst and after this statistics class you only remember the separate-variances version, you are pretty good to go.

Let us go on to some examples. A random sample of American college students was collected to examine quantitative literacy, how good they are at reasoning about quantitative ideas. The survey sampled 1,000 students from four-year institutions, with their mean and standard deviation given, and 800 from two-year institutions, with their mean and standard deviation given. Are the conditions for confidence intervals met? Also, construct a 95% confidence interval and interpret it.

Let us think about the confidence interval requirements. First is independent random samples. It does say random sample, and these are independent populations: one is four-year institutions, one is two-year institutions, and there are very few people attending both of them at the same time. First one, check.

Second one: can we assume normality, either because of the large n or because we know that both of these populations are originally normally distributed? Well, they have pretty large n, so I am going to say number 2, check.

Number 3: is this roughly like sampling with replacement? Although 1,000 students seems like a lot, there are a lot of college students, so I am pretty sure this meets that qualification as well.

Go ahead and construct the 95% confidence interval. Well, it helps to start off with a drawing of the SDOD, just to anchor my thinking. For this mu sub x bar - y bar, we put x bar - y bar at the center; that is what we do with confidence intervals: we use what we have from the samples to figure out what the population might be.

We want to construct a 95% confidence interval, so that is .025 in each tail, and then maybe it will help to figure out the degrees of freedom so that we know the t value to use. Let us figure out degrees of freedom. It is going to be the degrees of freedom for x, and I will call x the four-year university guys, plus the degrees of freedom for y, the two-year university guys. That is going to be 999 + 799, so it is going to be 1800 - 2 = 1798.

We have quite large degrees of freedom; let us find the t for these boundaries. We need to find this and this. Let us find the t first. This is the raw score and this is the t; let me delete some of the stuff. I will just put x bar - y bar in there, and we can find that later. The t values are going to be the boundaries for this guy and for this guy.

What is our t value? You can look it up in the back of your book, or you can do it in Excel. Here we want to use TINV, because we have the probability; remember, this one wants the two-tailed probability, .05, and the degrees of freedom, which is 1798. That gives 1.961; we will write 1.961 just to distinguish it.

Let us write down our confidence interval formula and see what we can do. The confidence interval is going to be x bar - y bar, the middle of this guy, plus or minus t × the standard error of this guy, which is s sub x bar - y bar.

It would probably be helpful to find these pieces. x bar - y bar: that is going to be 330 - 310 = 20.

Let us also try to figure out the standard error of the SDOD, which is s sub x bar - y bar. What I am trying to do is find this guy, and in order to find that guy, let us think about the formula; I am just writing this for myself: the square root of the variance of x bar + the variance of y bar. We do not have the variance of x bar and y bar yet, so let us think about how to find them. The variance of x bar is going to be s sub x squared ÷ n sub x, and the variance of y bar is going to be s sub y squared ÷ n sub y. I wanted to write all these things out because I need to get to a place where I can finally put in the s values.

Finally, I can do that: this is s sub x and this is s sub y. I can put in 111² ÷ n sub x, which is 1000, and I can put in the standard deviation of y squared ÷ 800. I have these two things, and what I need to do is go back up here, add them, and square root them: the square root of this + this.

We have our standard error, which is 4.49, so this is 20 plus or minus 1.961 × 4.49. Now I can do this; I am going to put that into my calculator as well. The high boundary of the confidence interval is going to be 20 + 1.961 × 4.49, and the low boundary is going to be that same thing, with the addition changed to subtraction. Let me get the low end first: 11.2. And the high end is going to be 28.8. The confidence interval is from about 11.2 through 28.8.
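As a quick check of this arithmetic (using only the values stated in the example):

```python
from scipy import stats

mean_diff = 330 - 310                 # x-bar minus y-bar = 20
se_diff = 4.49                        # standard error found above
df = (1000 - 1) + (800 - 1)           # 1798

t_star = stats.t.ppf(0.975, df)       # about 1.961
print(round(mean_diff - t_star * se_diff, 1))  # about 11.2
print(round(mean_diff + t_star * se_diff, 1))  # about 28.8
```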

We have to interpret it; this is the hardest part for a lot of people. We have to say something like this: the true difference between the population means is going to fall between these two numbers 95% of the time. Or: we have 95% confidence that the true difference between the two population means falls between these two numbers.

Let us go to example 2; this will be our last example. If the sample sizes of both samples are the same, what would be the simplified formula for the standard error of the difference? If, in addition, the standard deviations of both samples are the same, what would be the simplified formula for the standard error of the difference? This is just asking: depending on how similar the two samples are, can we simplify the formula for standard error? We can.

Let us write the actual formula out: s sub x bar - y bar = the square root of the variance of x bar + the variance of y bar. If we double-click on these guys, that gives the variance of x / n sub x + the variance of y / n sub y.

It is asking: what if the sample sizes for both samples are the same? What would be the simplified formula? That is saying, if n sub x = n sub y, then what does this become? We get (the variance of x + the variance of y) / n, all under the square root, because the n for each of them is the same. That makes it a lot simpler.

If, in addition, the standard deviations of both samples are the same, then because the standard deviations are the same, the variances are the same. If that were the case as well, then you would just get 2 × s², whatever the equal variance is, divided by n, under the square root.
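Summarizing the two simplifications:

```latex
n_x = n_y = n:\qquad s_{\bar{x}-\bar{y}} = \sqrt{\frac{s_x^{2}+s_y^{2}}{n}}
\qquad\qquad
\text{and if also } s_x = s_y = s:\qquad s_{\bar{x}-\bar{y}} = \sqrt{\frac{2s^{2}}{n}}
```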

That would make it a simple formula, and that would make life a lot easier, but it is not always the case. If it is, you know that it will be simpler for you.

That is it for the confidence intervals for the difference between two means. Thank you for using www.educator.com.
