Dr. Ji Son

Introduction to Hypothesis Testing

Table of Contents

Section 1: Introduction
Descriptive Statistics vs. Inferential Statistics

25m 31s

Intro
0:00
Roadmap
0:10
Roadmap
0:11
Statistics
0:35
Statistics
0:36
Let's Think About High School Science
1:12
Measurement and Find Patterns (Mathematical Formula)
1:13
Statistics = Math of Distributions
4:58
Distributions
4:59
Problematic… but also GREAT
5:58
Statistics
7:33
How is It Different from Other Specializations in Mathematics?
7:34
Statistics is Fundamental in Natural and Social Sciences
7:53
Two Skills of Statistics
8:20
Description (Exploration)
8:21
Inference
9:13
Descriptive Statistics vs. Inferential Statistics: Apply to Distributions
9:58
Descriptive Statistics
9:59
Inferential Statistics
11:05
Populations vs. Samples
12:19
Populations vs. Samples: Is it the Truth?
12:20
Populations vs. Samples: Pros & Cons
13:36
Populations vs. Samples: Descriptive Values
16:12
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:10
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:11
Example 1: Descriptive Statistics vs. Inferential Statistics
19:09
Example 2: Descriptive Statistics vs. Inferential Statistics
20:47
Example 3: Sample, Parameter, Population, and Statistic
21:40
Example 4: Sample, Parameter, Population, and Statistic
23:28
Section 2: About Samples: Cases, Variables, Measurements
About Samples: Cases, Variables, Measurements

32m 14s

Intro
0:00
Data
0:09
Data, Cases, Variables, and Values
0:10
Rows, Columns, and Cells
2:03
Example: Aircrafts
3:52
How Do We Get Data?
5:38
Research: Question and Hypothesis
5:39
Research Design
7:11
Measurement
7:29
Research Analysis
8:33
Research Conclusion
9:30
Types of Variables
10:03
Discrete Variables
10:04
Continuous Variables
12:07
Types of Measurements
14:17
Types of Measurements
14:18
Types of Measurements (Scales)
17:22
Nominal
17:23
Ordinal
19:11
Interval
21:33
Ratio
24:24
Example 1: Cases, Variables, Measurements
25:20
Example 2: Which Scale of Measurement is Used?
26:55
Example 3: What Kind of a Scale of Measurement is This?
27:26
Example 4: Discrete vs. Continuous Variables
30:31
Section 3: Visualizing Distributions
Introduction to Excel

8m 9s

Intro
0:00
Before Visualizing Distribution
0:10
Excel
0:11
Excel: Organization
0:45
Workbook
0:46
Column x Rows
1:50
Tools: Menu Bar, Standard Toolbar, and Formula Bar
3:00
Excel + Data
6:07
Excel and Data
6:08
Frequency Distributions in Excel

39m 10s

Intro
0:00
Roadmap
0:08
Data in Excel and Frequency Distributions
0:09
Raw Data to Frequency Tables
0:42
Raw Data to Frequency Tables
0:43
Frequency Tables: Using Formulas and Pivot Tables
1:28
Example 1: Number of Births
7:17
Example 2: Age Distribution
20:41
Example 3: Height Distribution
27:45
Example 4: Height Distribution of Males
32:19
Frequency Distributions and Features

25m 29s

Intro
0:00
Roadmap
0:10
Data in Excel, Frequency Distributions, and Features of Frequency Distributions
0:11
Example #1
1:35
Uniform
1:36
Example #2
2:58
Unimodal, Skewed Right, and Asymmetric
2:59
Example #3
6:29
Bimodal
6:30
Example #4a
8:29
Symmetric, Unimodal, and Normal
8:30
Point of Inflection and Standard Deviation
11:13
Example #4b
12:43
Normal Distribution
12:44
Summary
13:56
Uniform, Skewed, Bimodal, and Normal
13:57
Sketch Problem 1: Driver's License
17:34
Sketch Problem 2: Life Expectancy
20:01
Sketch Problem 3: Telephone Numbers
22:01
Sketch Problem 4: Length of Time Used to Complete a Final Exam
23:43
Dotplots and Histograms in Excel

42m 42s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Previously
1:02
Data, Frequency Table, and Visualization
1:03
Dotplots
1:22
Dotplots Excel Example
1:23
Dotplots: Pros and Cons
7:22
Pros and Cons of Dotplots
7:23
Dotplots Excel Example Cont.
9:07
Histograms
12:47
Histograms Overview
12:48
Example of Histograms
15:29
Histograms: Pros and Cons
31:39
Pros
31:40
Cons
32:31
Frequency vs. Relative Frequency
32:53
Frequency
32:54
Relative Frequency
33:36
Example 1: Dotplots vs. Histograms
34:36
Example 2: Age of Pennies Dotplot
36:21
Example 3: Histogram of Mammal Speeds
38:27
Example 4: Histogram of Life Expectancy
40:30
Stemplots

12m 23s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
What Sets Stemplots Apart?
0:46
Data Sets, Dotplots, Histograms, and Stemplots
0:47
Example 1: What Do Stemplots Look Like?
1:58
Example 2: Back-to-Back Stemplots
5:00
Example 3: Quiz Grade Stemplot
7:46
Example 4: Quiz Grade & Afterschool Tutoring Stemplot
9:56
Bar Graphs

22m 49s

Intro
0:00
Roadmap
0:05
Roadmap
0:08
Review of Frequency Distributions
0:44
Y-axis and X-axis
0:45
Types of Frequency Visualizations Covered so Far
2:16
Introduction to Bar Graphs
4:07
Example 1: Bar Graph
5:32
Example 1: Bar Graph
5:33
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:07
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:08
Example 2: Create a Frequency Visualization for Gender
14:02
Example 3: Cases, Variables, and Frequency Visualization
16:34
Example 4: What Kind of Graphs are Shown Below?
19:29
Section 4: Summarizing Distributions
Central Tendency: Mean, Median, Mode

38m 50s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
Central Tendency 1
0:56
Way to Summarize a Distribution of Scores
0:57
Mode
1:32
Median
2:02
Mean
2:36
Central Tendency 2
3:47
Mode
3:48
Median
4:20
Mean
5:25
Summation Symbol
6:11
Summation Symbol
6:12
Population vs. Sample
10:46
Population vs. Sample
10:47
Excel Examples
15:08
Finding Mode, Median, and Mean in Excel
15:09
Median vs. Mean
21:45
Effect of Outliers
21:46
Relationship Between Parameter and Statistic
22:44
Type of Measurements
24:00
Which Distributions to Use With
24:55
Example 1: Mean
25:30
Example 2: Using Summation Symbol
29:50
Example 3: Average Calorie Count
32:50
Example 4: Creating an Example Set
35:46
Variability

42m 40s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Variability (or Spread)
0:45
Variability (or Spread)
0:46
Things to Think About
5:45
Things to Think About
5:46
Range, Quartiles and Interquartile Range
6:37
Range
6:38
Interquartile Range
8:42
Interquartile Range Example
10:58
Interquartile Range Example
10:59
Variance and Standard Deviation
12:27
Deviations
12:28
Sum of Squares
14:35
Variance
16:55
Standard Deviation
17:44
Sum of Squares (SS)
18:34
Sum of Squares (SS)
18:35
Population vs. Sample SD
22:00
Population vs. Sample SD
22:01
Population vs. Sample
23:20
Mean
23:21
SD
23:51
Example 1: Find the Mean and Standard Deviation of the Variable Friends in the Excel File
27:21
Example 2: Find the Mean and Standard Deviation of the Tagged Photos in the Excel File
35:25
Example 3: Sum of Squares
38:58
Example 4: Standard Deviation
41:48
Five Number Summary & Boxplots

57m 15s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Summarizing Distributions
0:37
Shape, Center, and Spread
0:38
5 Number Summary
1:14
Boxplot: Visualizing 5 Number Summary
3:37
Boxplot: Visualizing 5 Number Summary
3:38
Boxplots on Excel
9:01
Using 'Stocks' and Using Stacked Columns
9:02
Boxplots on Excel Example
10:14
When are Boxplots Useful?
32:14
Pros
32:15
Cons
32:59
How to Determine Outlier Status
33:24
Rule of Thumb: Upper Limit
33:25
Rule of Thumb: Lower Limit
34:16
Signal Outliers in an Excel Data File Using Conditional Formatting
34:52
Modified Boxplot
48:38
Modified Boxplot
48:39
Example 1: Percentage Values & Lower and Upper Whisker
49:10
Example 2: Boxplot
50:10
Example 3: Estimating IQR From Boxplot
53:46
Example 4: Boxplot and Missing Whisker
54:35
Shape: Calculating Skewness & Kurtosis

41m 51s

Intro
0:00
Roadmap
0:16
Roadmap
0:17
Skewness Concept
1:09
Skewness Concept
1:10
Calculating Skewness
3:26
Calculating Skewness
3:27
Interpreting Skewness
7:36
Interpreting Skewness
7:37
Excel Example
8:49
Kurtosis Concept
20:29
Kurtosis Concept
20:30
Calculating Kurtosis
24:17
Calculating Kurtosis
24:18
Interpreting Kurtosis
29:01
Leptokurtic
29:35
Mesokurtic
30:10
Platykurtic
31:06
Excel Example
32:04
Example 1: Shape of Distribution
38:28
Example 2: Shape of Distribution
39:29
Example 3: Shape of Distribution
40:14
Example 4: Kurtosis
41:10
Normal Distribution

34m 33s

Intro
0:00
Roadmap
0:13
Roadmap
0:14
What is a Normal Distribution
0:44
The Normal Distribution As a Theoretical Model
0:45
Possible Range of Probabilities
3:05
Possible Range of Probabilities
3:06
What is a Normal Distribution
5:07
Can Be Described By
5:08
Properties
5:49
'Same' Shape: Illusion of Different Shape!
7:35
'Same' Shape: Illusion of Different Shape!
7:36
Types of Problems
13:45
Example: Distribution of SAT Scores
13:46
Shape Analogy
19:48
Shape Analogy
19:49
Example 1: The Standard Normal Distribution and Z-Scores
22:34
Example 2: The Standard Normal Distribution and Z-Scores
25:54
Example 3: Sketching a Normal Distribution
28:55
Example 4: Sketching a Normal Distribution
32:32
Standard Normal Distributions & Z-Scores

41m 44s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
A Family of Distributions
0:28
Infinite Set of Distributions
0:29
Transforming Normal Distributions to 'Standard' Normal Distribution
1:04
Normal Distribution vs. Standard Normal Distribution
2:58
Normal Distribution vs. Standard Normal Distribution
2:59
Z-Score, Raw Score, Mean, & SD
4:08
Z-Score, Raw Score, Mean, & SD
4:09
Weird Z-Scores
9:40
Weird Z-Scores
9:41
Excel
16:45
For Normal Distributions
16:46
For Standard Normal Distributions
19:11
Excel Example
20:24
Types of Problems
25:18
Percentage Problem: P(x)
25:19
Raw Score and Z-Score Problems
26:28
Standard Deviation Problems
27:01
Shape Analogy
27:44
Shape Analogy
27:45
Example 1: Deaths Due to Heart Disease vs. Deaths Due to Cancer
28:24
Example 2: Heights of Male College Students
33:15
Example 3: Mean and Standard Deviation
37:14
Example 4: Finding Percentage of Values in a Standard Normal Distribution
37:49
Normal Distribution: PDF vs. CDF

55m 44s

Intro
0:00
Roadmap
0:15
Roadmap
0:16
Frequency vs. Cumulative Frequency
0:56
Frequency vs. Cumulative Frequency
0:57
Frequency vs. Cumulative Frequency
4:32
Frequency vs. Cumulative Frequency Cont.
4:33
Calculus in Brief
6:21
Derivative-Integral Continuum
6:22
PDF
10:08
PDF for Standard Normal Distribution
10:09
PDF for Normal Distribution
14:32
Integral of PDF = CDF
21:27
Integral of PDF = CDF
21:28
Example 1: Cumulative Frequency Graph
23:31
Example 2: Mean, Standard Deviation, and Probability
24:43
Example 3: Mean and Standard Deviation
35:50
Example 4: Age of Cars
49:32
Section 5: Linear Regression
Scatterplots

47m 19s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
Previous Visualizations
0:30
Frequency Distributions
0:31
Compare & Contrast
2:26
Frequency Distributions Vs. Scatterplots
2:27
Summary Values
4:53
Shape
4:54
Center & Trend
6:41
Spread & Strength
8:22
Univariate & Bivariate
10:25
Example Scatterplot
10:48
Shape, Trend, and Strength
10:49
Positive and Negative Association
14:05
Positive and Negative Association
14:06
Linearity, Strength, and Consistency
18:30
Linearity
18:31
Strength
19:14
Consistency
20:40
Summarizing a Scatterplot
22:58
Summarizing a Scatterplot
22:59
Example 1: Gapminder.org, Income x Life Expectancy
26:32
Example 2: Gapminder.org, Income x Infant Mortality
36:12
Example 3: Trend and Strength of Variables
40:14
Example 4: Trend, Strength and Shape for Scatterplots
43:27
Regression

32m 2s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Linear Equations
0:34
Linear Equations: y = mx + b
0:35
Rough Line
5:16
Rough Line
5:17
Regression - A 'Center' Line
7:41
Reasons for Summarizing with a Regression Line
7:42
Predictor and Response Variable
10:04
Goal of Regression
12:29
Goal of Regression
12:30
Prediction
14:50
Example: Servings of Milk Per Year Shown By Age
14:51
Interpolation
17:06
Extrapolation
17:58
Error in Prediction
20:34
Prediction Error
20:35
Residual
21:40
Example 1: Residual
23:34
Example 2: Large and Negative Residual
26:30
Example 3: Positive Residual
28:13
Example 4: Interpret Regression Line & Extrapolate
29:40
Least Squares Regression

56m 36s

Intro
0:00
Roadmap
0:13
Roadmap
0:14
Best Fit
0:47
Best Fit
0:48
Sum of Squared Errors (SSE)
1:50
Sum of Squared Errors (SSE)
1:51
Why Squared?
3:38
Why Squared?
3:39
Quantitative Properties of Regression Line
4:51
Quantitative Properties of Regression Line
4:52
So How do we Find Such a Line?
6:49
SSEs of Different Line Equations & Lowest SSE
6:50
Carl Gauss' Method
8:01
How Do We Find Slope (b1)
11:00
How Do We Find Slope (b1)
11:01
How Do We Find Intercept
15:11
How Do We Find Intercept
15:12
Example 1: Which of These Equations Fit the Above Data Best?
17:18
Example 2: Find the Regression Line for These Data Points and Interpret It
26:31
Example 3: Summarize the Scatterplot and Find the Regression Line.
34:31
Example 4: Examine the Mean of Residuals
43:52
Correlation

43m 58s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Summarizing a Scatterplot Quantitatively
0:47
Shape
0:48
Trend
1:11
Strength: Correlation (r)
1:45
Correlation Coefficient ( r )
2:30
Correlation Coefficient ( r )
2:31
Trees vs. Forest
11:59
Trees vs. Forest
12:00
Calculating r
15:07
Average Product of z-scores for x and y
15:08
Relationship between Correlation and Slope
21:10
Relationship between Correlation and Slope
21:11
Example 1: Find the Correlation between Grams of Fat and Cost
24:11
Example 2: Relationship between r and b1
30:24
Example 3: Find the Regression Line
33:35
Example 4: Find the Correlation Coefficient for this Set of Data
37:37
Correlation: r vs. r-squared

52m 52s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
R-squared
0:44
What is the Meaning of It? Why Squared?
0:45
Parsing Sums of Squares (Parsing Variability)
2:25
SST = SSR + SSE
2:26
What is SST and SSE?
7:46
What is SST and SSE?
7:47
r-squared
18:33
Coefficient of Determination
18:34
If the Correlation is Strong…
20:25
If the Correlation is Strong…
20:26
If the Correlation is Weak…
22:36
If the Correlation is Weak…
22:37
Example 1: Find r-squared for this Set of Data
23:56
Example 2: What Does it Mean that the Simple Linear Regression is a 'Model' of Variance?
33:54
Example 3: Why Does r-squared Only Range from 0 to 1
37:29
Example 4: Find the r-squared for This Set of Data
39:55
Transformations of Data

27m 8s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Why Transform?
0:26
Why Transform?
0:27
Shape-preserving vs. Shape-changing Transformations
5:14
Shape-preserving = Linear Transformations
5:15
Shape-changing Transformations = Non-linear Transformations
6:20
Common Shape-Preserving Transformations
7:08
Common Shape-Preserving Transformations
7:09
Common Shape-Changing Transformations
8:59
Powers
9:00
Logarithms
9:39
Change Just One Variable? Both?
10:38
Log-log Transformations
10:39
Log Transformations
14:38
Example 1: Create, Graph, and Transform the Data Set
15:19
Example 2: Create, Graph, and Transform the Data Set
20:08
Example 3: What Kind of Model would You Choose for this Data?
22:44
Example 4: Transformation of Data
25:46
Section 6: Collecting Data in an Experiment
Sampling & Bias

54m 44s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Descriptive vs. Inferential Statistics
1:04
Descriptive Statistics: Data Exploration
1:05
Example
2:03
To tackle Generalization…
4:31
Generalization
4:32
Sampling
6:06
'Good' Sample
6:40
Defining Samples and Populations
8:55
Population
8:56
Sample
11:16
Why Use Sampling?
13:09
Why Use Sampling?
13:10
Goal of Sampling: Avoiding Bias
15:04
What is Bias?
15:05
Where does Bias Come from: Sampling Bias
17:53
Where does Bias Come from: Response Bias
18:27
Sampling Bias: Bias from Bad Sampling Methods
19:34
Size Bias
19:35
Voluntary Response Bias
21:13
Convenience Sample
22:22
Judgment Sample
23:58
Inadequate Sample Frame
25:40
Response Bias: Bias from 'Bad' Data Collection Methods
28:00
Nonresponse Bias
29:31
Questionnaire Bias
31:10
Incorrect Response or Measurement Bias
37:32
Example 1: What Kind of Biases?
40:29
Example 2: What Biases Might Arise?
44:46
Example 3: What Kind of Biases?
48:34
Example 4: What Kind of Biases?
51:43
Sampling Methods

14m 25s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Biased vs. Unbiased Sampling Methods
0:32
Biased Sampling
0:33
Unbiased Sampling
1:13
Probability Sampling Methods
2:31
Simple Random
2:54
Stratified Random Sampling
4:06
Cluster Sampling
5:24
Two-staged Sampling
6:22
Systematic Sampling
7:25
Example 1: Which Type(s) of Sampling was this?
8:33
Example 2: Describe How to Take a Two-Stage Sample from this Book
10:16
Example 3: Sampling Methods
11:58
Example 4: Cluster Sample Plan
12:48
Research Design

53m 54s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Descriptive vs. Inferential Statistics
0:51
Descriptive Statistics: Data Exploration
0:52
Inferential Statistics
1:02
Variables and Relationships
1:44
Variables
1:45
Relationships
2:49
Not Every Type of Study is an Experiment…
4:16
Category I - Descriptive Study
4:54
Category II - Correlational Study
5:50
Category III - Experimental, Quasi-experimental, Non-experimental
6:33
Category III
7:42
Experimental, Quasi-experimental, and Non-experimental
7:43
Why CAN'T the Other Strategies Determine Causation?
10:18
Third-variable Problem
10:19
Directionality Problem
15:49
What Makes Experiments Special?
17:54
Manipulation
17:55
Control (and Comparison)
21:58
Methods of Control
26:38
Holding Constant
26:39
Matching
29:11
Random Assignment
31:48
Experiment Terminology
34:09
'true' Experiment vs. Study
34:10
Independent Variable (IV)
35:16
Dependent Variable (DV)
35:45
Factors
36:07
Treatment Conditions
36:23
Levels
37:43
Confounds or Extraneous Variables
38:04
Blind
38:38
Blind Experiments
38:39
Double-blind Experiments
39:29
How Categories Relate to Statistics
41:35
Category I - Descriptive Study
41:36
Category II - Correlational Study
42:05
Category III - Experimental, Quasi-experimental, Non-experimental
42:43
Example 1: Research Design
43:50
Example 2: Research Design
47:37
Example 3: Research Design
50:12
Example 4: Research Design
52:00
Between and Within Treatment Variability

41m 31s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Experimental Designs
0:51
Experimental Designs: Manipulation & Control
0:52
Two Types of Variability
2:09
Between Treatment Variability
2:10
Within Treatment Variability
3:31
Updated Goal of Experimental Design
5:47
Updated Goal of Experimental Design
5:48
Example: Drugs and Driving
6:56
Example: Drugs and Driving
6:57
Different Types of Random Assignment
11:27
All Experiments
11:28
Completely Random Design
12:02
Randomized Block Design
13:19
Randomized Block Design
15:48
Matched Pairs Design
15:49
Repeated Measures Design
19:47
Between-subject Variable vs. Within-subject Variable
22:43
Completely Randomized Design
22:44
Repeated Measures Design
25:03
Example 1: Design a Completely Random, Matched Pair, and Repeated Measures Experiment
26:16
Example 2: Block Design
31:41
Example 3: Completely Randomized Designs
35:11
Example 4: Completely Random, Matched Pairs, or Repeated Measures Experiments?
39:01
Section 7: Review of Probability Axioms
Sample Spaces

37m 52s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
Why is Probability Involved in Statistics
0:48
Probability
0:49
Can People Tell the Difference between Cheap and Gourmet Coffee?
2:08
Taste Test with Coffee Drinkers
3:37
If No One can Actually Taste the Difference
3:38
If Everyone can Actually Taste the Difference
5:36
Creating a Probability Model
7:09
Creating a Probability Model
7:10
D'Alembert vs. Necker
9:41
D'Alembert vs. Necker
9:42
Problem with D'Alembert's Model
13:29
Problem with D'Alembert's Model
13:30
Covering Entire Sample Space
15:08
Fundamental Principle of Counting
15:09
Where Do Probabilities Come From?
22:54
Observed Data, Symmetry, and Subjective Estimates
22:55
Checking whether Model Matches Real World
24:27
Law of Large Numbers
24:28
Example 1: Law of Large Numbers
27:46
Example 2: Possible Outcomes
30:43
Example 3: Brands of Coffee and Taste
33:25
Example 4: How Many Different Treatments are there?
35:33
Addition Rule for Disjoint Events

20m 29s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Disjoint Events
0:41
Disjoint Events
0:42
Meaning of 'or'
2:39
In Regular Life
2:40
In Math/Statistics/Computer Science
3:10
Addition Rule for Disjoint Events
3:55
If A and B are Disjoint: P (A and B)
3:56
If A and B are Disjoint: P (A or B)
5:15
General Addition Rule
5:41
General Addition Rule
5:42
Generalized Addition Rule
8:31
If A and B are not Disjoint: P (A or B)
8:32
Example 1: Which of These are Mutually Exclusive?
10:50
Example 2: What is the Probability that You will Have a Combination of One Heads and Two Tails?
12:57
Example 3: Engagement Party
15:17
Example 4: Home Owner's Insurance
18:30
Conditional Probability

57m 19s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
'or' vs. 'and' vs. Conditional Probability
1:07
'or' vs. 'and' vs. Conditional Probability
1:08
'and' vs. Conditional Probability
5:57
P (M or L)
5:58
P (M and L)
8:41
P (M|L)
11:04
P (L|M)
12:24
Tree Diagram
15:02
Tree Diagram
15:03
Defining Conditional Probability
22:42
Defining Conditional Probability
22:43
Common Contexts for Conditional Probability
30:56
Medical Testing: Positive Predictive Value
30:57
Medical Testing: Sensitivity
33:03
Statistical Tests
34:27
Example 1: Drug and Disease
36:41
Example 2: Marbles and Conditional Probability
40:04
Example 3: Cards and Conditional Probability
45:59
Example 4: Votes and Conditional Probability
50:21
Independent Events

24m 27s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Independent Events & Conditional Probability
0:26
Non-independent Events
0:27
Independent Events
2:00
Non-independent and Independent Events
3:08
Non-independent and Independent Events
3:09
Defining Independent Events
5:52
Defining Independent Events
5:53
Multiplication Rule
7:29
Previously…
7:30
But with Independent Events
8:53
Example 1: Which of These Pairs of Events are Independent?
11:12
Example 2: Health Insurance and Probability
15:12
Example 3: Independent Events
17:42
Example 4: Independent Events
20:03
Section 8: Probability Distributions
Introduction to Probability Distributions

56m 45s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Sampling vs. Probability
0:57
Sampling
0:58
Missing
1:30
What is Missing?
3:06
Insight: Probability Distributions
5:26
Insight: Probability Distributions
5:27
What is a Probability Distribution?
7:29
From Sample Spaces to Probability Distributions
8:44
Sample Space
8:45
Probability Distribution of the Sum of Two Dice
11:16
The Random Variable
17:43
The Random Variable
17:44
Expected Value
21:52
Expected Value
21:53
Example 1: Probability Distributions
28:45
Example 2: Probability Distributions
35:30
Example 3: Probability Distributions
43:37
Example 4: Probability Distributions
47:20
Expected Value & Variance of Probability Distributions

53m 41s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Discrete vs. Continuous Random Variables
1:04
Discrete vs. Continuous Random Variables
1:05
Mean and Variance Review
4:44
Mean: Sample, Population, and Probability Distribution
4:45
Variance: Sample, Population, and Probability Distribution
9:12
Example Situation
14:10
Example Situation
14:11
Some Special Cases…
16:13
Some Special Cases…
16:14
Linear Transformations
19:22
Linear Transformations
19:23
What Happens to Mean and Variance of the Probability Distribution?
20:12
n Independent Values of X
25:38
n Independent Values of X
25:39
Compare These Two Situations
30:56
Compare These Two Situations
30:57
Two Random Variables, X and Y
32:02
Two Random Variables, X and Y
32:03
Example 1: Expected Value & Variance of Probability Distributions
35:35
Example 2: Expected Values & Standard Deviation
44:17
Example 3: Expected Winnings and Standard Deviation
48:18
Binomial Distribution

55m 15s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Discrete Probability Distributions
1:42
Discrete Probability Distributions
1:43
Binomial Distribution
2:36
Binomial Distribution
2:37
Multiplicative Rule Review
6:54
Multiplicative Rule Review
6:55
How Many Outcomes with k 'Successes'
10:23
Adults and Bachelor's Degree: Manual List of Outcomes
10:24
P (X=k)
19:37
Putting Together # of Outcomes with the Multiplicative Rule
19:38
Expected Value and Standard Deviation in a Binomial Distribution
25:22
Expected Value and Standard Deviation in a Binomial Distribution
25:23
Example 1: Coin Toss
33:42
Example 2: College Graduates
38:03
Example 3: Types of Blood and Probability
45:39
Example 4: Expected Number and Standard Deviation
51:11
Section 9: Sampling Distributions of Statistics
Introduction to Sampling Distributions

48m 17s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Probability Distributions vs. Sampling Distributions
0:55
Probability Distributions vs. Sampling Distributions
0:56
Same Logic
3:55
Logic of Probability Distribution
3:56
Example: Rolling Two Dice
6:56
Simulating Samples
9:53
To Come Up with Probability Distributions
9:54
In Sampling Distributions
11:12
Connecting Sampling and Research Methods with Sampling Distributions
12:11
Connecting Sampling and Research Methods with Sampling Distributions
12:12
Simulating a Sampling Distribution
14:14
Experimental Design: Regular Sleep vs. Less Sleep
14:15
Logic of Sampling Distributions
23:08
Logic of Sampling Distributions
23:09
General Method of Simulating Sampling Distributions
25:38
General Method of Simulating Sampling Distributions
25:39
Questions that Remain
28:45
Questions that Remain
28:46
Example 1: Mean and Standard Error of Sampling Distribution
30:57
Example 2: What is the Best Way to Describe Sampling Distributions?
37:12
Example 3: Matching Sampling Distributions
38:21
Example 4: Mean and Standard Error of Sampling Distribution
41:51
Sampling Distribution of the Mean

1h 8m 48s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Special Case of General Method for Simulating a Sampling Distribution
1:53
Special Case of General Method for Simulating a Sampling Distribution
1:54
Computer Simulation
3:43
Using Simulations to See Principles behind Shape of SDoM
15:50
Using Simulations to See Principles behind Shape of SDoM
15:51
Conditions
17:38
Using Simulations to See Principles behind Center (Mean) of SDoM
20:15
Using Simulations to See Principles behind Center (Mean) of SDoM
20:16
Conditions: Does n Matter?
21:31
Conditions: Does the Number of Simulations Matter?
24:37
Using Simulations to See Principles behind Standard Deviation of SDoM
27:13
Using Simulations to See Principles behind Standard Deviation of SDoM
27:14
Conditions: Does n Matter?
34:45
Conditions: Does the Number of Simulations Matter?
36:24
Central Limit Theorem
37:13
SHAPE
38:08
CENTER
39:34
SPREAD
39:52
Comparing Population, Sample, and SDoM
43:10
Comparing Population, Sample, and SDoM
43:11
Answering the 'Questions that Remain'
48:24
What Happens When We Don't Know What the Population Looks Like?
48:25
Can We Have Sampling Distributions for Summary Statistics Other than the Mean?
49:42
How Do We Know whether a Sample is Sufficiently Unlikely?
53:36
Do We Always Have to Simulate a Large Number of Samples in Order to get a Sampling Distribution?
54:40
Example 1: Mean Batting Average
55:25
Example 2: Mean Sampling Distribution and Standard Error
59:07
Example 3: Sampling Distribution of the Mean
1:01:04
Sampling Distribution of Sample Proportions

54m 37s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Intro to Sampling Distribution of Sample Proportions (SDoSP)
0:51
Categorical Data (Examples)
0:52
Wish to Estimate Proportion of Population from Sample…
2:00
Notation
3:34
Population Proportion and Sample Proportion Notations
3:35
What's the Difference?
9:19
SDoM vs. SDoSP: Type of Data
9:20
SDoM vs. SDoSP: Shape
11:24
SDoM vs. SDoSP: Center
12:30
SDoM vs. SDoSP: Spread
15:34
Binomial Distribution vs. Sampling Distribution of Sample Proportions
19:14
Binomial Distribution vs. SDoSP: Type of Data
19:17
Binomial Distribution vs. SDoSP: Shape
21:07
Binomial Distribution vs. SDoSP: Center
21:43
Binomial Distribution vs. SDoSP: Spread
24:08
Example 1: Sampling Distribution of Sample Proportions
26:07
Example 2: Sampling Distribution of Sample Proportions
37:58
Example 3: Sampling Distribution of Sample Proportions
44:42
Example 4: Sampling Distribution of Sample Proportions
45:57
Section 10: Inferential Statistics
Introduction to Confidence Intervals

42m 53s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Inferential Statistics
0:50
Inferential Statistics
0:51
Two Problems with This Picture…
3:20
Two Problems with This Picture…
3:21
Solution: Confidence Intervals (CI)
4:59
Solution: Hypothesis Testing (HT)
5:49
Which Parameters are Known?
6:45
Which Parameters are Known?
6:46
Confidence Interval - Goal
7:56
When We Don't Know Mu but Know Sigma
7:57
When We Don't Know
18:27
When We Don't Know Mu nor Sigma
18:28
Example 1: Confidence Intervals
26:18
Example 2: Confidence Intervals
29:46
Example 3: Confidence Intervals
32:18
Example 4: Confidence Intervals
38:31
t Distributions

1h 2m 6s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
When to Use z vs. t?
1:07
When to Use z vs. t?
1:08
What is z and t?
3:02
z-score and t-score: Commonality
3:03
z-score and t-score: Formulas
3:34
z-score and t-score: Difference
5:22
Why not z? (Why t?)
7:24
Why not z? (Why t?)
7:25
But Don't Worry!
15:13
Gosset and t-distributions
15:14
Rules of t Distributions
17:05
t-distributions are More Normal as n Gets Bigger
17:06
t-distributions are a Family of Distributions
18:55
Degrees of Freedom (df)
20:02
Degrees of Freedom (df)
20:03
t Family of Distributions
24:07
t Family of Distributions: df = 2, 4, and 60
24:08
df = 60
29:16
df = 2
29:59
How to Find It?
31:01
'Student's t-distribution' or 't-distribution'
31:02
Excel Example
33:06
Example 1: Which Distribution Do You Use? Z or t?
45:26
Example 2: Friends on Facebook
47:41
Example 3: t Distributions
52:15
Example 4: t Distributions, Confidence Interval, and Mean
55:59
Introduction to Hypothesis Testing

1h 6m 33s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Issues to Overcome in Inferential Statistics
1:35
Issues to Overcome in Inferential Statistics
1:36
What Happens When We Don't Know What the Population Looks Like?
2:57
How Do We Know whether a Sample is Sufficiently Unlikely?
3:43
Hypothesizing a Population
6:44
Hypothesizing a Population
6:45
Null Hypothesis
8:07
Alternative Hypothesis
8:56
Hypotheses
11:58
Hypotheses
11:59
Errors in Hypothesis Testing
14:22
Errors in Hypothesis Testing
14:23
Steps of Hypothesis Testing
21:15
Steps of Hypothesis Testing
21:16
Single Sample HT (When Sigma Available)
26:08
Example: Average Facebook Friends
26:09
Step 1
27:08
Step 2
27:58
Step 3
28:17
Step 4
32:18
Single Sample HT (When Sigma Not Available)
36:33
Example: Average Facebook Friends
36:34
Step 1: Hypothesis Testing
36:58
Step 2: Significance Level
37:25
Step 3: Decision Stage
37:40
Step 4: Sample
41:36
Sigma and p-value
45:04
Sigma and p-value
45:05
One-Tailed vs. Two-Tailed Hypotheses
45:51
Example 1: Hypothesis Testing
48:37
Example 2: Heights of Women in the US
57:43
Example 3: Select the Best Way to Complete This Sentence
1:03:23
Confidence Intervals for the Difference of Two Independent Means

55m 14s

Intro
0:00
Roadmap
0:14
Roadmap
0:15
One Mean vs. Two Means
1:17
One Mean vs. Two Means
1:18
Notation
2:41
A Sample! A Set!
2:42
Mean of X, Mean of Y, and Difference of Two Means
3:56
SE of X
4:34
SE of Y
6:28
Sampling Distribution of the Difference between Two Means (SDoD)
7:48
Sampling Distribution of the Difference between Two Means (SDoD)
7:49
Rules of the SDoD (similar to CLT!)
15:00
Mean for the SDoD Null Hypothesis
15:01
Standard Error
17:39
When can We Construct a CI for the Difference between Two Means?
21:28
Three Conditions
21:29
Finding CI
23:56
One Mean CI
23:57
Two Means CI
25:45
Finding t
29:16
Finding t
29:17
Interpreting CI
30:25
Interpreting CI
30:26
Better Estimate of s (s pool)
34:15
Better Estimate of s (s pool)
34:16
Example 1: Confidence Intervals
42:32
Example 2: SE of the Difference
52:36
Hypothesis Testing for the Difference of Two Independent Means

50m

Intro
0:00
Roadmap
0:06
Roadmap
0:07
The Goal of Hypothesis Testing
0:56
One Sample and Two Samples
0:57
Sampling Distribution of the Difference between Two Means (SDoD)
3:42
Sampling Distribution of the Difference between Two Means (SDoD)
3:43
Rules of the SDoD (Similar to CLT!)
6:46
Shape
6:47
Mean for the Null Hypothesis
7:26
Standard Error for Independent Samples (When Variance is Homogenous)
8:18
Standard Error for Independent Samples (When Variance is not Homogenous)
9:25
Same Conditions for HT as for CI
10:08
Three Conditions
10:09
Steps of Hypothesis Testing
11:04
Steps of Hypothesis Testing
11:05
Formulas that Go with Steps of Hypothesis Testing
13:21
Step 1
13:25
Step 2
14:18
Step 3
15:00
Step 4
16:57
Example 1: Hypothesis Testing for the Difference of Two Independent Means
18:47
Example 2: Hypothesis Testing for the Difference of Two Independent Means
33:55
Example 3: Hypothesis Testing for the Difference of Two Independent Means
44:22
Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means

1h 14m 11s

Intro
0:00
Roadmap
0:09
Roadmap
0:10
The Goal of Hypothesis Testing
1:27
One Sample and Two Samples
1:28
Independent Samples vs. Paired Samples
3:16
Independent Samples vs. Paired Samples
3:17
Which is Which?
5:20
Independent SAMPLES vs. Independent VARIABLES
7:43
Independent SAMPLES vs. Independent VARIABLES
7:44
T-tests Always…
10:48
T-tests Always…
10:49
Notation for Paired Samples
12:59
Notation for Paired Samples
13:00
Steps of Hypothesis Testing for Paired Samples
16:13
Steps of Hypothesis Testing for Paired Samples
16:14
Rules of the SDoD (Adding on Paired Samples)
18:03
Shape
18:04
Mean for the Null Hypothesis
18:31
Standard Error for Independent Samples (When Variance is Homogenous)
19:25
Standard Error for Paired Samples
20:39
Formulas that go with Steps of Hypothesis Testing
22:59
Formulas that go with Steps of Hypothesis Testing
23:00
Confidence Intervals for Paired Samples
30:32
Confidence Intervals for Paired Samples
30:33
Example 1: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
32:28
Example 2: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
44:02
Example 3: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
52:23
Type I and Type II Errors

31m 27s

Intro
0:00
Roadmap
0:18
Roadmap
0:19
Errors and Relationship to HT and the Sample Statistic?
1:11
Errors and Relationship to HT and the Sample Statistic?
1:12
Instead of a Box…Distributions!
7:00
One Sample t-test: Friends on Facebook
7:01
Two Sample t-test: Friends on Facebook
13:46
Usually, Lots of Overlap between Null and Alternative Distributions
16:59
Overlap between Null and Alternative Distributions
17:00
How Distributions and 'Box' Fit Together
22:45
How Distributions and 'Box' Fit Together
22:46
Example 1: Types of Errors
25:54
Example 2: Types of Errors
27:30
Example 3: What is the Danger of the Type I Error?
29:38
Effect Size & Power

44m 41s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Distance between Distributions: Sample t
0:49
Distance between Distributions: Sample t
0:50
Problem with Distance in Terms of Standard Error
2:56
Problem with Distance in Terms of Standard Error
2:57
Test Statistic (t) vs. Effect Size (d or g)
4:38
Test Statistic (t) vs. Effect Size (d or g)
4:39
Rules of Effect Size
6:09
Rules of Effect Size
6:10
Why Do We Need Effect Size?
8:21
Tells You the Practical Significance
8:22
HT can be Deceiving…
10:25
Important Note
10:42
What is Power?
11:20
What is Power?
11:21
Why Do We Need Power?
14:19
Conditional Probability and Power
14:20
Power is:
16:27
Can We Calculate Power?
19:00
Can We Calculate Power?
19:01
How Does Alpha Affect Power?
20:36
How Does Alpha Affect Power?
20:37
How Does Effect Size Affect Power?
25:38
How Does Effect Size Affect Power?
25:39
How Does Variability and Sample Size Affect Power?
27:56
How Does Variability and Sample Size Affect Power?
27:57
How Do We Increase Power?
32:47
Increasing Power
32:48
Example 1: Effect Size & Power
35:40
Example 2: Effect Size & Power
37:38
Example 3: Effect Size & Power
40:55
Section 11: Analysis of Variance
F-distributions

24m 46s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
Z- & T-statistic and Their Distribution
0:34
Z- & T-statistic and Their Distribution
0:35
F-statistic
4:55
The F Ratio (the Variance Ratio)
4:56
F-distribution
12:29
F-distribution
12:30
s and p-value
15:00
s and p-value
15:01
Example 1: Why Does F-distribution Stop At 0 But Go On Until Infinity?
18:33
Example 2: F-distributions
19:29
Example 3: F-distributions and Heights
21:29
ANOVA with Independent Samples

1h 9m 25s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
The Limitations of t-tests
1:12
The Limitations of t-tests
1:13
Two Major Limitations of Many t-tests
3:26
Two Major Limitations of Many t-tests
3:27
Ronald Fisher's Solution… F-test! New Null Hypothesis
4:43
Ronald Fisher's Solution… F-test! New Null Hypothesis (Omnibus Test - One Test to Rule Them All!)
4:44
Analysis of Variance (ANOVA) Notation
7:47
Analysis of Variance (ANOVA) Notation
7:48
Partitioning (Analyzing) Variance
9:58
Total Variance
9:59
Within-group Variation
14:00
Between-group Variation
16:22
Time out: Review Variance & SS
17:05
Time out: Review Variance & SS
17:06
F-statistic
19:22
The F Ratio (the Variance Ratio)
19:23
S²bet = SSbet / dfbet
22:13
What is This?
22:14
How Many Means?
23:20
So What is the dfbet?
23:38
So What is SSbet?
24:15
S²w = SSw / dfw
26:05
What is This?
26:06
How Many Means?
27:20
So What is the dfw?
27:36
So What is SSw?
28:18
Chart of Independent Samples ANOVA
29:25
Chart of Independent Samples ANOVA
29:26
Example 1: Who Uploads More Photos: Unknown Ethnicity, Latino, Asian, Black, or White Facebook Users?
35:52
Hypotheses
35:53
Significance Level
39:40
Decision Stage
40:05
Calculate Samples' Statistic and p-Value
44:10
Reject or Fail to Reject H0
55:54
Example 2: ANOVA with Independent Samples
58:21
Repeated Measures ANOVA

1h 15m 13s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
The Limitations of t-tests
0:36
Who Uploads more Pictures and Which Photo-Type is Most Frequently Used on Facebook?
0:37
ANOVA (F-test) to the Rescue!
5:49
Omnibus Hypothesis
5:50
Analyze Variance
7:27
Independent Samples vs. Repeated Measures
9:12
Same Start
9:13
Independent Samples ANOVA
10:43
Repeated Measures ANOVA
12:00
Independent Samples ANOVA
16:00
Same Start: All the Variance Around Grand Mean
16:01
Independent Samples
16:23
Repeated Measures ANOVA
18:18
Same Start: All the Variance Around Grand Mean
18:19
Repeated Measures
18:33
Repeated Measures F-statistic
21:22
The F Ratio (The Variance Ratio)
21:23
S²bet = SSbet / dfbet
23:07
What is This?
23:08
How Many Means?
23:39
So What is the dfbet?
23:54
So What is SSbet?
24:32
S² resid = SS resid / df resid
25:46
What is This?
25:47
So What is SS resid?
26:44
So What is the df resid?
27:36
SS subj and df subj
28:11
What is This?
28:12
How Many Subject Means?
29:43
So What is df subj?
30:01
So What is SS subj?
30:09
SS total and df total
31:42
What is This?
31:43
What is the Total Number of Data Points?
32:02
So What is df total?
32:34
So What is SS total?
32:47
Chart of Repeated Measures ANOVA
33:19
Chart of Repeated Measures ANOVA: F and Between-samples Variability
33:20
Chart of Repeated Measures ANOVA: Total Variability, Within-subject (case) Variability, Residual Variability
35:50
Example 1: Which is More Prevalent on Facebook: Tagged, Uploaded, Mobile, or Profile Photos?
40:25
Hypotheses
40:26
Significance Level
41:46
Decision Stage
42:09
Calculate Samples' Statistic and p-Value
46:18
Reject or Fail to Reject H0
57:55
Example 2: Repeated Measures ANOVA
58:57
Example 3: What's the Problem with a Bunch of Tiny t-tests?
1:13:59
Section 12: Chi-square Test
Chi-Square Goodness-of-Fit Test

58m 23s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Where Does the Chi-Square Test Belong?
0:50
Where Does the Chi-Square Test Belong?
0:51
A New Twist on HT: Goodness-of-Fit
7:23
HT in General
7:24
Goodness-of-Fit HT
8:26
Hypotheses about Proportions
12:17
Null Hypothesis
12:18
Alternative Hypothesis
13:23
Example
14:38
Chi-Square Statistic
17:52
Chi-Square Statistic
17:53
Chi-Square Distributions
24:31
Chi-Square Distributions
24:32
Conditions for Chi-Square
28:58
Condition 1
28:59
Condition 2
30:20
Condition 3
30:32
Condition 4
31:47
Example 1: Chi-Square Goodness-of-Fit Test
32:23
Example 2: Chi-Square Goodness-of-Fit Test
44:34
Example 3: Which of These Statements Describe Properties of the Chi-Square Goodness-of-Fit Test?
56:06
Chi-Square Test of Homogeneity

51m 36s

Intro
0:00
Roadmap
0:09
Roadmap
0:10
Goodness-of-Fit vs. Homogeneity
1:13
Goodness-of-Fit HT
1:14
Homogeneity
2:00
Analogy
2:38
Hypotheses About Proportions
5:00
Null Hypothesis
5:01
Alternative Hypothesis
6:11
Example
6:33
Chi-Square Statistic
10:12
Same as Goodness-of-Fit Test
10:13
Set Up Data
12:28
Setting Up Data Example
12:29
Expected Frequency
16:53
Expected Frequency
16:54
Chi-Square Distributions & df
19:26
Chi-Square Distributions & df
19:27
Conditions for Test of Homogeneity
20:54
Condition 1
20:55
Condition 2
21:39
Condition 3
22:05
Condition 4
22:23
Example 1: Chi-Square Test of Homogeneity
22:52
Example 2: Chi-Square Test of Homogeneity
32:10
Section 13: Overview of Statistics
Overview of Statistics

18m 11s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
The Statistical Tests (HT) We've Covered
0:28
The Statistical Tests (HT) We've Covered
0:29
Organizing the Tests We've Covered…
1:08
One Sample: Continuous DV and Categorical DV
1:09
Two Samples: Continuous DV and Categorical DV
5:41
More Than Two Samples: Continuous DV and Categorical DV
8:21
The Following Data: OK Cupid
10:10
The Following Data: OK Cupid
10:11
Example 1: Weird-MySpace-Angle Profile Photo
10:38
Example 2: Geniuses
12:30
Example 3: Promiscuous iPhone Users
13:37
Example 4: Women, Aging, and Messaging
16:07
Lecture Comments (8)

0 answers

Post by Thuy Nguyen on December 2, 2016

Hi Professor Son, I thought we reject the null when it falls below our critical value.  But 7.97 is greater than 2.23.

Why did we reject the null?

0 answers

Post by Thuy Nguyen on December 2, 2016

Hello Professor Son, I don't understand why we didn't use the two-tail hypothesis test on the Example #1 (freezing water test).  When and why do we use the one-tail hypothesis test vs. the two-tail hypothesis test?

1 answer

Last reply by: Professor Son
Tue Oct 28, 2014 1:06 PM

Post by Temitayo Akinshilo on October 26, 2014

When doing the SDoM drawings I see that you switch a lot from percentage and decimal format, it gets confusing. Also I spent a lot on the book is it possible to see you use the t and/ or z table from it as opposed to excel.
Thanks

0 answers

Post by Christopher Hu on December 25, 2013

Good stuff

0 answers

Post by Jennifer DeMott on March 16, 2013

Love the excel stuff!!! Keep it in!!!! It has really helped me learn how to use excel and how to do calculations way faster. However, notes are one reason why I might not continue service; they are so time consuming to download individually(why not in one PDF?)not to mention they have so many repeating pages with pretty much same info on them and then after printing 32 pages of notes, all the slides are not there. No example three this time!!!! Great lectures by the way!

0 answers

Post by Najam ul hassan Awan on January 4, 2013

Way way way too much dependence on excel!
Made me abuse her very badly !!!

0 answers

Post by Charles Forth on May 31, 2012

How do you calculate the p-value for the t- statistic without excel?

Introduction to Hypothesis Testing

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Roadmap 0:06
    • Roadmap
  • Issues to Overcome in Inferential Statistics 1:35
    • Issues to Overcome in Inferential Statistics
    • What Happens When We Don't Know What the Population Looks Like?
    • How Do We Know whether a Sample is Sufficiently Unlikely?
  • Hypothesizing a Population 6:44
    • Hypothesizing a Population
    • Null Hypothesis
    • Alternative Hypothesis
  • Hypotheses 11:58
    • Hypotheses
  • Errors in Hypothesis Testing 14:22
    • Errors in Hypothesis Testing
  • Steps of Hypothesis Testing 21:15
    • Steps of Hypothesis Testing
  • Single Sample HT (When Sigma Available) 26:08
    • Example: Average Facebook Friends
    • Step 1
    • Step 2
    • Step 3
    • Step 4
  • Single Sample HT (When Sigma Not Available) 36:33
    • Example: Average Facebook Friends
    • Step 1: Hypothesis Testing
    • Step 2: Significance Level
    • Step 3: Decision Stage
    • Step 4: Sample
  • Sigma and p-value 45:04
    • Sigma and p-value
    • One-Tailed vs. Two-Tailed Hypotheses
  • Example 1: Hypothesis Testing 48:37
  • Example 2: Heights of Women in the US 57:43
  • Example 3: Select the Best Way to Complete This Sentence 1:03:23

Transcription: Introduction to Hypothesis Testing

Hi, and welcome to www.educator.com. We are going to be talking about hypothesis testing today.

The first thing we need to do is situate ourselves: where does hypothesis testing fit in with all of inferential statistics? We are going to talk about how to create the hypothesis that we are going to test, and that hypothesis is going to be about a population. When we say "about a population," we mean about population parameters. There are actually two parts to any hypothesis that we test: the null hypothesis and the alternative hypothesis. We are going to talk about how they fit together, and we are going to talk about potential errors in hypothesis testing, because it is good to know about them going in.

Finally, we are going to end with the steps of hypothesis testing, both when sigma (the population standard deviation) is given and when it is not. If you have just refreshed yourself on the confidence interval lesson, you can probably guess that when sigma is given we will use z-distributions (normal distributions), and that when sigma is not given and we have to estimate the population standard deviation from the sample using s, we will use t-distributions. In order to use the t-distribution, we need to figure out the degrees of freedom.

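To make the z-versus-t choice concrete, here is a minimal sketch in Python (NumPy and SciPy are my assumption here; the lecture itself does these calculations in Excel). The data and the hypothesized mean are made up for illustration, not taken from the lecture.

    import numpy as np
    from scipy import stats

    sample = np.array([130, 142, 155, 118, 160, 149, 137, 151])  # hypothetical sample
    mu0 = 130                     # hypothesized population mean (the null hypothesis)
    n = sample.size
    x_bar = sample.mean()

    # Case 1: sigma (the population SD) is known, so we use the z / normal distribution.
    sigma = 20                                   # assumed known population SD
    z = (x_bar - mu0) / (sigma / np.sqrt(n))     # standard errors between x_bar and mu0
    p_z = 2 * stats.norm.sf(abs(z))              # two-tailed p-value from the normal

    # Case 2: sigma is unknown, so we estimate it with s and use the t-distribution,
    # which needs degrees of freedom (n - 1 for a single sample).
    s = sample.std(ddof=1)                       # sample SD with n - 1 in the denominator
    t = (x_bar - mu0) / (s / np.sqrt(n))
    p_t = 2 * stats.t.sf(abs(t), df=n - 1)

    print(z, p_z, t, p_t)

The only difference between the two cases is which standard deviation goes into the standard error and which distribution converts the resulting score into a p-value.
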
Let us go back and situate ourselves within all of inferential statistics. Basically, the idea of inferential statistics is that we use some known population to figure out a sampling distribution. The one we are using a lot is the SDoM, and we are going to use another one later. Once we have a sampling distribution, we want to compare a sample from an unknown population to that sampling distribution. If the sampling distribution says the sample is very likely, then we might say that maybe this unknown population is very similar to the known population. But if the sampling distribution tells us the sample was very unlikely, then we can rule out the known population as a potential candidate for this unknown population.

In doing all of this, two issues come up in inferential statistics. What happens when we do not know what the population looks like at all, and we want to figure out where the population mean, or other parameters of the population, might be? In that case we use confidence intervals, and with confidence intervals we try to figure out where mu is from x bar. Another way of thinking about it is that we try to figure out something about the population from the sample information, because the sample information is what we actually have. The other approach we could take uses the same idea but asks: how do we decide when a sample is unlikely? Where do we draw the line, and when do we decide that a sample out on this side is weird? In order to do that, we now have to learn about hypothesis testing.

The goal of hypothesis testing is slightly different from that of confidence intervals, yet related; it is the flip side of the same coin. Basically, you are going to try to figure out whether your x bar is unlikely given a hypothetical population. In that case, what we are doing is setting up a population: the population is held fixed, and we compare the sample to it. Here is our sample, and here is our set standard. With confidence intervals, the population is the moving target and the sample is what we use to estimate that target; with hypothesis testing, the population is already set, and we compare the sample to that standard. In this way you need both confidence intervals and hypothesis testing to get the full story.

You might also hear another name for hypothesis testing: a test of significance. A lot of students misinterpret that as a test of importance. That is the modern way the word significance is used, but it is not what we are talking about here. When we call this a test of significance, we are using the meaning of significance from the early 20th century, when this test was invented. Back then, significant just meant prominent, or standing out. I like to think of it as a test of weirdness: how much does this sample stand out? Is it prominent and different, or is it very, very similar? Those are the ways you can think about it. Do not think of it as a test of importance.

Now that we know why we need hypothesis testing, how do we hypothesize a population? How do we make up a population? Do we have to make up all the individual numbers of the population? Here is the thing: we can assume things about population parameters and test those assumptions. We do not have to simulate every single member of the population; we can just make some assumptions about parameters. To set up a hypothetical population, you set up a parameter. For instance, you say mu is equal to something. That is how you set up a population, and then you check whether your sample is likely to have come from such a population. In doing this, we need to figure out how to hypothesize rigorously, so that we get as much bang for our buck from our hypothesis.

To do that, a hypothesis has two parts, and this is going to make our hypothesis better. The first part is what we call the null hypothesis; null means zero, or not important. The null hypothesis in this case describes your hypothetical population. We write the null hypothesis as H sub 0, or H-naught. We might say mu = 0; now we have created a null hypothesis. I just made up the 0 here; there are better ways of choosing it, and we will talk about those later. We could also write hypotheses in terms of the standard deviation or other parameters, but frequently you will see the mean being the hypothesized parameter of the population.

The alternative hypothesis answers the question: what do we learn if the null is not true? If we rule the null out, what have we learned? In that way, the two parts make up the full hypothesis: if we find one thing, we learn this; if we do not find it, we learn this other thing. What we learn if the null is not true is, at the very least, that mu does not equal 0. This is called the alternative hypothesis, and it helps us figure out at least something when we cannot establish the null. If we do not find the null to be true, at least we find the alternative to be true; if the null is not true, the alternative will always be true. These two hypotheses together are more powerful than having just one hypothesis alone.

We will talk a little bit about why, and it goes back to that idea of a test of significance. Hypothesis testing, the test of significance, is a test of weirdness: it tests how weird the x bar is. The question it can answer is: is the x bar weird? Is it different from the population? But can it tell us whether x bar is very similar to the population? No; that is not what this number gives you. It only tells you how weird the sample is, not how similar it is. Those are not flip sides of the same coin, and because of that, in all of hypothesis testing we find out the most when we reject the null hypothesis. That is when we learn the most. It may not seem like we are finding out a lot, because we have only ruled out 0 and there are an infinite number of values of mu left to test, but in hypothesis testing what you want to do is reject the null, rather than accept it or fail to reject it. Because the procedure is set up as a test of weirdness, that is the only thing it can establish. It would be nice if we could find out more than that, but that is the limitation of hypothesis testing. It is a limitation that is also a fact of life: even with its limitations, hypothesis testing is still a powerful tool. But it is good to keep this limitation in mind.

A little bit more about these two hypotheses. Sometimes you will see the alternative hypothesis written as H sub 1. The null and the alternative must be mutually exclusive: if one is true, the other cannot be true, and if the other is true, the first cannot be true. They must also, together, account for every possibility. A pair like mu = 1 and mu = 2 does not work, because if one of them turns out to be false, the other does not have to be true; it could be true, but it does not have to be. Whereas with mu = 1 and mu not equal to 1, if you rule out one, you absolutely know that the other has to be true. Together, the two hypotheses must include all possible values of the parameter. You can think of a parameter such as mu on a number line, and the two hypotheses need to cover the entire number line. You might have a null hypothesis like mu > 0, but then your alternative hypothesis has to be mu ≤ 0. You color in one region for the null and all of the rest for the alternative, so that together they cover the entire parameter space.

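As a compact restatement of these two requirements, the lecture's example pairs can be written out as follows (LaTeX notation; this adds no new claims, it only formalizes the pairs already given):

    % Two-tailed pair: mutually exclusive, and together they cover every value of mu
    H_0\colon \mu = 0 \qquad \text{vs.} \qquad H_a\colon \mu \neq 0

    % One-tailed pair: again, every value of mu falls under exactly one hypothesis
    H_0\colon \mu > 0 \qquad \text{vs.} \qquad H_a\colon \mu \le 0

In each pair the two statements cannot both be true, and every possible value of mu satisfies exactly one of them, so ruling out one always tells you the other holds.
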
If both of these conditions hold, here is what you get: one of the two hypotheses must represent the true condition of the population, so you will find out something that is true about the population. And, as we said before, typically in research your goal is to reject the null and find support for the alternative hypothesis. You cannot actually prove the null hypothesis, but you can reject it. The whole reason is that hypothesis testing is a test of significance, a test of weirdness. It can only tell me whether this x bar stands out a lot from the population or not; it cannot tell me that the sample is probably similar to the population. It cannot tell me that part.

Let us talk about some errors that we could potentially make in hypothesis testing.0862

There are some foibles, you need to watch out for.0868

Well first, it helps to imagine that there are two potential realities and we do not know which one of them is true.0871

One is that the null hypothesis is true.0883

It is actually true.0887

We do not know yet, but it is true.0888

Other possible reality is that the null hypothesis is false.0892

Your sample did not come from the population.0898

Those are your two possible realities but only one can be true at any given time.0901

You cannot have both the null population being true and false at the same time.0907

You got to have one or the other.0915

These two boxes, this one and this one have to add up to 100%, but these two boxes , this one and this one have to add up to 1.0916

That is because we have a 100% possibility of this being true and 100% possibility of this being true.0934

If this is true then this is not true.0942

Given that this is reality but we do not know reality, what is the deal?0944

How do we put that together with hypothesis testing?0955

When we do have hypothesis testing we have 1 of 2 outcomes.0959

We could either reject the null successfully, that is what we wanted to do.0964

We either reject the null or we fail to reject the null.0968

We do not call these accepting the alternative or accepting the null.0972

We call it failing to reject because that is how much we wish we could have rejected the null.0980

We failed to reject the null.0987

Let us think about these two decisions in conjunction with reality.0989

Here is the thing: when we reject the null hypothesis, we are saying this sample did not come from the population.0997

If it really did not come from that population, we would be correct here.1006

This would be a correct decision.1011

If this is our decision and this is indeed the world we live in, this is a correct decision.1014

If we fail to reject the null, however, and the null is actually true, so we should not have rejected it,1021

then this also represents a correct decision.1034

Good job not rejecting the null, because it was right all along.1039

These two are ways that we could be correct.1044

That leaves us two ways that we could be incorrect.1048

One way is this: we could reject the null, but the null is actually true.1051

We said that it is false, but it is actually true.1063

This is an incorrect decision.1068

We call this a false alarm because we are rejecting the null here.1074

It is a false alarm; we should not have rejected that null.1084

The probability of that false alarm is represented by the term alpha.1088

On the other hand, there is another way that we could be wrong and that way is this.1097

We could fail to reject the null.1107

We could say the null may not be wrong.1109

We fail to reject it, but the null actually is wrong.1114

This is also an incorrect decision.1121

This is not called a false alarm; instead it is called a miss.1127

This is going to be called the beta rate.1134

Obviously alpha and beta are each probabilities greater than 0 but less than 1.1143

What we want to do in hypothesis testing is reduce our chance of errors.1150

We can also figure out our probability of making the different kinds of correct decisions.1157

We know that this is one version of the world and its column should add up to 100%, so consider the probability of failing to reject when we should have kept the null around.1167

This probability is 1 - alpha.1183

This is what we call a correct failure.1188

It sounds odd to say it is good that you have failed.1198

You failed to reject it and you should have failed to reject it.1203

It is like you failed to reject a date and you know that date was really good.1208

He is a good guy so you should have failed to reject him.1216

On the other hand, this is the other possible version of what could be true in the world.1225

This should add up to 100%, so this should be 1 – beta.1232

That is our rate of correct decision where we successfully rejected the null and it is indeed false.1238

In dating, it might be rejecting somebody who comes up to you, and good job, you should have rejected them.1245

They are a total loser.1253

That is what we call a hit.1255

It is like in Battleship when you hit a ship.1258

These are the hit rate, the miss rate, the false alarm rate, and the correct failure rate.1263
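
Putting those four outcomes together in one little table, with reality in the columns and our decision in the rows:

                        Null is true                   Null is false
  Reject the null       false alarm (alpha)            hit (1 - beta)
  Fail to reject        correct failure (1 - alpha)    miss (beta)

Each column adds up to 100%.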

Let us talk about the steps of hypothesis testing.1272

Well there are going to be 5 steps.1281

The first step just starts out with setting up your hypothetical population.1284

For this hypothetical population you need to create both a null hypothesis and an alternative hypothesis, and then the second step is to pick a significance level.1290

You can think of the word significant as stand-out-ness, how much the sample stands out.1304

How much does it have to stand out?1310

When it stands out a lot you have a very low false alarm rate.1313

If your x bar is way out there, then you have a small chance of false alarming.1318

You are saying this really does not look like it belongs in the population because it is so far out here.1326

And that is where your false alarm rate is low.1335

You want to set a low one.1338

If you want to be more conservative, you want to set an even lower false alarm rate.1340

For instance, alpha = .01 would be an even lower rate of false alarm.1344

Then you want to set up the decision stage.1351

So far we have not done anything except set things up, and we are still setting things up.1355

We set up the decision stage, and what you want to do is draw the SDOM, the sampling distribution of the mean.1361

We have the hypothetical population and we create a sampling distribution so that we can take our sample1368

and compare it to that sampling distribution.1375

You draw the SDOM and you identify the critical limits.1378

Here is my SDOM and you want to identify the extreme regions where you say if your x bar1383

is somewhere out here then you want to reject the null.1396

You want to say it is very, very unlikely to have come from this null population.1402

Then choose a test statistic because the test statistic will tell you how far out from the mean it is in terms of standard error.1407

How many jumps out are you?1419

This step will be called choosing a critical test statistic.1421

You are asking: what are the extreme boundaries such that if x bar is outside those boundaries we reject the null?1429

If it is inside the boundaries we do not reject.1440

And then we use the sample.1444

This is the first time we are doing anything with the sample.1447

We use the sample and the SDOM from here to compute the sample test statistic and p value.1450

And the p value is going to tell you, given that your x bar is out here, how much of that curve is covered out there beyond it.1458

What is the probability of false alarming there at that particular value?1468

And then you compare the sample to this SDOM and decide whether to reject the null or not.1476

One word about p value versus alpha.1487

The p value is going to be the probability of getting a sample x bar at least this extreme if it belongs to the null population.1494

What is the probability that this value belongs in here?1513

Alpha is what we call the critical limit.1519

This is what we are able to tolerate; we just set it.1526

Alpha is often decided just by the scientific community.1532

In fact alpha is often set to something like .05 or .01 because that is commonly accepted in scientific communities.1536

We call that setting it by tradition or convention.1546

It is not that we figured out the alpha level.1550

On the other hand, we figure out the p value given our sample x bar.1553

And what we want is for the p value to be lower than the critical limit.1559
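
To put those pieces in one place before the examples: the test statistic is z = (x bar - mu) ÷ (sigma ÷ √n) when sigma is known, or t = (x bar - mu) ÷ (s ÷ √n) with n - 1 degrees of freedom when sigma has to be estimated by s, and the decision rule is to reject the null when the p value is less than alpha, which is the same thing as the sample statistic landing beyond the critical value.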

Let us go through some examples.1566

Here is an example of single sample hypothesis testing, also called t tests of 1 mean or single mean t test.1572

This is also another term for it.1594

Let us talk about this when sigma is available.1597

The population standard deviation has been given to us.1601

Here it says that the average Facebook user has 230 friends, with a sigma of 950; a random sample of college students, n = 239, showed that the sample mean was 393 friends.1605

Are our college students like the average Facebook user?1620

Let us try to think about this by using hypothesis testing.1624

The first thing is perhaps we should set up the standard population as the average Facebook user,1631

the real population of all Facebook users.1643

Our null hypothesis might be something like mu= 230.1648

The null hypothesis is that our college student sample is just like everybody else.1655

The alternative hypothesis is that our sample is not similar to that population.1667

Let us set the significance level.1678

Here we could just use alpha = .05 by convention.1683

That is traditional, so we will use it here too.1693

Let us set the decision stage.1698

Here we want to start off by drawing the SDOM and I like to label for myself that it is the SDOM1701

just so that I do not get confused and mistake it for the population or something like that.1711

We want to draw a critical limit.1717

If this is the amount of false alarm that we are willing to tolerate, then we might say everything out here we reject.1721

Everything out here we reject.1730

That would mean that everything in here is 95% and out here these two regions together add up to 5%.1734

Even though we are going to reject it, there is still some probability that the sample belongs to the population.1745

But we are going to reject the null anyway.1751

We need to split that 5% between the two sides, so this would be 2.5% and this would also be 2.5%.1754

That is the error that we are going to tolerate.1768

I will color in my rejection regions right now, so that means if my x bar is out here in the extremes I am going to reject my null hypothesis.1771

And because we know that this SDOM comes from the population, that is how we are creating this SDOM.1783

We know that the mu of SDOM is exactly equal to the mu of the population so that will be 230.1792

Mu sub x bar = 230.1801

We can also figure out sigma sub x bar, and that would just be sigma ÷ √n, which is 950 ÷ √239.1805

You could just pull out a calculator to do this.1819

I am just going to use a blank Excel file, and 950 ÷ √239 = 61.5.1823

That is the standard error of this SDOM.1839

It is nice to have that, but it would also be nice to know: what is the z score out here?1848

We use z scores because we know sigma.1856

What is the z score out here?1861

Actually, I had you memorize it when we previously talked about confidence intervals, so we know that it is 1.96 and -1.96.1864

If you wanted to you could also figure it out by using either the table in the back of your book or Excel,1876

so we could put in NORMSINV because we have the probability.1885

I want the two-tailed probability of .05, but NORMSINV actually takes a one-tailed probability.1890

The one-tailed probability is going to be .025, way down here.1902

This little bottom part down here covers .025 of the curve, and Excel is telling me that the z score right there is about -1.96.1910

Now that we have all of that settled, we could start tinkering with our actual sample.1924

Let me draw some space here.1933

Let us talk about our sample.1938

When we talk about our sample, we should figure out how far away our sample mean is.1942

We do not just want to know how far away it is in terms of friends; we want to know1955

how far away it is in terms of standard errors, because only that will tell us what proportion of the curve is colored in.1962

Even if we find out the actual raw distance away, 163 friends, we do not know where that is in relation to this curve.1971

It would be nice if we could find the z score of 393; then we will know where it is in relation to this curve.1983

That would be 393 – 230 so how far is it away from 230, all divided by the standard error 61.51990

because that will give me how many standard errors away we are.2002

Let me just calculate that.2007

That would be 393 - 230 and I need parentheses because I need it to do the subtraction before the division and that gives me 2.65.2011

My z score is 2.65.2032

Here this is maybe 1 z score away, this is almost 2 z scores away, and let us say this is 3 z scores away.2036

I know that my 393 is somewhere around here because it is around 2.65.2049

This area is very tiny, so I need to find the p value here.2061

What is the p value here?2070

What is the probability that x bar is greater than or equal to 393?2072

That equals the probability that z is greater than or equal to 2.65.2091

Not only that but remember we have a two tailed hypothesis.2100

We are interested in either being greater than or less than the mean.2106

We actually have to find this thing out and multiply it by 2.2112

What you can do is look this up in the back of your book and multiply it by 2, or Excel will actually calculate it for you,2117

like you could put in NORMSDIST with the negative z score, because NORMSDIST gives me the area going from the negative side up to that value.2128

I am going to color this part first.2143

I put in -2.65 and it should be a very tiny number, and it is .004.2144

That is a tiny number and then we take that one side and we multiply it by 2 to give us our p value.2153

What we are really doing is coloring this space beyond 2.65 and also taking -2.65,2160

coloring that space, and adding those two areas together.2179

That will give us .008.2183
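
If you want to double-check these Excel steps in code, here is a minimal Python sketch of the same z test using scipy.stats (the library choice is mine, not the lecture's; the numbers are the ones given in the example):

  from math import sqrt
  from scipy.stats import norm

  mu, sigma, n, x_bar = 230, 950, 239, 393   # population mean, population sigma, sample size, sample mean
  se = sigma / sqrt(n)                       # standard error of the SDOM, about 61.5
  z = (x_bar - mu) / se                      # sample z score, about 2.65
  p = 2 * norm.cdf(-abs(z))                  # two-tailed p value, about .008
  print(se, z, p)                            # p < .05, so we would reject the null

The multiplication by 2 is the same doubling of the one-tailed area described above.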

What about a single sample hypothesis test when sigma is not available?2188

Well, this is the exact same problem; in fact I have crossed sigma out so you can no longer use it.2201

It is no longer available to you.2208

Here what we have to do is estimate sigma and use s instead of sigma.2212

Let us go ahead and start off the hypothesis testing.2219

Our null hypothesis is mu = 230, that our sample of college students is just like everybody else.2222

Our alternative is that they are different from everybody else.2233

Different in some way; they either have more friends or fewer friends.2239

We also need to pick a significance level.2244

How extreme does this x bar have to be?2248

We are going to pick alpha = .05 just by convention; we do not figure it out or anything.2255

And then we need to set our decision stage.2260

Here we want to start off by drawing our SDOM; it helps to keep in mind that this is a bunch of means, a bunch of x bars.2264

We can just use this information because this is our known population.2276

We are going to use that information to figure out our SDOM.2284

Here we run into a problem: how can we figure out standard error?2288

Well, we cannot figure out sigma sub x bar but we can actually figure out s sub x bar.2294

That is the standard error using s instead of sigma.2302

That will be s ÷ √n.2307

We have s for our sample, the standard deviation of our sample, which is 447, divided by √239.2316

And I will just pull out my Excel in order to calculate this.2326

447 ÷ √239 and I get 28.9.2346

I am actually going to draw in my rejection regions; anything more extreme is going to be rejected.2356

Fail to reject in the middle, and this rejection region is .025 and this rejection region is .025 because2375

I need to split that significance level in two.2389

What we want to do here is figure out the t statistics at these borders.2393

How many standard errors out are these borders?2404

What is our critical t?2408

That would be the t values here.2410

These are our raw values in terms of friends, but we want to know them in terms of standard errors.2413

Here are our t values; we cannot just use 1.96 because that would be for the z distribution.2418

We need a t distribution and in order to find a t distribution we need degrees of freedom.2426

The degrees of freedom is n-1 and that is 238 because 239 – 1.2434

You can either look this up in the back of your book or I am going to look this up on Excel.2443

Here I am going to use TINV, and I put in my two-tailed probability .05 and my degrees of freedom, which is 238.2451

And I get 1.97.2465

1.97 and -1.97, because t distributions, whatever other problems they have, are perfectly symmetrical.2470

Those are our critical t values.2485

Those are the boundary t values.2488

Now that we have all of that, we can start thinking about our sample.2491

Let us think about our sample's t and p value.2499

The sample t would be the distance that our sample mean is away from the hypothesized mean, divided by the standard error, because we want to know how many standard errors away we are.2505

That is (393 - 230) ÷ the standard error, 28.9.2523

I will put that into Excel: (393 - 230) ÷ 28.9 = 5.64.2532

Let us find the p value there.2546

We know that our t value is far out here; if this is about 2 and this is 4, then 5.6 is way out there.2552

It is way out here.2560

Imagine this going all the way out here.2562

That is where x bar landed.2565

Already we know that it is pretty far out but let us find the precise p value there.2569

In order to find the p value we want to use TDIST, because that is going to give us the probability.2577

We put in the x and that is Excel's word for t.2583

When you see x here in TDIST, just put in your t value; note that it only accepts positive t values.2588

I will just point to this one, then our degrees of freedom, which is 238, and how many tails?2600

We have a two tailed hypothesis.2609

We get 4.8 × 10^-8, so that would be our p value.2612

Our probability of getting a t that is greater than or equal to 5.64, or less than or equal to -5.64 because it is two-tailed, equals 4.8 × 10^-8.2624

Imagine a decimal point, then seven zeroes, and then 48; that is a pretty tiny number.2658

This number is so small that Excel cannot even show you all the decimal places.2669

It is super close to 0 but not 0.2677

This is our p value; is the p value less than .05?2680

Indeed it is.2686

What do we do?2688

We reject the null hypothesis.2691

This is what we do when sigma is not available.2695
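
Here is the matching Python sketch for the sigma-unavailable version, again using scipy.stats rather than the lecture's TINV and TDIST (the summary numbers are the ones computed above):

  from math import sqrt
  from scipy.stats import t

  mu, s, n, x_bar = 230, 447, 239, 393   # hypothesized mean, sample sd, sample size, sample mean
  se = s / sqrt(n)                       # estimated standard error, about 28.9
  df = n - 1                             # 238 degrees of freedom
  t_stat = (x_bar - mu) / se             # sample t, about 5.64
  crit = t.ppf(1 - 0.05 / 2, df)         # two-tailed critical t, about 1.97
  p = 2 * t.sf(abs(t_stat), df)          # two-tailed p value, about 4.8e-08
  print(se, t_stat, crit, p)             # p < .05, so reject the null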

Just to recap about alpha versus p value.2702

The p value is the probability of seeing that sample t, or an even more extreme statistic, given that the null hypothesis is true.2709

And we say extreme because it could be way bigger or way smaller, on either side.2720

Alpha gives you the level of significance.2729

It is the level of extremeness that you have to reach in order to reject your null.2733

This is the set standard.2739

And this is the thing that you are going to compare to that set standard.2742

I want to talk briefly about one versus two-tailed hypotheses.2751

When we talk about a one-tailed hypothesis, your null might be something like mu is greater than or equal to 0.2757

And your alternative will be mu is less than 0.2768

If that is the case and your set alpha level is .05 then here is what you would do in your SDOM.2777

You will only use one side of it because you are not interested in whether your x bar values are way up here.2786

You only care if your x bar is way smaller than your population mean.2798

In this case, you might set this up as your rejection zone, and notice that it is only on one side because the hypothesis is one-tailed.2805

That probability will be .05 and this fail-to-reject side will be .95.2817

This is a one tailed hypothesis.2830

Frequently we will be dealing with two tailed hypotheses.2833

In that case it might be that you do not really care about direction.2838

We do not really care whether it is way smaller or way bigger than what we expected.2845

We just care if it is extreme in some way, different in some way.2854

We do not really care which way, and that would be mu = 0, with the alternative that mu does not equal 0.2858

If we had something like alpha = .05 with a two-tailed hypothesis, then we would split up2868

that rejection region into the two tails, so that will be .025 and .025.2879

We reject out here, we reject out here, but inside of these boundaries we fail to reject, and this middle part is 95%.2889

Whatever p value you find, we want to compare it to the set alpha level.2906
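
To see how the choice of tails moves the critical boundary, here is a tiny Python sketch with scipy.stats (the df of 238 is simply borrowed from the earlier Facebook example as an illustration):

  from scipy.stats import t

  df, alpha = 238, 0.05
  one_tailed_crit = t.ppf(alpha, df)          # about -1.65; the whole .05 sits in one tail
  two_tailed_low = t.ppf(alpha / 2, df)       # about -1.97; only .025 in each tail
  two_tailed_high = t.ppf(1 - alpha / 2, df)  # about +1.97
  print(one_tailed_crit, two_tailed_low, two_tailed_high)

With one tail the boundary sits closer to the center, so a one-tailed test rejects more easily in its chosen direction, but it cannot reject at all in the other direction.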

Let us talk about some examples.2915

Your chemistry textbook says that if you dissolve table salt in water, the freezing point will be lower than it is for pure water, 32°F.2920

To test this theory, your school does an experiment where 15 teams of students dissolve salt in water and put the solutions in the freezer with a digital thermometer,2931

periodically checking to observe the temperature at which the solution freezes.2940

The data is shown in the download below.2945

What can you conclude from this data?2948

If you look at your download and go to example 1, here are all my freezing temperatures that each of my teams got2951

and I think there are only 14 teams here.2963

Let us just go with n = 14.2967

What should we do first?2969

Just to give you an example of what it is like to do one tailed hypothesis testing, let us have a one tailed test here.2973

Because it does say that putting the salt in the water should make the freezing point lower,2982

that automatically gives us a direction that we expect the freezing point to go in.2990

What would our null or default hypothesis be?2999

The default hypothesis would be that it is not different from pure water.3004

They are the same.3010

It might be something like mu=32°f.3011

But do we care if our samples are all greater than 32°?3019

Maybe the freezing point is higher.3028

Do we really care about that?3032

No not really.3035

The null hypothesis really covers everything higher than or equal to 32°; we do not particularly care about that region.3037

What we eventually want to know is whether it is lower, weird in this low direction.3051

The alternative hypothesis is that it is weird in a particular direction: that it is too low, way lower than 32°.3058

Our alpha is going to be .05, but let us make it clear that it is one-tailed.3071

Usually they do not say anything but most people assume two tails as the default.3079

Let us say one tailed.3086

Let us draw the SDOM for the decision stage, and here is the idea.3088

The default is that our samples come from a population where 32° is the mean, so 32° is also the mean of this SDOM, but3096

we want to know: is it weird and a lot lower than that?3113

Is it consistently lower than that?3126

That is our rejection region, and it is going to be .05, because our fail-to-reject region is going to be .95.3128

Now that we have that, it would be useful to know the t statistic at that boundary.3144

This is raw in terms of degrees Fahrenheit.3150

We also want to know the t statistic.3156

The mean here sits at t = 0, but what is the t statistic at that boundary?3159

In order to know that we need to figure out a couple of things.3164

I will start with step 3; one of the things I want to know is the t statistic there.3168

In order to find that t statistic we need to know the degrees of freedom for the sample, and that is just a count of how many cases we have in our sample, minus 1.3179

That is 13 degrees of freedom.3193

What is the t value there?3196

We have the probabilities and we want to know the critical t or boundary t.3199

In order to know that we need to use TINV, and here it asks for a two-tailed probability.3212

We have a one-tailed hypothesis, so we have to turn that into a two-tailed probability.3221

If this were two-tailed it would be .1, and the degrees of freedom is 13.3228

It will only give you the positive side, but we can just make it negative because the distribution is perfectly symmetrical.3237

This critical t is -1.77.3248

Okay, now that we have that, we can start on step 4.3252

Step 4 deals with the sample t.3259

In order to find the sample t we need to find the mean of the sample, using AVERAGE, and we also need to know the standard error.3264

In order to find standard error what we need is s ÷ √n.3289

That is not an Excel formula, it is just for me, so that I know I need s.3299

What is my s?3304

That would just be STDEV of all of these.3308

Once I have that, then I can calculate standard error, s ÷ √n, where n is 14.3314

We have a standard error, we have a mean, now we can find our sample t3327

and that is going to be the mean of the sample - the hypothesized mu 32 ÷ the standard error.3334

I get -3.7645.3347

We know that this is much more extreme on the negative side than -1.77.3354

We also need to find the p value.3363

What is the p value there?3366

We need to use TDIST because we do not yet know the probability there.3370

We put in our t value, but remember Excel only accepts positive ones, so I turn the minus into a plus.3376

Then the degrees of freedom, which is 13, and how many tails?3390

Just one.3398

That is going to be a p value of .001.3399

Since I have run out of room I will just write the p value here: p = .001.3407

Is that p value smaller than this alpha?3416

Yes, indeed.3420

What can we say?3421

We can reject the null.3424

What can I conclude from this data?3426

I can say that this data shows that our sample is very unlikely to have come from the same population as pure water.3430

The freezing point of water will have some variation.3445

It will have some probability of not being exactly 32°, but this deviation on the negative side is much greater than would be expected by chance.3449
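
A quick way to check this one-tailed example outside of Excel, using the numbers reported above (a Python sketch with scipy.stats; the raw temperatures stay in the example download):

  from scipy.stats import t

  df = 13                      # 14 teams minus 1
  crit = t.ppf(0.05, df)       # one-tailed critical t, about -1.77
  p = t.cdf(-3.7645, df)       # one-tailed p value for the sample t, about .001
  print(crit, p)               # p < .05, so reject the null

If the 14 temperatures were loaded into a list called temps, scipy.stats.ttest_1samp(temps, 32, alternative='less') would give the same t and p in one call (the alternative argument requires a reasonably recent version of scipy).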

Let us see.3461

Example 2, the heights of women in the United States are approximately normally distributed with a mean of 64.8 in.3465

The heights of 11 players on a recent roster of the WNBA team are these in inches.3472

Is there sufficient evidence to say that this sample is so much taller than the population that3479

this difference cannot reasonably be attributed to chance alone?3485

Let us do some hypothesis testing.3489

Here our null hypothesis is that our sample is just like regular women.3493

The mean is 64.8.3500

I am going to use a two-tailed alternative here: that they are not like this population.3504

We can probably guess by using common sense that they are on average taller, but we will do a two-tailed test.3514

It is actually more conservative.3522

It is safer to go with that two tailed test.3525

Here we will make alpha=.05 and it will be two-tailed.3527

Let us draw the SDOM here.3536

Here we might draw these boundaries, and because it is two-tailed this is .025, this is .025, and here in the middle it is .95.3542

All together it adds up to 1.3565

Now that we have this, can we figure out the critical t?3568

In order to figure out the t, we need to have the degrees of freedom.3575

If you go to the download and go to example 2, I have listed this data here for you and we can actually find the degrees of freedom here.3579

Here I put step 3 so that we know where we are.3590

In step 3, we need degrees of freedom, and that would be the count of all of these values minus 1.3596

We have 11 players, so 10 degrees of freedom.3606

Let us find the critical t.3610

The critical t comes from TINV, because we know the two-tailed probability .05 and the degrees of freedom.3613

That gives us the positive critical t.3626

That is 2.23, and -2.23; those are our critical boundaries, and for anything outside of them we reject the null.3629

Let us go to step 4.3640

In step 4 we can start dealing with the sample.3643

Let us figure out the sample t; in order to do that we need (x bar - mu) ÷ standard error.3646

We need to know the sample's average, x bar.3656

We also need to know mu and we also need to know standard error.3663

Standard error is going to be s ÷ √n.3669

I need to write these things down because it helps me figure out what we need.3674

It is like a shopping list.3679

Here I need s.3680

Now that I have written all these things down I can just calculate them.3684

I need the average, and mu, which I already know from the problem is 64.8.3688

I need to get my standard error, but before I do that I need to get s, the standard deviation,3709

and once I have that standard deviation I can take it and divide by the square root of n, which is 11.3718

That is my standard error, and once I have all of these ingredients I can assemble my t, which is (x bar - mu) ÷ standard error.3730

I get 7.97 and that is way higher than 2.23.3746

I am pretty sure I can do step 5: reject the null.3755

If I go back to my problem, let me see: is there sufficient evidence to say that this sample is so much taller than the population3763

that this difference cannot reasonably be attributed to chance alone?3776

I should say yes, because when you are way out here, the probability that you belong to this chance distribution is so small3780

that it is reasonable for us to say that the sample came from a different population.3793
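
The same kind of check works here with 10 degrees of freedom (a Python sketch with scipy.stats; the individual heights themselves stay in the example download):

  from scipy.stats import t

  df = 10                          # 11 players minus 1
  crit = t.ppf(1 - 0.05 / 2, df)   # two-tailed critical t, about 2.23
  t_stat = 7.97                    # sample t computed in the spreadsheet
  p = 2 * t.sf(t_stat, df)         # two-tailed p value, on the order of 1e-05
  print(crit, p)                   # far below .05, so reject the null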

Final example, select the best way to complete the sentence.3802

The first option, the probability that the null hypothesis is true: that is related to the false alarm rate.3810

The false alarm does involve the null hypothesis being true, but the p value is not just that.3824

It is not just the probability that the null hypothesis is true; it is conditioned on you having a particular sample, so this option seems to leave out some information.3835

It is not quite complete, but it is not entirely false.3850

It is just that it does not have the whole truth.3856

It does not include the condition.3859

The next option is: given that you have this particular sample value, the probability that the null hypothesis is false. That is not right.3861

You can see that even if you just remember the table from before.3870

Remember, this column was where the null is true.3873

Alpha is the one we set, but the p values are also the ones in that column, where the null is true, not where it is false.3877

So that option is just not right.3885

The probability that an alternative hypothesis is true.3889

Actually, we have not talked about that at all.3895

We only talked about having a very low possibility that the null hypothesis is true,3898

but we have not talked about increasing the probability that the alternative hypothesis is true.3905

Besides, why would you reject the null when you have a really small p value, if the p value meant3910

a small probability that the alternative hypothesis is true? That does not make sense.3915

What about the probability of seeing a sample t as extreme as the one we got, given that the null hypothesis is true?3921

This is our entire story packed into one sentence.3934

It is not just that the null hypothesis is true; it is also that you have a certain sample, and that has to be part of the definition of the p value.3938

The idea is that we have this t value, it is pretty extreme, and the null hypothesis is true.3956

That is given.3967

Given that the null hypothesis is true, what is the probability of seeing such an extreme t value?3968

It is very small.3979

We are trying to lower our false alarm rate.3981

That is the end of one sample hypothesis testing.3986
