Dr. Ji Son


Section 1: Introduction
Descriptive Statistics vs. Inferential Statistics

25m 31s

Intro
0:00
0:10
0:11
Statistics
0:35
Statistics
0:36
Let's Think About High School Science
1:12
Measure and Find Patterns (Mathematical Formula)
1:13
Statistics = Math of Distributions
4:58
Distributions
4:59
Problematic… but also GREAT
5:58
Statistics
7:33
How is It Different from Other Specializations in Mathematics?
7:34
Statistics is Fundamental in Natural and Social Sciences
7:53
Two Skills of Statistics
8:20
Description (Exploration)
8:21
Inference
9:13
Descriptive Statistics vs. Inferential Statistics: Apply to Distributions
9:58
Descriptive Statistics
9:59
Inferential Statistics
11:05
Populations vs. Samples
12:19
Populations vs. Samples: Is it the Truth?
12:20
Populations vs. Samples: Pros & Cons
13:36
Populations vs. Samples: Descriptive Values
16:12
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:10
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:11
Example 1: Descriptive Statistics vs. Inferential Statistics
19:09
Example 2: Descriptive Statistics vs. Inferential Statistics
20:47
Example 3: Sample, Parameter, Population, and Statistic
21:40
Example 4: Sample, Parameter, Population, and Statistic
23:28
Section 2: About Samples: Cases, Variables, Measurements

32m 14s

Intro
0:00
Data
0:09
Data, Cases, Variables, and Values
0:10
Rows, Columns, and Cells
2:03
Example: Aircraft
3:52
How Do We Get Data?
5:38
Research: Question and Hypothesis
5:39
Research Design
7:11
Measurement
7:29
Research Analysis
8:33
Research Conclusion
9:30
Types of Variables
10:03
Discrete Variables
10:04
Continuous Variables
12:07
Types of Measurements
14:17
Types of Measurements
14:18
Types of Measurements (Scales)
17:22
Nominal
17:23
Ordinal
19:11
Interval
21:33
Ratio
24:24
Example 1: Cases, Variables, Measurements
25:20
Example 2: Which Scale of Measurement is Used?
26:55
Example 3: What Kind of a Scale of Measurement is This?
27:26
Example 4: Discrete vs. Continuous Variables
30:31
Section 3: Visualizing Distributions
Introduction to Excel

8m 9s

Intro
0:00
Before Visualizing Distribution
0:10
Excel
0:11
Excel: Organization
0:45
Workbook
0:46
Column x Rows
1:50
Tools: Menu Bar, Standard Toolbar, and Formula Bar
3:00
Excel + Data
6:07
Excel and Data
6:08
Frequency Distributions in Excel

39m 10s

Intro
0:00
0:08
Data in Excel and Frequency Distributions
0:09
Raw Data to Frequency Tables
0:42
Raw Data to Frequency Tables
0:43
Frequency Tables: Using Formulas and Pivot Tables
1:28
Example 1: Number of Births
7:17
Example 2: Age Distribution
20:41
Example 3: Height Distribution
27:45
Example 4: Height Distribution of Males
32:19
Frequency Distributions and Features

25m 29s

Intro
0:00
0:10
Data in Excel, Frequency Distributions, and Features of Frequency Distributions
0:11
Example #1
1:35
Uniform
1:36
Example #2
2:58
Unimodal, Skewed Right, and Asymmetric
2:59
Example #3
6:29
Bimodal
6:30
Example #4a
8:29
Symmetric, Unimodal, and Normal
8:30
Point of Inflection and Standard Deviation
11:13
Example #4b
12:43
Normal Distribution
12:44
Summary
13:56
Uniform, Skewed, Bimodal, and Normal
13:57
17:34
Sketch Problem 2: Life Expectancy
20:01
Sketch Problem 3: Telephone Numbers
22:01
Sketch Problem 4: Length of Time Used to Complete a Final Exam
23:43
Dotplots and Histograms in Excel

42m 42s

Intro
0:00
0:06
0:07
Previously
1:02
Data, Frequency Table, and Visualization
1:03
Dotplots
1:22
Dotplots Excel Example
1:23
Dotplots: Pros and Cons
7:22
Pros and Cons of Dotplots
7:23
Dotplots Excel Example Cont.
9:07
Histograms
12:47
Histograms Overview
12:48
Example of Histograms
15:29
Histograms: Pros and Cons
31:39
Pros
31:40
Cons
32:31
Frequency vs. Relative Frequency
32:53
Frequency
32:54
Relative Frequency
33:36
Example 1: Dotplots vs. Histograms
34:36
Example 2: Age of Pennies Dotplot
36:21
Example 3: Histogram of Mammal Speeds
38:27
Example 4: Histogram of Life Expectancy
40:30
Stemplots

12m 23s

Intro
0:00
0:05
0:06
What Sets Stemplots Apart?
0:46
Data Sets, Dotplots, Histograms, and Stemplots
0:47
Example 1: What Do Stemplots Look Like?
1:58
Example 2: Back-to-Back Stemplots
5:00
7:46
Example 4: Quiz Grade & Afterschool Tutoring Stemplot
9:56
Bar Graphs

22m 49s

Intro
0:00
0:05
0:08
Review of Frequency Distributions
0:44
Y-axis and X-axis
0:45
Types of Frequency Visualizations Covered so Far
2:16
Introduction to Bar Graphs
4:07
Example 1: Bar Graph
5:32
Example 1: Bar Graph
5:33
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:07
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:08
Example 2: Create a Frequency Visualization for Gender
14:02
Example 3: Cases, Variables, and Frequency Visualization
16:34
Example 4: What Kind of Graphs are Shown Below?
19:29
Section 4: Summarizing Distributions
Central Tendency: Mean, Median, Mode

38m 50s

Intro
0:00
0:07
0:08
Central Tendency 1
0:56
Way to Summarize a Distribution of Scores
0:57
Mode
1:32
Median
2:02
Mean
2:36
Central Tendency 2
3:47
Mode
3:48
Median
4:20
Mean
5:25
Summation Symbol
6:11
Summation Symbol
6:12
Population vs. Sample
10:46
Population vs. Sample
10:47
Excel Examples
15:08
Finding Mode, Median, and Mean in Excel
15:09
Median vs. Mean
21:45
Effect of Outliers
21:46
Relationship Between Parameter and Statistic
22:44
Type of Measurements
24:00
Which Distributions to Use With
24:55
Example 1: Mean
25:30
Example 2: Using Summation Symbol
29:50
Example 3: Average Calorie Count
32:50
Example 4: Creating an Example Set
35:46
Variability

42m 40s

Intro
0:00
0:05
0:06
0:45
0:46
5:45
5:46
Range, Quartiles and Interquartile Range
6:37
Range
6:38
Interquartile Range
8:42
Interquartile Range Example
10:58
Interquartile Range Example
10:59
Variance and Standard Deviation
12:27
Deviations
12:28
Sum of Squares
14:35
Variance
16:55
Standard Deviation
17:44
Sum of Squares (SS)
18:34
Sum of Squares (SS)
18:35
Population vs. Sample SD
22:00
Population vs. Sample SD
22:01
Population vs. Sample
23:20
Mean
23:21
SD
23:51
Example 1: Find the Mean and Standard Deviation of the Variable Friends in the Excel File
27:21
Example 2: Find the Mean and Standard Deviation of the Tagged Photos in the Excel File
35:25
Example 3: Sum of Squares
38:58
Example 4: Standard Deviation
41:48
Five Number Summary & Boxplots

57m 15s

Intro
0:00
0:06
0:07
Summarizing Distributions
0:37
0:38
5 Number Summary
1:14
Boxplot: Visualizing 5 Number Summary
3:37
Boxplot: Visualizing 5 Number Summary
3:38
Boxplots on Excel
9:01
Using 'Stocks' and Using Stacked Columns
9:02
Boxplots on Excel Example
10:14
When are Boxplots Useful?
32:14
Pros
32:15
Cons
32:59
How to Determine Outlier Status
33:24
Rule of Thumb: Upper Limit
33:25
Rule of Thumb: Lower Limit
34:16
Signal Outliers in an Excel Data File Using Conditional Formatting
34:52
Modified Boxplot
48:38
Modified Boxplot
48:39
Example 1: Percentage Values & Lower and Upper Whisker
49:10
Example 2: Boxplot
50:10
Example 3: Estimating IQR From Boxplot
53:46
Example 4: Boxplot and Missing Whisker
54:35
Shape: Calculating Skewness & Kurtosis

41m 51s

Intro
0:00
0:16
0:17
Skewness Concept
1:09
Skewness Concept
1:10
Calculating Skewness
3:26
Calculating Skewness
3:27
Interpreting Skewness
7:36
Interpreting Skewness
7:37
Excel Example
8:49
Kurtosis Concept
20:29
Kurtosis Concept
20:30
Calculating Kurtosis
24:17
Calculating Kurtosis
24:18
Interpreting Kurtosis
29:01
Leptokurtic
29:35
Mesokurtic
30:10
Platykurtic
31:06
Excel Example
32:04
Example 1: Shape of Distribution
38:28
Example 2: Shape of Distribution
39:29
Example 3: Shape of Distribution
40:14
Example 4: Kurtosis
41:10
Normal Distribution

34m 33s

Intro
0:00
0:13
0:14
What is a Normal Distribution
0:44
The Normal Distribution As a Theoretical Model
0:45
Possible Range of Probabilities
3:05
Possible Range of Probabilities
3:06
What is a Normal Distribution
5:07
Can Be Described By
5:08
Properties
5:49
'Same' Shape: Illusion of Different Shape!
7:35
'Same' Shape: Illusion of Different Shape!
7:36
Types of Problems
13:45
Example: Distribution of SAT Scores
13:46
Shape Analogy
19:48
Shape Analogy
19:49
Example 1: The Standard Normal Distribution and Z-Scores
22:34
Example 2: The Standard Normal Distribution and Z-Scores
25:54
Example 3: Sketching a Normal Distribution
28:55
Example 4: Sketching a Normal Distribution
32:32
Standard Normal Distributions & Z-Scores

41m 44s

Intro
0:00
0:06
0:07
A Family of Distributions
0:28
Infinite Set of Distributions
0:29
Transforming Normal Distributions to 'Standard' Normal Distribution
1:04
Normal Distribution vs. Standard Normal Distribution
2:58
Normal Distribution vs. Standard Normal Distribution
2:59
Z-Score, Raw Score, Mean, & SD
4:08
Z-Score, Raw Score, Mean, & SD
4:09
Weird Z-Scores
9:40
Weird Z-Scores
9:41
Excel
16:45
For Normal Distributions
16:46
For Standard Normal Distributions
19:11
Excel Example
20:24
Types of Problems
25:18
Percentage Problem: P(x)
25:19
Raw Score and Z-Score Problems
26:28
Standard Deviation Problems
27:01
Shape Analogy
27:44
Shape Analogy
27:45
Example 1: Deaths Due to Heart Disease vs. Deaths Due to Cancer
28:24
Example 2: Heights of Male College Students
33:15
Example 3: Mean and Standard Deviation
37:14
Example 4: Finding Percentage of Values in a Standard Normal Distribution
37:49
Normal Distribution: PDF vs. CDF

55m 44s

Intro
0:00
0:15
0:16
Frequency vs. Cumulative Frequency
0:56
Frequency vs. Cumulative Frequency
0:57
Frequency vs. Cumulative Frequency
4:32
Frequency vs. Cumulative Frequency Cont.
4:33
Calculus in Brief
6:21
Derivative-Integral Continuum
6:22
PDF
10:08
PDF for Standard Normal Distribution
10:09
PDF for Normal Distribution
14:32
Integral of PDF = CDF
21:27
Integral of PDF = CDF
21:28
Example 1: Cumulative Frequency Graph
23:31
Example 2: Mean, Standard Deviation, and Probability
24:43
Example 3: Mean and Standard Deviation
35:50
Example 4: Age of Cars
49:32
Section 5: Linear Regression
Scatterplots

47m 19s

Intro
0:00
0:04
0:05
Previous Visualizations
0:30
Frequency Distributions
0:31
Compare & Contrast
2:26
Frequency Distributions vs. Scatterplots
2:27
Summary Values
4:53
Shape
4:54
Center & Trend
6:41
8:22
Univariate & Bivariate
10:25
Example Scatterplot
10:48
Shape, Trend, and Strength
10:49
Positive and Negative Association
14:05
Positive and Negative Association
14:06
Linearity, Strength, and Consistency
18:30
Linearity
18:31
Strength
19:14
Consistency
20:40
Summarizing a Scatterplot
22:58
Summarizing a Scatterplot
22:59
Example 1: Gapminder.org, Income x Life Expectancy
26:32
Example 2: Gapminder.org, Income x Infant Mortality
36:12
Example 3: Trend and Strength of Variables
40:14
Example 4: Trend, Strength and Shape for Scatterplots
43:27
Regression

32m 2s

Intro
0:00
0:05
0:06
Linear Equations
0:34
Linear Equations: y = mx + b
0:35
Rough Line
5:16
Rough Line
5:17
Regression - A 'Center' Line
7:41
Reasons for Summarizing with a Regression Line
7:42
Predictor and Response Variable
10:04
Goal of Regression
12:29
Goal of Regression
12:30
Prediction
14:50
Example: Servings of Milk Per Year Shown by Age
14:51
Interpolation
17:06
Extrapolation
17:58
Error in Prediction
20:34
Prediction Error
20:35
Residual
21:40
Example 1: Residual
23:34
Example 2: Large and Negative Residual
26:30
Example 3: Positive Residual
28:13
Example 4: Interpret Regression Line & Extrapolate
29:40
Least Squares Regression

56m 36s

Intro
0:00
0:13
0:14
Best Fit
0:47
Best Fit
0:48
Sum of Squared Errors (SSE)
1:50
Sum of Squared Errors (SSE)
1:51
Why Squared?
3:38
Why Squared?
3:39
Quantitative Properties of Regression Line
4:51
Quantitative Properties of Regression Line
4:52
So How do we Find Such a Line?
6:49
SSEs of Different Line Equations & Lowest SSE
6:50
Carl Gauss' Method
8:01
How Do We Find Slope (b1)
11:00
How Do We Find Slope (b1)
11:01
How Do We Find Intercept
15:11
How Do We Find Intercept
15:12
Example 1: Which of These Equations Fit the Above Data Best?
17:18
Example 2: Find the Regression Line for These Data Points and Interpret It
26:31
Example 3: Summarize the Scatterplot and Find the Regression Line.
34:31
Example 4: Examine the Mean of Residuals
43:52
Correlation

43m 58s

Intro
0:00
0:05
0:06
Summarizing a Scatterplot Quantitatively
0:47
Shape
0:48
Trend
1:11
Strength: Correlation (r)
1:45
Correlation Coefficient ( r )
2:30
Correlation Coefficient ( r )
2:31
Trees vs. Forest
11:59
Trees vs. Forest
12:00
Calculating r
15:07
Average Product of z-scores for x and y
15:08
Relationship between Correlation and Slope
21:10
Relationship between Correlation and Slope
21:11
Example 1: Find the Correlation between Grams of Fat and Cost
24:11
Example 2: Relationship between r and b1
30:24
Example 3: Find the Regression Line
33:35
Example 4: Find the Correlation Coefficient for this Set of Data
37:37
Correlation: r vs. r-squared

52m 52s

Intro
0:00
0:07
0:08
R-squared
0:44
What is the Meaning of It? Why Squared?
0:45
Parsing Sum of Squares (Parsing Variability)
2:25
SST = SSR + SSE
2:26
What is SST and SSE?
7:46
What is SST and SSE?
7:47
r-squared
18:33
Coefficient of Determination
18:34
If the Correlation is Strong…
20:25
If the Correlation is Strong…
20:26
If the Correlation is Weak…
22:36
If the Correlation is Weak…
22:37
Example 1: Find r-squared for this Set of Data
23:56
Example 2: What Does it Mean that the Simple Linear Regression is a 'Model' of Variance?
33:54
Example 3: Why Does r-squared Only Range from 0 to 1?
37:29
Example 4: Find the r-squared for This Set of Data
39:55
Transformations of Data

27m 8s

Intro
0:00
0:05
0:06
Why Transform?
0:26
Why Transform?
0:27
Shape-preserving vs. Shape-changing Transformations
5:14
Shape-preserving = Linear Transformations
5:15
Shape-changing Transformations = Non-linear Transformations
6:20
Common Shape-Preserving Transformations
7:08
Common Shape-Preserving Transformations
7:09
Common Shape-Changing Transformations
8:59
Powers
9:00
Logarithms
9:39
Change Just One Variable? Both?
10:38
Log-log Transformations
10:39
Log Transformations
14:38
Example 1: Create, Graph, and Transform the Data Set
15:19
Example 2: Create, Graph, and Transform the Data Set
20:08
Example 3: What Kind of Model would You Choose for this Data?
22:44
Example 4: Transformation of Data
25:46
Section 6: Collecting Data in an Experiment
Sampling & Bias

54m 44s

Intro
0:00
0:05
0:06
Descriptive vs. Inferential Statistics
1:04
Descriptive Statistics: Data Exploration
1:05
Example
2:03
To Tackle Generalization…
4:31
Generalization
4:32
Sampling
6:06
'Good' Sample
6:40
Defining Samples and Populations
8:55
Population
8:56
Sample
11:16
Why Use Sampling?
13:09
Why Use Sampling?
13:10
Goal of Sampling: Avoiding Bias
15:04
What is Bias?
15:05
Where does Bias Come from: Sampling Bias
17:53
Where does Bias Come from: Response Bias
18:27
Sampling Bias: Bias from 'Bad' Sampling Methods
19:34
Size Bias
19:35
Voluntary Response Bias
21:13
Convenience Sample
22:22
Judgment Sample
23:58
25:40
Response Bias: Bias from 'Bad' Data Collection Methods
28:00
Nonresponse Bias
29:31
Questionnaire Bias
31:10
Incorrect Response or Measurement Bias
37:32
Example 1: What Kind of Biases?
40:29
Example 2: What Biases Might Arise?
44:46
Example 3: What Kind of Biases?
48:34
Example 4: What Kind of Biases?
51:43
Sampling Methods

14m 25s

Intro
0:00
0:05
0:06
Biased vs. Unbiased Sampling Methods
0:32
Biased Sampling
0:33
Unbiased Sampling
1:13
Probability Sampling Methods
2:31
Simple Random
2:54
Stratified Random Sampling
4:06
Cluster Sampling
5:24
Two-staged Sampling
6:22
Systematic Sampling
7:25
8:33
Example 2: Describe How to Take a Two-Stage Sample from this Book
10:16
Example 3: Sampling Methods
11:58
Example 4: Cluster Sample Plan
12:48
Research Design

53m 54s

Intro
0:00
0:06
0:07
Descriptive vs. Inferential Statistics
0:51
Descriptive Statistics: Data Exploration
0:52
Inferential Statistics
1:02
Variables and Relationships
1:44
Variables
1:45
Relationships
2:49
Not Every Type of Study is an Experiment…
4:16
Category I - Descriptive Study
4:54
Category II - Correlational Study
5:50
Category III - Experimental, Quasi-experimental, Non-experimental
6:33
Category III
7:42
Experimental, Quasi-experimental, and Non-experimental
7:43
Why CAN'T the Other Strategies Determine Causation?
10:18
Third-variable Problem
10:19
Directionality Problem
15:49
What Makes Experiments Special?
17:54
Manipulation
17:55
Control (and Comparison)
21:58
Methods of Control
26:38
Holding Constant
26:39
Matching
29:11
Random Assignment
31:48
Experiment Terminology
34:09
'true' Experiment vs. Study
34:10
Independent Variable (IV)
35:16
Dependent Variable (DV)
35:45
Factors
36:07
Treatment Conditions
36:23
Levels
37:43
Confounds or Extraneous Variables
38:04
Blind
38:38
Blind Experiments
38:39
Double-blind Experiments
39:29
How Categories Relate to Statistics
41:35
Category I - Descriptive Study
41:36
Category II - Correlational Study
42:05
Category III - Experimental, Quasi-experimental, Non-experimental
42:43
Example 1: Research Design
43:50
Example 2: Research Design
47:37
Example 3: Research Design
50:12
Example 4: Research Design
52:00
Between and Within Treatment Variability

41m 31s

Intro
0:00
0:06
0:07
Experimental Designs
0:51
Experimental Designs: Manipulation & Control
0:52
Two Types of Variability
2:09
Between Treatment Variability
2:10
Within Treatment Variability
3:31
Updated Goal of Experimental Design
5:47
Updated Goal of Experimental Design
5:48
Example: Drugs and Driving
6:56
Example: Drugs and Driving
6:57
Different Types of Random Assignment
11:27
All Experiments
11:28
Completely Random Design
12:02
Randomized Block Design
13:19
Randomized Block Design
15:48
Matched Pairs Design
15:49
Repeated Measures Design
19:47
Between-subject Variable vs. Within-subject Variable
22:43
Completely Randomized Design
22:44
Repeated Measures Design
25:03
Example 1: Design a Completely Random, Matched Pair, and Repeated Measures Experiment
26:16
Example 2: Block Design
31:41
Example 3: Completely Randomized Designs
35:11
Example 4: Completely Random, Matched Pairs, or Repeated Measures Experiments?
39:01
Section 7: Review of Probability Axioms
Sample Spaces

37m 52s

Intro
0:00
0:07
0:08
Why is Probability Involved in Statistics
0:48
Probability
0:49
Can People Tell the Difference between Cheap and Gourmet Coffee?
2:08
Taste Test with Coffee Drinkers
3:37
If No One can Actually Taste the Difference
3:38
If Everyone can Actually Taste the Difference
5:36
Creating a Probability Model
7:09
Creating a Probability Model
7:10
D'Alembert vs. Necker
9:41
D'Alembert vs. Necker
9:42
Problem with D'Alembert's Model
13:29
Problem with D'Alembert's Model
13:30
Covering Entire Sample Space
15:08
Fundamental Principle of Counting
15:09
Where Do Probabilities Come From?
22:54
Observed Data, Symmetry, and Subjective Estimates
22:55
Checking whether Model Matches Real World
24:27
Law of Large Numbers
24:28
Example 1: Law of Large Numbers
27:46
Example 2: Possible Outcomes
30:43
Example 3: Brands of Coffee and Taste
33:25
Example 4: How Many Different Treatments are there?
35:33

20m 29s

Intro
0:00
0:08
0:09
Disjoint Events
0:41
Disjoint Events
0:42
Meaning of 'or'
2:39
In Regular Life
2:40
In Math/Statistics/Computer Science
3:10
3:55
If A and B are Disjoint: P (A and B)
3:56
If A and B are Disjoint: P (A or B)
5:15
5:41
5:42
8:31
If A and B are not Disjoint: P (A or B)
8:32
Example 1: Which of These are Mutually Exclusive?
10:50
Example 2: What is the Probability that You will Have a Combination of One Heads and Two Tails?
12:57
Example 3: Engagement Party
15:17
Example 4: Home Owner's Insurance
18:30
Conditional Probability

57m 19s

Intro
0:00
0:05
0:06
'or' vs. 'and' vs. Conditional Probability
1:07
'or' vs. 'and' vs. Conditional Probability
1:08
'and' vs. Conditional Probability
5:57
P (M or L)
5:58
P (M and L)
8:41
P (M|L)
11:04
P (L|M)
12:24
Tree Diagram
15:02
Tree Diagram
15:03
Defining Conditional Probability
22:42
Defining Conditional Probability
22:43
Common Contexts for Conditional Probability
30:56
Medical Testing: Positive Predictive Value
30:57
Medical Testing: Sensitivity
33:03
Statistical Tests
34:27
Example 1: Drug and Disease
36:41
Example 2: Marbles and Conditional Probability
40:04
Example 3: Cards and Conditional Probability
45:59
Example 4: Votes and Conditional Probability
50:21
Independent Events

24m 27s

Intro
0:00
0:05
0:06
Independent Events & Conditional Probability
0:26
Non-independent Events
0:27
Independent Events
2:00
Non-independent and Independent Events
3:08
Non-independent and Independent Events
3:09
Defining Independent Events
5:52
Defining Independent Events
5:53
Multiplication Rule
7:29
Previously…
7:30
But with Independent Events
8:53
Example 1: Which of These Pairs of Events are Independent?
11:12
Example 2: Health Insurance and Probability
15:12
Example 3: Independent Events
17:42
Example 4: Independent Events
20:03
Section 8: Probability Distributions
Introduction to Probability Distributions

56m 45s

Intro
0:00
0:08
0:09
Sampling vs. Probability
0:57
Sampling
0:58
Missing
1:30
What is Missing?
3:06
Insight: Probability Distributions
5:26
Insight: Probability Distributions
5:27
What is a Probability Distribution?
7:29
From Sample Spaces to Probability Distributions
8:44
Sample Space
8:45
Probability Distribution of the Sum of Two Dice
11:16
The Random Variable
17:43
The Random Variable
17:44
Expected Value
21:52
Expected Value
21:53
Example 1: Probability Distributions
28:45
Example 2: Probability Distributions
35:30
Example 3: Probability Distributions
43:37
Example 4: Probability Distributions
47:20
Expected Value & Variance of Probability Distributions

53m 41s

Intro
0:00
0:06
0:07
Discrete vs. Continuous Random Variables
1:04
Discrete vs. Continuous Random Variables
1:05
Mean and Variance Review
4:44
Mean: Sample, Population, and Probability Distribution
4:45
Variance: Sample, Population, and Probability Distribution
9:12
Example Situation
14:10
Example Situation
14:11
Some Special Cases…
16:13
Some Special Cases…
16:14
Linear Transformations
19:22
Linear Transformations
19:23
What Happens to Mean and Variance of the Probability Distribution?
20:12
n Independent Values of X
25:38
n Independent Values of X
25:39
Compare These Two Situations
30:56
Compare These Two Situations
30:57
Two Random Variables, X and Y
32:02
Two Random Variables, X and Y
32:03
Example 1: Expected Value & Variance of Probability Distributions
35:35
Example 2: Expected Values & Standard Deviation
44:17
Example 3: Expected Winnings and Standard Deviation
48:18
Binomial Distribution

55m 15s

Intro
0:00
0:05
0:06
Discrete Probability Distributions
1:42
Discrete Probability Distributions
1:43
Binomial Distribution
2:36
Binomial Distribution
2:37
Multiplicative Rule Review
6:54
Multiplicative Rule Review
6:55
How Many Outcomes with k 'Successes'
10:23
Adults and Bachelor's Degree: Manual List of Outcomes
10:24
P (X=k)
19:37
Putting Together # of Outcomes with the Multiplicative Rule
19:38
Expected Value and Standard Deviation in a Binomial Distribution
25:22
Expected Value and Standard Deviation in a Binomial Distribution
25:23
Example 1: Coin Toss
33:42
38:03
Example 3: Types of Blood and Probability
45:39
Example 4: Expected Number and Standard Deviation
51:11
Section 9: Sampling Distributions of Statistics
Introduction to Sampling Distributions

48m 17s

Intro
0:00
0:08
0:09
Probability Distributions vs. Sampling Distributions
0:55
Probability Distributions vs. Sampling Distributions
0:56
Same Logic
3:55
Logic of Probability Distribution
3:56
Example: Rolling Two Dice
6:56
Simulating Samples
9:53
To Come Up with Probability Distributions
9:54
In Sampling Distributions
11:12
Connecting Sampling and Research Methods with Sampling Distributions
12:11
Connecting Sampling and Research Methods with Sampling Distributions
12:12
Simulating a Sampling Distribution
14:14
Experimental Design: Regular Sleep vs. Less Sleep
14:15
Logic of Sampling Distributions
23:08
Logic of Sampling Distributions
23:09
General Method of Simulating Sampling Distributions
25:38
General Method of Simulating Sampling Distributions
25:39
Questions that Remain
28:45
Questions that Remain
28:46
Example 1: Mean and Standard Error of Sampling Distribution
30:57
Example 2: What is the Best Way to Describe Sampling Distributions?
37:12
Example 3: Matching Sampling Distributions
38:21
Example 4: Mean and Standard Error of Sampling Distribution
41:51
Sampling Distribution of the Mean

1h 8m 48s

Intro
0:00
0:05
0:06
Special Case of General Method for Simulating a Sampling Distribution
1:53
Special Case of General Method for Simulating a Sampling Distribution
1:54
Computer Simulation
3:43
Using Simulations to See Principles behind Shape of SDoM
15:50
Using Simulations to See Principles behind Shape of SDoM
15:51
Conditions
17:38
Using Simulations to See Principles behind Center (Mean) of SDoM
20:15
Using Simulations to See Principles behind Center (Mean) of SDoM
20:16
Conditions: Does n Matter?
21:31
Conditions: Does Number of Simulations Matter?
24:37
Using Simulations to See Principles behind Standard Deviation of SDoM
27:13
Using Simulations to See Principles behind Standard Deviation of SDoM
27:14
Conditions: Does n Matter?
34:45
Conditions: Does Number of Simulations Matter?
36:24
Central Limit Theorem
37:13
SHAPE
38:08
CENTER
39:34
39:52
Comparing Population, Sample, and SDoM
43:10
Comparing Population, Sample, and SDoM
43:11
48:24
What Happens When We Don't Know What the Population Looks Like?
48:25
Can We Have Sampling Distributions for Summary Statistics Other than the Mean?
49:42
How Do We Know whether a Sample is Sufficiently Unlikely?
53:36
Do We Always Have to Simulate a Large Number of Samples in Order to get a Sampling Distribution?
54:40
Example 1: Mean Batting Average
55:25
Example 2: Mean Sampling Distribution and Standard Error
59:07
Example 3: Sampling Distribution of the Mean
1:01:04
Sampling Distribution of Sample Proportions

54m 37s

Intro
0:00
0:06
0:07
Intro to Sampling Distribution of Sample Proportions (SDoSP)
0:51
Categorical Data (Examples)
0:52
Wish to Estimate Proportion of Population from Sample…
2:00
Notation
3:34
Population Proportion and Sample Proportion Notations
3:35
What's the Difference?
9:19
SDoM vs. SDoSP: Type of Data
9:20
SDoM vs. SDoSP: Shape
11:24
SDoM vs. SDoSP: Center
12:30
15:34
Binomial Distribution vs. Sampling Distribution of Sample Proportions
19:14
Binomial Distribution vs. SDoSP: Type of Data
19:17
Binomial Distribution vs. SDoSP: Shape
21:07
Binomial Distribution vs. SDoSP: Center
21:43
24:08
Example 1: Sampling Distribution of Sample Proportions
26:07
Example 2: Sampling Distribution of Sample Proportions
37:58
Example 3: Sampling Distribution of Sample Proportions
44:42
Example 4: Sampling Distribution of Sample Proportions
45:57
Section 10: Inferential Statistics
Introduction to Confidence Intervals

42m 53s

Intro
0:00
0:06
0:07
Inferential Statistics
0:50
Inferential Statistics
0:51
Two Problems with This Picture…
3:20
Two Problems with This Picture…
3:21
Solution: Confidence Intervals (CI)
4:59
Solution: Hypothesis Testing (HT)
5:49
Which Parameters are Known?
6:45
Which Parameters are Known?
6:46
Confidence Interval - Goal
7:56
When We Don't Know μ but Know σ
7:57
When We Don't Know μ nor σ
18:27
When We Don't Know μ nor σ
18:28
Example 1: Confidence Intervals
26:18
Example 2: Confidence Intervals
29:46
Example 3: Confidence Intervals
32:18
Example 4: Confidence Intervals
38:31
t Distributions

1h 2m 6s

Intro
0:00
0:04
0:05
When to Use z vs. t?
1:07
When to Use z vs. t?
1:08
What Are z and t?
3:02
z-score and t-score: Commonality
3:03
z-score and t-score: Formulas
3:34
z-score and t-score: Difference
5:22
Why not z? (Why t?)
7:24
Why not z? (Why t?)
7:25
But Don't Worry!
15:13
Gossett and t-distributions
15:14
Rules of t Distributions
17:05
t-distributions are More Normal as n Gets Bigger
17:06
t-distributions are a Family of Distributions
18:55
Degrees of Freedom (df)
20:02
Degrees of Freedom (df)
20:03
t Family of Distributions
24:07
t Family of Distributions: df = 2, 4, and 60
24:08
df = 60
29:16
df = 2
29:59
How to Find It?
31:01
'Student's t-distribution' or 't-distribution'
31:02
Excel Example
33:06
Example 1: Which Distribution Do You Use? Z or t?
45:26
47:41
Example 3: t Distributions
52:15
Example 4: t Distributions, Confidence Interval, and Mean
55:59
Introduction to Hypothesis Testing

1h 6m 33s

Intro
0:00
0:06
0:07
Issues to Overcome in Inferential Statistics
1:35
Issues to Overcome in Inferential Statistics
1:36
What Happens When We Don't Know What the Population Looks Like?
2:57
How Do We Know whether a Sample is Sufficiently Unlikely?
3:43
Hypothesizing a Population
6:44
Hypothesizing a Population
6:45
Null Hypothesis
8:07
Alternative Hypothesis
8:56
Hypotheses
11:58
Hypotheses
11:59
Errors in Hypothesis Testing
14:22
Errors in Hypothesis Testing
14:23
Steps of Hypothesis Testing
21:15
Steps of Hypothesis Testing
21:16
Single Sample HT (When Sigma Available)
26:08
26:09
Step 1
27:08
Step 2
27:58
Step 3
28:17
Step 4
32:18
Single Sample HT (When Sigma Not Available)
36:33
36:34
Step 1: Hypothesis Testing
36:58
Step 2: Significance Level
37:25
Step 3: Decision Stage
37:40
Step 4: Sample
41:36
Sigma and p-value
45:04
Sigma and p-value
45:05
One-tailed vs. Two-tailed Hypotheses
45:51
Example 1: Hypothesis Testing
48:37
Example 2: Heights of Women in the US
57:43
Example 3: Select the Best Way to Complete This Sentence
1:03:23
Confidence Intervals for the Difference of Two Independent Means

55m 14s

Intro
0:00
0:14
0:15
One Mean vs. Two Means
1:17
One Mean vs. Two Means
1:18
Notation
2:41
A Sample! A Set!
2:42
Mean of X, Mean of Y, and Difference of Two Means
3:56
SE of X
4:34
SE of Y
6:28
Sampling Distribution of the Difference between Two Means (SDoD)
7:48
Sampling Distribution of the Difference between Two Means (SDoD)
7:49
Rules of the SDoD (similar to CLT!)
15:00
Mean for the SDoD Null Hypothesis
15:01
Standard Error
17:39
When can We Construct a CI for the Difference between Two Means?
21:28
Three Conditions
21:29
Finding CI
23:56
One Mean CI
23:57
Two Means CI
25:45
Finding t
29:16
Finding t
29:17
Interpreting CI
30:25
Interpreting CI
30:26
Better Estimate of σ (s pool)
34:15
Better Estimate of σ (s pool)
34:16
Example 1: Confidence Intervals
42:32
Example 2: SE of the Difference
52:36
Hypothesis Testing for the Difference of Two Independent Means

50m

Intro
0:00
0:06
0:07
The Goal of Hypothesis Testing
0:56
One Sample and Two Samples
0:57
Sampling Distribution of the Difference between Two Means (SDoD)
3:42
Sampling Distribution of the Difference between Two Means (SDoD)
3:43
Rules of the SDoD (Similar to CLT!)
6:46
Shape
6:47
Mean for the Null Hypothesis
7:26
Standard Error for Independent Samples (When Variance is Homogenous)
8:18
Standard Error for Independent Samples (When Variance is not Homogenous)
9:25
Same Conditions for HT as for CI
10:08
Three Conditions
10:09
Steps of Hypothesis Testing
11:04
Steps of Hypothesis Testing
11:05
Formulas that Go with Steps of Hypothesis Testing
13:21
Step 1
13:25
Step 2
14:18
Step 3
15:00
Step 4
16:57
Example 1: Hypothesis Testing for the Difference of Two Independent Means
18:47
Example 2: Hypothesis Testing for the Difference of Two Independent Means
33:55
Example 3: Hypothesis Testing for the Difference of Two Independent Means
44:22
Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means

1h 14m 11s

Intro
0:00
0:09
0:10
The Goal of Hypothesis Testing
1:27
One Sample and Two Samples
1:28
Independent Samples vs. Paired Samples
3:16
Independent Samples vs. Paired Samples
3:17
Which is Which?
5:20
Independent SAMPLES vs. Independent VARIABLES
7:43
Independent SAMPLES vs. Independent VARIABLES
7:44
T-tests Always…
10:48
T-tests Always…
10:49
Notation for Paired Samples
12:59
Notation for Paired Samples
13:00
Steps of Hypothesis Testing for Paired Samples
16:13
Steps of Hypothesis Testing for Paired Samples
16:14
Rules of the SDoD (Adding on Paired Samples)
18:03
Shape
18:04
Mean for the Null Hypothesis
18:31
Standard Error for Independent Samples (When Variance is Homogenous)
19:25
Standard Error for Paired Samples
20:39
Formulas that go with Steps of Hypothesis Testing
22:59
Formulas that go with Steps of Hypothesis Testing
23:00
Confidence Intervals for Paired Samples
30:32
Confidence Intervals for Paired Samples
30:33
Example 1: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
32:28
Example 2: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
44:02
Example 3: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
52:23
Type I and Type II Errors

31m 27s

Intro
0:00
0:18
0:19
Errors and Relationship to HT and the Sample Statistic?
1:11
Errors and Relationship to HT and the Sample Statistic?
1:12
7:00
One Sample t-test: Friends on Facebook
7:01
Two Sample t-test: Friends on Facebook
13:46
Usually, Lots of Overlap between Null and Alternative Distributions
16:59
Overlap between Null and Alternative Distributions
17:00
How Distributions and 'Box' Fit Together
22:45
How Distributions and 'Box' Fit Together
22:46
Example 1: Types of Errors
25:54
Example 2: Types of Errors
27:30
Example 3: What is the Danger of the Type I Error?
29:38
Effect Size & Power

44m 41s

Intro
0:00
0:05
0:06
Distance between Distributions: Sample t
0:49
Distance between Distributions: Sample t
0:50
Problem with Distance in Terms of Standard Error
2:56
Problem with Distance in Terms of Standard Error
2:57
Test Statistic (t) vs. Effect Size (d or g)
4:38
Test Statistic (t) vs. Effect Size (d or g)
4:39
Rules of Effect Size
6:09
Rules of Effect Size
6:10
Why Do We Need Effect Size?
8:21
Tells You the Practical Significance
8:22
HT can be Deceiving…
10:25
Important Note
10:42
What is Power?
11:20
What is Power?
11:21
Why Do We Need Power?
14:19
Conditional Probability and Power
14:20
Power is:
16:27
Can We Calculate Power?
19:00
Can We Calculate Power?
19:01
How Does Alpha Affect Power?
20:36
How Does Alpha Affect Power?
20:37
How Does Effect Size Affect Power?
25:38
How Does Effect Size Affect Power?
25:39
How Does Variability and Sample Size Affect Power?
27:56
How Does Variability and Sample Size Affect Power?
27:57
How Do We Increase Power?
32:47
Increasing Power
32:48
Example 1: Effect Size & Power
35:40
Example 2: Effect Size & Power
37:38
Example 3: Effect Size & Power
40:55
Section 11: Analysis of Variance
F-distributions

24m 46s

Intro
0:00
0:04
0:05
Z- & T-statistic and Their Distribution
0:34
Z- & T-statistic and Their Distribution
0:35
F-statistic
4:55
The F Ratio (the Variance Ratio)
4:56
F-distribution
12:29
F-distribution
12:30
s and p-value
15:00
s and p-value
15:01
Example 1: Why Does F-distribution Stop At 0 But Go On Until Infinity?
18:33
Example 2: F-distributions
19:29
Example 3: F-distributions and Heights
21:29
ANOVA with Independent Samples

1h 9m 25s

Intro
0:00
0:05
0:06
The Limitations of t-tests
1:12
The Limitations of t-tests
1:13
Two Major Limitations of Many t-tests
3:26
Two Major Limitations of Many t-tests
3:27
Ronald Fisher's Solution… F-test! New Null Hypothesis
4:43
Ronald Fisher's Solution… F-test! New Null Hypothesis (Omnibus Test - One Test to Rule Them All!)
4:44
Analysis of Variance (ANOVA) Notation
7:47
Analysis of Variance (ANOVA) Notation
7:48
Partitioning (Analyzing) Variance
9:58
Total Variance
9:59
Within-group Variation
14:00
Between-group Variation
16:22
Time out: Review Variance & SS
17:05
Time out: Review Variance & SS
17:06
F-statistic
19:22
The F Ratio (the Variance Ratio)
19:23
S²bet = SSbet / dfbet
22:13
What is This?
22:14
How Many Means?
23:20
So What is the dfbet?
23:38
So What is SSbet?
24:15
S²w = SSw / dfw
26:05
What is This?
26:06
How Many Means?
27:20
So What is the dfw?
27:36
So What is SSw?
28:18
Chart of Independent Samples ANOVA
29:25
Chart of Independent Samples ANOVA
29:26
Example 1: Who Uploads More Photos: Unknown Ethnicity, Latino, Asian, Black, or White Facebook Users?
35:52
Hypotheses
35:53
Significance Level
39:40
Decision Stage
40:05
Calculate Samples' Statistic and p-Value
44:10
Reject or Fail to Reject H0
55:54
Example 2: ANOVA with Independent Samples
58:21
Repeated Measures ANOVA

1h 15m 13s

Intro
0:00
0:05
0:06
The Limitations of t-tests
0:36
Who Uploads more Pictures and Which Photo-Type is Most Frequently Used on Facebook?
0:37
ANOVA (F-test) to the Rescue!
5:49
Omnibus Hypothesis
5:50
Analyze Variance
7:27
Independent Samples vs. Repeated Measures
9:12
Same Start
9:13
Independent Samples ANOVA
10:43
Repeated Measures ANOVA
12:00
Independent Samples ANOVA
16:00
Same Start: All the Variance Around Grand Mean
16:01
Independent Samples
16:23
Repeated Measures ANOVA
18:18
Same Start: All the Variance Around Grand Mean
18:19
Repeated Measures
18:33
Repeated Measures F-statistic
21:22
The F Ratio (The Variance Ratio)
21:23
S²bet = SSbet / dfbet
23:07
What is This?
23:08
How Many Means?
23:39
So What is the dfbet?
23:54
So What is SSbet?
24:32
S² resid = SS resid / df resid
25:46
What is This?
25:47
So What is SS resid?
26:44
So What is the df resid?
27:36
SS subj and df subj
28:11
What is This?
28:12
How Many Subject Means?
29:43
So What is df subj?
30:01
So What is SS subj?
30:09
SS total and df total
31:42
What is This?
31:43
What is the Total Number of Data Points?
32:02
So What is df total?
32:34
So What is SS total?
32:47
Chart of Repeated Measures ANOVA
33:19
Chart of Repeated Measures ANOVA: F and Between-samples Variability
33:20
Chart of Repeated Measures ANOVA: Total Variability, Within-subject (case) Variability, Residual Variability
35:50
Example 1: Which is More Prevalent on Facebook: Tagged, Uploaded, Mobile, or Profile Photos?
40:25
Hypotheses
40:26
Significance Level
41:46
Decision Stage
42:09
Calculate Samples' Statistic and p-Value
46:18
Reject or Fail to Reject H0
57:55
Example 2: Repeated Measures ANOVA
58:57
Example 3: What's the Problem with a Bunch of Tiny t-tests?
1:13:59
Section 12: Chi-square Test
Chi-Square Goodness-of-Fit Test

58m 23s

Intro
0:00
0:05
0:06
Where Does the Chi-Square Test Belong?
0:50
Where Does the Chi-Square Test Belong?
0:51
A New Twist on HT: Goodness-of-Fit
7:23
HT in General
7:24
Goodness-of-Fit HT
8:26
12:17
Null Hypothesis
12:18
Alternative Hypothesis
13:23
Example
14:38
Chi-Square Statistic
17:52
Chi-Square Statistic
17:53
Chi-Square Distributions
24:31
Chi-Square Distributions
24:32
Conditions for Chi-Square
28:58
Condition 1
28:59
Condition 2
30:20
Condition 3
30:32
Condition 4
31:47
Example 1: Chi-Square Goodness-of-Fit Test
32:23
Example 2: Chi-Square Goodness-of-Fit Test
44:34
Example 3: Which of These Statements Describe Properties of the Chi-Square Goodness-of-Fit Test?
56:06
Chi-Square Test of Homogeneity

51m 36s

Intro
0:00
0:09
0:10
Goodness-of-Fit vs. Homogeneity
1:13
Goodness-of-Fit HT
1:14
Homogeneity
2:00
Analogy
2:38
5:00
Null Hypothesis
5:01
Alternative Hypothesis
6:11
Example
6:33
Chi-Square Statistic
10:12
Same as Goodness-of-Fit Test
10:13
Set Up Data
12:28
Setting Up Data Example
12:29
Expected Frequency
16:53
Expected Frequency
16:54
Chi-Square Distributions & df
19:26
Chi-Square Distributions & df
19:27
Conditions for Test of Homogeneity
20:54
Condition 1
20:55
Condition 2
21:39
Condition 3
22:05
Condition 4
22:23
Example 1: Chi-Square Test of Homogeneity
22:52
Example 2: Chi-Square Test of Homogeneity
32:10
Section 13: Overview of Statistics
Overview of Statistics

18m 11s

Intro
0:00
0:07
0:08
The Statistical Tests (HT) We've Covered
0:28
The Statistical Tests (HT) We've Covered
0:29
Organizing the Tests We've Covered…
1:08
One Sample: Continuous DV and Categorical DV
1:09
Two Samples: Continuous DV and Categorical DV
5:41
More Than Two Samples: Continuous DV and Categorical DV
8:21
The Following Data: OK Cupid
10:10
The Following Data: OK Cupid
10:11
Example 1: Weird-MySpace-Angle Profile Photo
10:38
Example 2: Geniuses
12:30
Example 3: Promiscuous iPhone Users
13:37
Example 4: Women, Aging, and Messaging
16:07


### t Distributions

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

• Intro 0:00
• When to Use z vs. t? 1:07
• When to Use z vs. t?
• What is z and t? 3:02
• z-score and t-score: Commonality
• z-score and t-score: Formulas
• z-score and t-score: Difference
• Why not z? (Why t?) 7:24
• Why not z? (Why t?)
• But Don't Worry! 15:13
• Gossett and t-distributions
• Rules of t Distributions 17:05
• t-distributions are More Normal as n Gets Bigger
• t-distributions are a Family of Distributions
• Degrees of Freedom (df) 20:02
• Degrees of Freedom (df)
• t Family of Distributions 24:07
• t Family of Distributions: df = 2, 4, and 60
• df = 60
• df = 2
• How to Find It? 31:01
• 'Student's t-distribution' or 't-distribution'
• Excel Example
• Example 1: Which Distribution Do You Use? Z or t? 45:26
• Example 2: Friends on Facebook 47:41
• Example 3: t Distributions 52:15
• Example 4: t Distributions, Confidence Interval, and Mean 55:59

### Transcription: t Distributions

Hi and welcome to www.educator.com.0000

Today we are going to talk about t-distribution.0001

Previously, we learned that there are different situations where we use z and others where we use t.0004

Today we are going to talk about when to use z versus t.0011

We are going to break down and reflect on what z and t are.0015

What do they have in common and what is different about them?0022

For certain cases we are going to ask the question: why not z, why t instead?0024

What does z not have?0031

We will talk about rules of t distribution, they follow certain patterns and t distributions0035

are a family of distributions separated by degrees of freedom.0044

Different t distributions have different degrees of freedom.0049

We are going to talk about what are degrees of freedom?0053

We are going to talk about how degrees of freedom relates to that family of t distribution, and then finally summarize how to find t.0056

First off, when do we use z versus t?0065

We covered in the previous sections where we look at whether we knew the population parameters or not.0072

In hypothesis testing, we frequently do not know the mu of the population, but sometimes we are given sigma for some reason or another.0080

In this case we use z in order to figure out how many standard errors away from the mean we are in our SDOM.0091

But in other situations, we do not know what sigma is.0102

In that case we use t in order to figure out how many standard errors away our x bar is from our mu.0107

Just to draw that picture for you remember we are interested in the SDOM because the SDOM tends to be normal given certain conditions.0118

Although mu sub x bar = mu given the CLT, what we often want to know is if we have an x bar that falls here or an x bar that falls here.0126

We want to know how far away it is from the mu sub x bar.0147

In order to find that we would not just use the raw score and get the raw distance but we would want that distance in terms of standard deviation.0153

But because this is the SDOM, we call it the standard error.0165

We would either want a z or t and these numbers tell us how many standard errors away we are from this point right in the mu.0168

What is the z and t?0181

The commonality as we saw before is it tells us number of standard error away from mu sub x bar and that is common to both.0186

That is what the z score and t score both have in common.0208

Because of that their formula looked very much the same.0213

For instance, one way we can write the z formula is like this.0217

We have x bar - mu (or mu sub x bar; they are the same) and this gives us the distance in terms of just the raw values.0231

Just how many whatever inches away, points away, whatever it is.0251

Whatever your raw score means, degrees away divided by standard error.0258

If we double-click on that standard error and look at what is inside, then the standard error, also written as sigma sub x bar0264

because it is the standard deviation of a whole bunch of means, equals sigma ÷ √n.0275

If we look at the t score formula then we have almost the same formula.0284

We have that distance ÷ how big your little steps are, how big your standard deviations are.0294

But when we double-click on the standard error like something on the desktop, you double-click it and open it up what is inside?0302

Well, you could also write this one as s sub x bar and that would be s ÷ √n.0311

Here in lies this difference right there.0323

That is our difference.0325

Here the difference is that standard error found using the sigma, the true population standard deviation.0327

Obviously if you use the real deal that is better or more accurate than the standard error found using estimate population standard deviation.0345

That is s.0371

S is estimated from the sample and if we double clicked on s it would look like this.0375

It is that basic idea of all the squared deviations away from x bar, away from the mean of the sample.0383

(x sub i - x bar)².0395

We have all the squared deviations and we add them up, ÷ n - 1, because this is our estimate of the population standard deviation0402

and all of that under the square root sign in order to just leave us a standard deviation rather than variance.0414

This is an estimate of population standard deviation.0421

It is not the real deal, so it is not as accurate.0426

One thing you should know is that the z score is less variable and the t score is going to be more variable.0430

That is going to come in to bear on why we use which one.0438
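To make the two formulas concrete, here is a small Python sketch of the t-score calculation (the sample values, the hypothesized mu, and the variable names are made up for illustration; they are not from the lecture):

```python
import math

# Hypothetical sample of test scores (illustrative values only)
sample = [52, 87, 61, 70, 55]
n = len(sample)
x_bar = sum(sample) / n                                   # sample mean

# s: estimated population standard deviation (note the n - 1 denominator)
s = math.sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))

# Estimated standard error of the mean: s / sqrt(n)
se = s / math.sqrt(n)

# t-score: how many estimated standard errors x_bar lies from a hypothesized mu
mu = 60
t = (x_bar - mu) / se
```

If sigma were known, the same distance formula with sigma ÷ √n in the denominator would give a z-score instead; the only difference is which standard error goes on the bottom.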

Okay, so why not z?0443

When we have situations where we do not have the population standard deviation, why not z?0448

Why can we not just plug in s and proceed as before; why can we not do that?0458

Why do we use t?0466

It is because when we use s, something a little bit weird happens.0468

The weirdness comes from the fact that this s is much more variable than sigma.0474

Sometimes when we get our estimate, our estimate is spot on.0481

Sometimes when we get our estimate it is off.0485

That is what we mean when it is more variable.0489

It is not going to hit the nail and head everything single time.0491

It is going to vary in its accuracy.0495

Now z scores are normally distributed when SDOM is normal.0497

Here is what this means.0502

The way you can think about it is like this, when the SDOM is normal and we pick a bunch of points out0503

and find the standard error from those points and plot those, we will get another normal distribution.0516

But that is not necessarily the case for s.0523

Here we need to know that z scores are nice because z scores is going to perfectly cut off that normal distribution accurately for you.0530

Remember, the normal distribution it always has that probability underneath the pro and it has these little marks.0547

These can be set in terms of z scores.0557

What is nice about the SDOM when it is normal is that when we have the z score it will perfectly match to the proportion of the curve that it covers.0563

This will always match.0579

The problem is t scores do not match up in this way.0581

We might ask: why not just call a t score a z score and still use the same areas underneath the curve?0587

We cannot do that because that is just the superficial change.0600

Here is what we mean by the z scores are normally distributed.0603

When you get z scores and when we talk about normal distribution, I'm not just talking about that bell shaped curve.0611

Yes overall it should have that bell shaped general shape but it is a little more specific than that.0619

You can have the bell shaped and have the perfect normal distribution.0628

For instance, 1 standard deviation away, this area will give you 34% of the area underneath the curve.0635

That is a true normal distribution.0653

This on the other hand, it looks on the surface as if it is normally distributed.0656

It looks like that bell shaped curve, but it is not.0662

Here is why.0665

This area, I should have actually drawn it a little bit differently, but I want to show you that do not go by appearances.0666

Appearances can be deceiving.0677

This might actually be a little bit less than 34%.0678

It might be something like 25%.0685

If that was the case, you would see this area and that area is not 34%.0688

It is 25%.0700

Not only that, but this area is now a little bit more than 13 ½%; it is around 14%.0701

Now this area is not 2% but 11%.0710

Although it looks like a bell shaped curve, it is not quite a normal distribution because0715

it does not follow that empirical rule that we have talked about before.0722

What is nice about z scores is that z scores will always fall in this pattern.0726

These z scores will always correspond to these numbers.0731

That is why you could always use that z table in the back and rely on it.0735
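Those empirical-rule areas that the z table guarantees can be checked directly, since the standard normal curve is computable from the error function; here is an illustrative sketch (the helper name is my own):

```python
import math

def normal_area_between(a, b):
    # Area under the standard normal curve between z = a and z = b,
    # using the CDF Phi(z) = (1 + erf(z / sqrt(2))) / 2
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi(b) - phi(a)

within_one_sd = normal_area_between(0, 1)    # about 0.3413, the "34%"
one_to_two_sd = normal_area_between(1, 2)    # about 0.1359, the "13 1/2%"
beyond_two_sd = 0.5 - normal_area_between(0, 2)  # about 0.0228, the "2%"
```

For a true normal distribution these proportions always hold, which is exactly the property the t scores lack.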

The t scores are not going to do that for you.0739

T scores may not give you that perfect 34, 13 ½ and 2% sort of distribution.0746

Even though the SDOM might be normal, the t scores are not necessarily normal.0753

We have this normal thing and we have t scores; how do we go from t scores to defining this area underneath the curve?0762

That is the problem we have here.0772

It turns out that if n is big then this does not matter as much.0774

It n is really large, if your sample size is large then the t distribution approximates normal.0782

It goes towards normal but when n is small, then you have to worry.0788

Also, what about when n is in the middle, or when n is big but not truly large?0795

There are all these situations where you have to worry about the t as well as the area underneath the curve.0801

If the t scores are not normally distributed then we cannot calculate the area underneath the curve.0810

If we have our lovely SDOM and we know that the SDOM is nice and normal and we have our mu sub x bar here then everything is fine and dandy.0816

We have x bar here and we want to find that distance, and we find the t score.0832

The problem is we cannot translate from this directly into this area.0838

That is the problem we ran into.0844

Here what we see is something more like a t distribution than a z distribution.0847

I am just going to call the z distribution, basically, the normal distribution.0865

The t distribution is often a little bit smooched.0871

Think of having that perfect normal bell shape.0876

It is squishing the top of it down.0880

It makes that shape ball out a little bit.0882

It is not as sharply peaked but a little bit more variable.0888

We had said the s is more variable than the sigma.0895

It makes sense that the t comes from s is more variable than the z that comes from sigma.0902

You might be thinking, are we stuck?0911

We are not stuck and here is why.0921

William Gosset actually worked out all the t distributions as well.0924

He manually calculated a lot of the t distributions and made tables of the t distributions that we still use today.0928

He published those tables and under the pseudonym the student.0944

At the time he was working for the Guinness brewery, and he could not publish under his own name because they were sort of like, we do not want you to know who we are.0949

Our secret is our very dark beer.0957

He published under the pseudonym, and because of that some of the t distributions0960

in the back of your book may be labeled Student's t, referring to Gosset's t.0967

Here is what Gosset found: he found that t distributions can be reliable too.0973

You can know about them; it is just that you need more information than you need for the z distribution.0980

For z distribution you do not need to know anything.0988

You just need to know z and it will give you the probability.0990

Life is simple.0993

T distributions are not that simple, but not that complicated either.0995

They have a few more conditions to satisfy, and the biggest condition that you will have to know about is degrees of freedom.1002

Because for each degree of freedom there is a slightly different t distribution that goes along with it.1012

Let us talk about some of the rules that govern t distributions.1024

The first one you already know: t distributions get more normal as n gets bigger.1031

This makes sense if we step back and think about it for a second.1039

Imagine if n were the size of the entire population; then what would your s be?1042

If your sample is like the entire population then s should be much closer to the actual1054

population standard deviation much better than when n is small.1071

It is still a little off because of the n-1 thing but it is very close and that is the closest you can get.1077

t distributions are more normal as n gets bigger because s is a better estimate of sigma as n gets bigger.1085

That makes sense.1111

The problem all stems from s.1113

It is that variability: as s gets better, less variable, and more accurate to the population, then t gets better.1116

T is based on s.1128

That is why t distributions are more normal as n gets bigger.1130

t distributions are a family of distributions.1135

It is not just one distribution.1138

It is a whole bunch of them that are alike in some way and it depends on n.1140

It depends technically on degrees of freedom, but you can say it depends on n sometimes because degrees of freedom is often n - 1.1145

There are other kinds of degrees of freedom; this is the one you need to know for now.1154

But later on we will distinguish between different kinds of degrees of freedom.1159

Degrees of freedom is actually important as a general idea; here it is just the number of data points - 1.1163

We have a family of distributions.1174

They all look sort of alike.1178

They are all symmetrical and they are unimodal and they have that bell like shape, but they're not quite normal.1179

Not all of them.1190

As n gets bigger, or as degrees of freedom gets bigger the distribution becomes more and more normal.1191

Let us step back and talk a little bit about degrees of freedom first.1201

Let us assume there are three subjects in one sample, so n=3.1207

We know just by the blind formula n - 1 that degrees of freedom is 2, but what does this mean?1213

Here is the thing.1224

Let us assume there are three subjects in one sample and let us say it is some score on a statistics test.1228

They can score from 0 to 100, and if I say pick any 3 scores you want, those could be the subjects' scores.1235

Your degrees of freedom would be 3.1244

You are free to choose any 3 scores.1246

You are not limited.1249

You are not restricted in any way.1250

If you figure out any sample statistic, let us say the mean or variance.1253

If you figure out any sample statistic then if you randomly picked 2 of those scores you can no longer just pick the 3rd score freely.1261

You have to pick a particular score because you have already used up some of your freedom for the mean.1274

The mean will constrain which two scores you could pick.1283

This logic will become more important later.1288

Let us put some numbers in here.1292

Let us talk about the case when n= 3 and degrees of freedom = 3.1294

It would be like there are three subjects and they could score from 0 to 100.1299

I am totally free.1310

I can pick 87, 52, my last score I can pick anything I want.1314

I can pick 52 again, 100, or 0.1321

It does not matter.1325

I can just pick any score I want.1326

If I erase these other scores I will just put in a different score.1328

It does not matter.1333

I'm very free to vary.1335

But let us talk about the most situations that we have in statistics where we figure out summary statistics.1337

Here we have n=3 and degrees of freedom =2.1345

Here is why.1350

The score is the same, it can go from 0 to 100.1351

We also found the x bar =50.1358

If we found that the x bar = 50, then we cannot just take any score all 3 times.1363

Can we pick any score for the first one?1374

Yes I can pick 0.1377

Can I pick any score for the 2nd one?1379

Sure, I can pick 100.1383

Now that third score I cannot take any score.1386

If I pick 72 my mean would not be 50.1392

If I pick 42 my mean would not be 50.1394

If I pick another 0, my mean would not be 50.1399

That is the problem and because of that if this is my data set so far I have been free to vary.1403

I freely chose this guy but this last one I am locked in.1410

I have to choose 50.1415

That is the only way I can get a mean of 50.1417

That is what we call degrees of freedom.1420

This logic is going to become more important later on, but for now what you can think about is1423

because we are deriving other summary statistics from our sample we are not completely free to vary.1429

We locked ourselves down.1437

We pinned ourselves down and built little gates for us at the borders.1439
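That locking-in can be sketched numerically; assuming a required mean of 50 over three scores, as in the example above, the third score is fully determined once the first two are chosen (variable names are my own, for illustration):

```python
# With n = 3 scores and a required mean of 50, only two scores are free to vary.
required_mean = 50
n = 3
free_choices = [0, 100]                            # pick any two scores freely
forced = required_mean * n - sum(free_choices)     # the last score is locked in
scores = free_choices + [forced]
mean = sum(scores) / n                             # comes out to exactly 50
```

Change the two free choices to anything you like and the forced third value shifts to compensate, which is why the degrees of freedom here is 2, not 3.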

Now you know degrees of freedom and we know as degrees of freedom or n goes up we see more and more normal like distributions.1445

I have drawn three distributions here for you.1460

Here you might notice that I have used basically the same picture of a curve for all three of these.1462

You might think they have all the same distribution.1469

Not true, because you have to take a look at the way that I have labeled the t down here.1473

The way that I have labeled this x axis, or t axis in this case, really changes our interpretation of these curves.1482

Remember what the normal distribution says.1493

The normal distribution says 1 standard deviation to the right or positive side, 1 standard deviation1496

to the negative side that area should be about 68% of your entire curve.1502

Is it true here?1507

No it is not, this does not look like more than 50% of the curve.1510

This looks like maybe 1/3.1521

Maybe a little less than 1/3.1526

This is starting to look more like 60% of the curve, but still maybe not quite 68% of the curve.1528

It is still only looks like may be 50% of the curve or a little more.1539

Imagine if this area were shifted in toward the middle; then this would be more like 68% of the curve.1544

Something like this would be more like 60% of the curve.1560

That is how you can see that as your degrees of freedom increases it becomes more and more normal.1567

Even this is not quite normal.1582

This is not quite 68% but a little bit less actually.1585

As the DF gets bigger and bigger that area starts to look more and more like the normal distribution.1588

Now there is another way I can draw these pictures, and I believe in this other way you can see more easily that this is the more variable version.1598

Remember I am saying that t distribution is like you are stomping down on the peak of it and smooching it out a little bit.1615

I believe that if I draw the same picture in a slightly different way you will see why.1624

In this case, here is what I have done.1630

I have kept the t axis the same and now it is labeled in the same way, but I have drawn these distributions in a slightly different way.1634

Now this one is a little wider and this one is less wide and this one is even less wide.1647

It becomes more narrow, more like the normal distribution.1656

Notice that if I drew the line here, a little bit after 1 standard deviation away, we see there is a little of that curve left on the side.1661

You know if that is 50% and maybe 15%, 10%, something like that.1675

This might look more roughly equivalent to this, maybe a little bit less.1685

Maybe like 20%.1693

This looks like much more than this.1695

Maybe this is like 25 or 30% even compared to this.1700

In that way you can see using the same concepts the drawing and picture in a slightly different way that this distribution is much more variable.1706

It is spread is very wide.1719

Whereas this distribution is much less variable.1721

Remember t is all because of the variability found in s.1725

When n is very small, s is very variable, so the t distribution is also quite variable.1731

As n gets bigger, s gets more and more accurate, more like the actual standard deviation of the population.1741

And because of that, it becomes more and more normal.1752

Let us break this one down.1755

In degrees of freedom of 60, here is what it might look like.1761

It might look like something that is very close to our 34, 13 ½, 2% normal distribution.1769

If we drew our little lines there, that would probably look very close to this picture.1777

It looks pretty close.1792

When we draw something like this, this area might only be 25% of this whole curve.1797

These other areas combined are also 25%.1810

If I split this like this, then this would be something like 14%.1817

A little bit less than this but still quite a bit.1826

This one might even be more than 14%, maybe like 18%.1832

As you can see, in this distribution I have drawn it like this and just labeled it differently.1840

In reality, it will look more like this if you kept this t axis to be constant.1849

It will look sort of smooched out.1855

How do you find t at the end of the day?1859

How do you find the t and not only that how do you find the probability associated with that t?1867

For instance, where t is greater than 2?1874

How do you find these probabilities?1878

We know how to do it for z but how do you do it for t?1881

One thing that you could do is you can look at the back of your book usually in the appendix section1884

there is something called the t distribution or the students t distributions that you can look at.1892

Oftentimes it will have degrees of freedom on one side like 2, 3, 4, 5 all the way down and then it will show you either one tailed or two tailed area.1898

It might give you .25, .10 and .05, .025.1914

It might give you these areas.1926

The number right here tells you the t score at that place.1929

If you wanted to know where the 25% cutoff is, what the t score is for the degrees of freedom = 2 distribution, you would look right here.1935

If you wanted to know it for .025 then you would look here.1962

You want to look for degrees of freedom, as well as how much of the curve you're trying to cover.1975

That is definitely one way to do it.1984

The other way you could do it is by using Excel and just like how Excel will help you find probabilities1988

and z scores for the standardized normal distribution you can also find it in Excel for the t distribution.1995

It needs a couple of hints.2003

Let us start off with TDIST.2006

TDIST is for the case where you want to find the probability but you have everything else.2012

Here is what TDIST will do if you put in the degrees of freedom and you put in the actual x value.2019

You can think of the x value as the t value and it will only take positive t values.2033

For instance, a t value of 1 and the number of tails if you want this entire area or you just want that area alone.2039

You can either put in one or two then it will give you the probability of this area.2058

I can show you right here.2066

Let us put in TDIST for t = 1 and degrees of freedom 2, and let us look at what it might say for two tails.2070

It will say 42%, and if you look at this exact same thing for one tail, it will just divide this area in half.2098

21%, half of 42%: that makes sense.2111

Basically this is giving you this area + this area if you want 2 tails.2116

But if you only want one tail it will just give you this area.2122
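For degrees of freedom 2 the t distribution happens to have a simple closed-form CDF, F(t) = 1/2 + t / (2√(2 + t²)), so we can check these TDIST numbers outside Excel. A minimal sketch in Python (the function names here are my own, not Excel's):

```python
import math

def t_cdf_df2(t):
    # Closed-form CDF of the t distribution with 2 degrees of freedom:
    # F(t) = 1/2 + t / (2 * sqrt(2 + t^2))
    return 0.5 + t / (2 * math.sqrt(2 + t * t))

def tdist_df2(t, tails):
    # Mimics Excel's TDIST(t, 2, tails) for positive t:
    # tails=1 gives the upper-tail area, tails=2 doubles it.
    one_tail = 1 - t_cdf_df2(t)
    return tails * one_tail

two_tail = tdist_df2(1, 2)   # about 0.42, as in the lesson
one_tail = tdist_df2(1, 1)   # about 0.21, exactly half of the two-tailed area
```

As the lesson says, the one-tailed answer is just the two-tailed answer cut in half.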

We know that for a 95% confidence interval we often use a z score of 1.96, and that will give us a tail of .025, or if we count two tails, 5%.

Let us see what this gives for 1.96 when we have a degrees of freedom of only 2.

Let us put in 1.96.

If we put that z score in with 2 tails, we would only get 5%, but let us see what we get here.

Degrees of freedom 2, and for number of tails let us put in 2.

Do you think this should be more or less than 5%?

The t distribution is slightly squished; it is more spread out, and because of that it is going to have this longer tail.

It is not going to be nice and compact in the middle.

We would imagine that it has a fat tail.

I would say more than 5%.

We see that it is almost 20% at a t of 1.96.

Let us put that same z score in.

NORMSDIST is what we use whenever we want the probability; put in 1.96.

Here we get the cumulative area, so we want 1 minus that, and that gives us just one tail.

I am going to change the TDIST to 1 tail so we can compare them.

Here, on one side of the t distribution, almost 9 1/2% is still out in the tail.

But when we use the z score, only 2 1/2% is still out there.
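That side-by-side comparison at 1.96 can be reproduced with Python's standard library; `NormalDist` plays the role of NORMSDIST, and the df = 2 closed form stands in for TDIST (again, these helper names are mine, a sketch rather than Excel's actual algorithm):

```python
import math
from statistics import NormalDist

def t_upper_tail_df2(t):
    # One-tailed area beyond t for the t distribution with df = 2,
    # from the closed-form CDF F(t) = 1/2 + t / (2 * sqrt(2 + t^2)).
    return 0.5 - t / (2 * math.sqrt(2 + t * t))

t_tail = t_upper_tail_df2(1.96)          # almost 9.5% still out in the t tail
z_tail = 1 - NormalDist().cdf(1.96)      # only about 2.5% for the z
```

The fat tail of the df = 2 distribution leaves nearly four times as much area beyond 1.96 as the normal does.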

Let us look at the same t distribution for a very high degrees of freedom.

Let us try 60.

Even with something like 60 we are starting to get very close to the z distribution, but still this one is more variable than the z distribution.

Let us see if we can go even higher.

Instead of 60 I am going to put in 120.

Notice we are getting closer, but still these are more variable than those.

Let us go further still.

Let us try 1000 and see what happens there.

We are getting close, but still slightly more variable.

That is a good principle for us to know.

The t distribution, although it approximates the normal, approximates it from one side.

Here is the standard normal tail probability, .02499.

There it is, and these t values are getting closer and closer to it, but approaching it from the high end.

These numbers are dropping and getting really close to that, but not quite hitting it.
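This approach-from-above can be checked numerically. A sketch of a general one-tailed t probability (my own helper, not an Excel function): it integrates the t density with Simpson's rule, using `lgamma` for the normalizing constant so large degrees of freedom do not overflow:

```python
import math
from statistics import NormalDist

def t_upper_tail(t, df, steps=20000):
    # One-tailed area beyond t for a t distribution with df degrees of
    # freedom: Simpson's-rule integration of the t density from 0 to t.
    c = math.exp(math.lgamma((df + 1) / 2) - math.lgamma(df / 2))
    c /= math.sqrt(df * math.pi)
    pdf = lambda x: c * (1 + x * x / df) ** (-(df + 1) / 2)
    h = t / steps
    area = pdf(0) + pdf(t)
    for i in range(1, steps):
        area += (4 if i % 2 else 2) * pdf(i * h)
    return 0.5 - area * h / 3          # tail = 0.5 minus area from 0 to t

z_tail = 1 - NormalDist().cdf(1.96)    # about .02500 for the z
tails = [t_upper_tail(1.96, df) for df in (2, 60, 120, 1000)]
# each entry shrinks toward z_tail but every one stays above it
```

The list of tail probabilities drops steadily toward the normal's .025 as the degrees of freedom climb, without ever dipping below it.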

Now you know how to get the probabilities, but what if you have the probability and you want to find the t score?

What would you do?

In this case, you would use TINV; the "inv" is for inverse.

Here you would put in the two-tailed probability.

Let us say we want to know: what is the t boundary if we wanted only 5% in our tails?

Here is the situation I am talking about.

We have this distribution, and we know we want each of these tails to be .025, just like with the z distribution.

We want each to be .025, but we want to know what these boundary numbers are.

We want to know what these numbers are.

It depends on your degrees of freedom.

Let us try degrees of freedom of 2, 60, 120, and 1000.

Let me label this.

Here we get the probabilities from TDIST, and here are the probabilities from the standardized normal distribution, or the z distribution.

We do not want the probabilities; we actually want the t boundaries themselves and the z boundaries themselves.

If we want the z boundary at .025 in one tail, or 5% across two tails, we would use NORMSINV and put in our probability.

I forget whether it takes a one-tailed or a two-tailed probability.

Let us try the one-tailed probability.

We get very close to -1.96.

We just have to memorize that; that is why this is saying that at -1.96 you have about 2 1/2% in that little tail.

Excel is inconsistent here: for z it gives you the boundary on the negative side, but for t it only gives you the positive side.

That is confusing, and I often do not memorize it.

I just try out a couple of things until it spits out the thing I am looking for.

You have to understand how these things work so that you can predict what is going on.

We will use TINV; we give it the probability, and I believe it is two-tailed.

.05 and degrees of freedom of 2.

We put in .05 and the degrees of freedom, just to test whether it takes a one-tailed or two-tailed probability.

Let me put that in.

I believe you have to give it two tails.

You have to put in the two-tailed probability here, so that is .05, plus the degrees of freedom, 2, and this will give us these boundaries.

It will only give us the positive boundary, but because the distribution is symmetrical, you automatically know the other side.

This gives us a boundary of 4.3.

Remember, for the z score this boundary would be 1.96, but for a t distribution with degrees of freedom of 2, it is 4.3.

That is quite high because, remember, this distribution is really spread out.

You have to go way out far in order to get down to just that 2 1/2% in the tail.
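For df = 2 this TINV lookup can even be inverted algebraically: solving the closed-form CDF for the boundary gives t = p·√(2/(1 - p²)), where p = 1 - α is the central probability. A sketch (helper name is mine):

```python
import math

def tinv_df2(alpha):
    # Two-tailed inverse of the t distribution with df = 2:
    # inverts the closed-form CDF F(t) = 1/2 + t / (2 * sqrt(2 + t^2)),
    # so that a fraction alpha of the area is split between the two tails.
    p = 1 - alpha                      # central probability, e.g. 0.95
    return p * math.sqrt(2 / (1 - p * p))

t_boundary = tinv_df2(0.05)   # about 4.30, versus 1.96 for the z
```

The boundary of roughly 4.30 matches the TINV(.05, 2) result in the lesson, and it is more than twice the z boundary of 1.96.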

What do we get, then, for degrees of freedom of 60?

We get something very close to 1.96, but a little bigger than 1.96.

Remember, because the t distribution is more variable, you have to go farther out in order to capture just that small tail of .025.

That means 2.5%, or .025.

If we go to 120, we should expect that boundary to come closer and closer to 1.96 from the big side, but not quite hit 1.96, or more precisely 1.9599.

We are getting close to that 1.96 number, but it is still a little bit higher.

Finally, we will go buck wild and put in degrees of freedom of 1000; we get something very close to 1.96, but still a little higher than 1.96.

Those are two different ways that you can find the t score, as well as the probability that the t score is associated with.

Remember, you have to know whether you want a two-tailed or a one-tailed probability, as well as your degrees of freedom.

That is what you will have to know in order to look things up on a t distribution.

Let us go on to some examples.

In each of these situations, which distribution do you use, the z or the t?

There are 500 million people on Facebook; how many people have fewer friends than Diana, who has 490 friends?

Assume that the number of friends on Facebook is normally distributed, and here they give you the sigma.

We know that we can use the z distribution here.

Here the researchers want to compare a given sample of Facebook users' average number of friends, a sample of 25, to the entire population.

What proportion of sample means will be equal to or greater than the mean of this group?

N = 25, and the sample mean is 580.

They have an average of 580 friends.

Here I would not immediately use z, because I do not have the standard deviation.

Maybe this is connected to the previous problem.

If so, if I assume that they come from the whole population, then the problem gives us the information for the whole population.

If sigma = 100, then I will use z.

This problem probably left out some information.

Researchers want to know the 95% confidence interval for tagged photos, given that a sample of 32 people have an average of 185 tagged photos and a standard deviation of 112.

Here it is very clear: I know s, but I do not know the sigma for tagged photos.

I only know the sigma for friends, but not for tagged photos.

In this case, what I would do is use the t distribution, because I will have to estimate the population standard deviation from the sample standard deviation.

In Example 2 we get that same problem, and we just have to solve it.

There are 500 million people on Facebook, but how many people have fewer friends than Diana?

Here it is good to know that we do not need a sampling distribution of the mean.

We do not need the SDOM.

In fact, we are just using the population and Diana.

We can draw the population, and it tells us that the population is normally distributed.

The number of friends is normally distributed, so mu = 600 and the standard deviation is 100.

This little space is 100, so this would be 700.

Diana has 490 friends, so here would be 500.

It is asking: how many people have fewer friends than Diana?

How many have that?

It is tricky, because the z table will give us the proportion, but it will not give us how many people.

What we will have to do is multiply that proportion by the 500 million.

This is all 500,000,000, and that is 100%.

We need to know the proportion of them that have fewer friends than Diana, fewer than 490.

We will have to figure that out, and then multiply 500 million by that percentage.

Let us get cracking.

We can figure out the z score for Diana, and that would be 490 - 600, ÷ 100.

I would only need the standard error if I were using the SDOM, but here I am using the population standard deviation.

It is often helpful to draw this.

Here we have -110 ÷ 100 = -1.1.

The z score is -1.1, and I want to know the proportion of people who have fewer friends than Diana.

You can look this up in the back of your book, so I would just look up the z score of -1.1, or you could put it into Excel: NORMSDIST(-1.1).

I should get about .1357, so that would be .1357.

That is about 13 1/2% of the population having fewer friends than Diana.

What I want is only that 13 1/2% of the entire population, and that would be 500 million × .1357.

You can do this on a calculator: that × 500 million = 67.83 million.

Do not forget the million part.

It is not that only 67 people have fewer friends than Diana.

That is our answer right there.
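The whole Diana calculation fits in a few lines of Python, with `NormalDist` standing in for NORMSDIST:

```python
from statistics import NormalDist

# Population of friend counts: mu = 600, sigma = 100 (from the problem).
z = (490 - 600) / 100                 # Diana's z score: -1.1
proportion = NormalDist().cdf(z)      # about .1357, like NORMSDIST(-1.1)
people = proportion * 500_000_000     # about 67.8 million people
```

Multiplying the proportion by the 500 million Facebook users turns the table lookup into an actual head count.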

The researchers want to compare a given sample of Facebook users' average number of friends, a sample of 25, to the whole population.

What proportion of sample means will be equal to or greater than the mean of this group?

Here I am going to assume, because there is no other way to do this problem,

that we can use the information from Example 2, because we are talking about the same thing, the number of friends.

We actually know the population.

The population is approximately normally distributed with a mu of 600 and a standard deviation of 100.

Mu = 600, standard deviation = 100, and from this I need to generate an SDOM, because now we are talking about samples of people, not just one person at a time.

Because of that, I need to generate the SDOM for n = 25.

The nice thing is that we already know mu sub x bar = mu, that is 600, but we also know the standard error, because the standard error is the standard deviation ÷ √n.

In this case, it is 100 ÷ √25 = 20.

1 standard error away here is 20.

This would be 580, 560, and so forth.

It is asking: what proportion of sample means will be equal to or greater than the mean of this group?

Equal to or greater than means all of these, and since they are just asking for a proportion, we do not have to do anything more once we get the answer.

It might be nice if we could actually get the z score on this SDOM.

Here, instead of just putting 580, I would want to find the z score.

These are friends, but I want to know it in terms of a z score.

It is actually really easy, because 580 is a z score of -1, and we can just use the empirical rule to find this out: we know that above the mean, the expected value, there is 50%, and this piece from -1 to the mean is 34%.

If we add those together, the proportion of sample means greater than or equal to the mean of this group, which equals the proportion where the z score is greater than or equal to -1, is .84, or 84%.
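The same SDOM reasoning, sketched in Python; the exact normal CDF confirms the empirical-rule answer of roughly 84%:

```python
import math
from statistics import NormalDist

mu, sigma, n = 600, 100, 25
se = sigma / math.sqrt(n)             # standard error = 100 / 5 = 20
z = (580 - mu) / se                   # z score of the group mean: -1
proportion = 1 - NormalDist().cdf(z)  # P(Z >= -1), about .84
```

The empirical rule's 50% + 34% is just a rounded version of this exact .8413.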

Final example: researchers want to know the 95% confidence interval for tagged photos, given that a sample of 32 people have an average of 185 tagged photos and a standard deviation of 112.

Interpret what the CI means.

Here we do not know anything about the population, but we do know x bar, which is 185, and we do know the standard deviation of the sample, s, which is 112.

We also know n is 32.

Remember, when we talk about a confidence interval we want to go from the sample to figure out where the population mean might be.

What we do is set up a pretend SDOM here, and we assume that x bar is going to equal the expected value of this SDOM, which is 185.

From there we can estimate the standard error by using s.

Here mu sub x bar = 185; this is assumed. We do not have sigma, so s sub x bar is s ÷ √n = 112 ÷ √32.

If you pull up a calculator you can just calculate that out: 112 ÷ √32 = 19.8.

We know how far the jumps are, and because we used s we cannot just find the z score; we have to find the t score.

We will have to use the t score in order to create a 95% confidence interval.

Although, I do not know what the t distribution for degrees of freedom of 32 - 1 looks like.

I do not know what the degrees-of-freedom-31 t distribution looks like.

We will have to figure that out.

What we eventually want is for each of these tails to be .025.

Together these are a combined two-tailed probability of 5%, and we will have to use TINV because we already know the probability.

We want to go backwards to find the t.

TINV: we put in our two-tailed probability, .05, and put in our degrees of freedom, which in this case is 31.

We ask what the t is, and it says it is 2.04.

The t right here at these borders is 2.04, and because the distribution is symmetrical, we also know that this one is -2.04.

In order to find the confidence interval, we are really looking for these raw values right here.

To get those, we take the middle point, add 2.04 standard errors to get out here, and subtract 2.04 standard errors to get out here.

The confidence interval will be x bar plus or minus the t score,

how many jumps, multiplied by how big those jumps actually are, and that is the standard error right here, s sub x bar.

If we put in our numbers, that is going to be 185 ± 2.04 × 19.8.

If you just pull out a calculator, we can get this.

Make sure to type the = sign (even I forget sometimes): =185 + 2.04 × 19.8, and remember that Excel knows order of operations.

It will do the multiplication part before it does the addition part.

The upper limit will be 225.39 and the lower limit will be 144.61.

Rounded to the nearest tenth, this would be 225.4 and this would be 144.6.

We need to interpret what the CI means.

This means that there is a 95% chance that the population mean will fall between 144.6 and 225.4; that is the interval.
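The arithmetic for this interval, sketched in Python; the t multiplier 2.04 is taken as given from the TINV(.05, 31) lookup above:

```python
import math

x_bar, s, n = 185, 112, 32
se = s / math.sqrt(n)        # estimated standard error, about 19.8
t = 2.04                     # TINV(.05, 31), rounded as in the lesson
lower = x_bar - t * se       # about 144.6
upper = x_bar + t * se       # about 225.4
```

Using the estimated standard error with a t multiplier, rather than a z multiplier of 1.96, makes the interval a little wider, which is the price of estimating sigma from s.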

That is it for t distributions.

Thank you for using www.educator.com.
