Dr. Ji Son

Expected Value & Variance of Probability Distributions

Table of Contents

I. Introduction
Descriptive Statistics vs. Inferential Statistics

25m 31s

Intro
0:00
Roadmap
0:10
Roadmap
0:11
Statistics
0:35
Statistics
0:36
Let's Think About High School Science
1:12
Measurement and Find Patterns (Mathematical Formula)
1:13
Statistics = Math of Distributions
4:58
Distributions
4:59
Problematic… but also GREAT
5:58
Statistics
7:33
How is It Different from Other Specializations in Mathematics?
7:34
Statistics is Fundamental in Natural and Social Sciences
7:53
Two Skills of Statistics
8:20
Description (Exploration)
8:21
Inference
9:13
Descriptive Statistics vs. Inferential Statistics: Apply to Distributions
9:58
Descriptive Statistics
9:59
Inferential Statistics
11:05
Populations vs. Samples
12:19
Populations vs. Samples: Is it the Truth?
12:20
Populations vs. Samples: Pros & Cons
13:36
Populations vs. Samples: Descriptive Values
16:12
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:10
Putting Together Descriptive/Inferential Stats & Populations/Samples
17:11
Example 1: Descriptive Statistics vs. Inferential Statistics
19:09
Example 2: Descriptive Statistics vs. Inferential Statistics
20:47
Example 3: Sample, Parameter, Population, and Statistic
21:40
Example 4: Sample, Parameter, Population, and Statistic
23:28
II. About Samples: Cases, Variables, Measurements
About Samples: Cases, Variables, Measurements

32m 14s

Intro
0:00
Data
0:09
Data, Cases, Variables, and Values
0:10
Rows, Columns, and Cells
2:03
Example: Aircrafts
3:52
How Do We Get Data?
5:38
Research: Question and Hypothesis
5:39
Research Design
7:11
Measurement
7:29
Research Analysis
8:33
Research Conclusion
9:30
Types of Variables
10:03
Discrete Variables
10:04
Continuous Variables
12:07
Types of Measurements
14:17
Types of Measurements
14:18
Types of Measurements (Scales)
17:22
Nominal
17:23
Ordinal
19:11
Interval
21:33
Ratio
24:24
Example 1: Cases, Variables, Measurements
25:20
Example 2: Which Scale of Measurement is Used?
26:55
Example 3: What Kind of a Scale of Measurement is This?
27:26
Example 4: Discrete vs. Continuous Variables
30:31
III. Visualizing Distributions
Introduction to Excel

8m 9s

Intro
0:00
Before Visualizing Distribution
0:10
Excel
0:11
Excel: Organization
0:45
Workbook
0:46
Column x Rows
1:50
Tools: Menu Bar, Standard Toolbar, and Formula Bar
3:00
Excel + Data
6:07
Excel and Data
6:08
Frequency Distributions in Excel

39m 10s

Intro
0:00
Roadmap
0:08
Data in Excel and Frequency Distributions
0:09
Raw Data to Frequency Tables
0:42
Raw Data to Frequency Tables
0:43
Frequency Tables: Using Formulas and Pivot Tables
1:28
Example 1: Number of Births
7:17
Example 2: Age Distribution
20:41
Example 3: Height Distribution
27:45
Example 4: Height Distribution of Males
32:19
Frequency Distributions and Features

25m 29s

Intro
0:00
Roadmap
0:10
Data in Excel, Frequency Distributions, and Features of Frequency Distributions
0:11
Example #1
1:35
Uniform
1:36
Example #2
2:58
Unimodal, Skewed Right, and Asymmetric
2:59
Example #3
6:29
Bimodal
6:30
Example #4a
8:29
Symmetric, Unimodal, and Normal
8:30
Point of Inflection and Standard Deviation
11:13
Example #4b
12:43
Normal Distribution
12:44
Summary
13:56
Uniform, Skewed, Bimodal, and Normal
13:57
Sketch Problem 1: Driver's License
17:34
Sketch Problem 2: Life Expectancy
20:01
Sketch Problem 3: Telephone Numbers
22:01
Sketch Problem 4: Length of Time Used to Complete a Final Exam
23:43
Dotplots and Histograms in Excel

42m 42s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Previously
1:02
Data, Frequency Table, and Visualization
1:03
Dotplots
1:22
Dotplots Excel Example
1:23
Dotplots: Pros and Cons
7:22
Pros and Cons of Dotplots
7:23
Dotplots Excel Example Cont.
9:07
Histograms
12:47
Histograms Overview
12:48
Example of Histograms
15:29
Histograms: Pros and Cons
31:39
Pros
31:40
Cons
32:31
Frequency vs. Relative Frequency
32:53
Frequency
32:54
Relative Frequency
33:36
Example 1: Dotplots vs. Histograms
34:36
Example 2: Age of Pennies Dotplot
36:21
Example 3: Histogram of Mammal Speeds
38:27
Example 4: Histogram of Life Expectancy
40:30
Stemplots

12m 23s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
What Sets Stemplots Apart?
0:46
Data Sets, Dotplots, Histograms, and Stemplots
0:47
Example 1: What Do Stemplots Look Like?
1:58
Example 2: Back-to-Back Stemplots
5:00
Example 3: Quiz Grade Stemplot
7:46
Example 4: Quiz Grade & Afterschool Tutoring Stemplot
9:56
Bar Graphs

22m 49s

Intro
0:00
Roadmap
0:05
Roadmap
0:08
Review of Frequency Distributions
0:44
Y-axis and X-axis
0:45
Types of Frequency Visualizations Covered so Far
2:16
Introduction to Bar Graphs
4:07
Example 1: Bar Graph
5:32
Example 1: Bar Graph
5:33
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:07
Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs?
11:08
Example 2: Create a Frequency Visualization for Gender
14:02
Example 3: Cases, Variables, and Frequency Visualization
16:34
Example 4: What Kind of Graphs are Shown Below?
19:29
IV. Summarizing Distributions
Central Tendency: Mean, Median, Mode

38m 50s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
Central Tendency 1
0:56
Way to Summarize a Distribution of Scores
0:57
Mode
1:32
Median
2:02
Mean
2:36
Central Tendency 2
3:47
Mode
3:48
Median
4:20
Mean
5:25
Summation Symbol
6:11
Summation Symbol
6:12
Population vs. Sample
10:46
Population vs. Sample
10:47
Excel Examples
15:08
Finding Mode, Median, and Mean in Excel
15:09
Median vs. Mean
21:45
Effect of Outliers
21:46
Relationship Between Parameter and Statistic
22:44
Type of Measurements
24:00
Which Distributions to Use With
24:55
Example 1: Mean
25:30
Example 2: Using Summation Symbol
29:50
Example 3: Average Calorie Count
32:50
Example 4: Creating an Example Set
35:46
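The lesson above works its mode, median, and mean examples in Excel; as a minimal companion sketch, the same three measures of central tendency in Python (the scores below are illustrative, not taken from the lecture):

```python
from statistics import mean, median, mode

# A small set of scores to summarize (illustrative numbers).
scores = [2, 3, 3, 5, 7, 10]

score_mean = mean(scores)      # arithmetic mean: sum / count
score_median = median(scores)  # middle value (average of the two middles for even n)
score_mode = mode(scores)      # most frequent value
```

With an even number of scores, the median averages the two middle values, which is why it need not be a value in the data set.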
Variability

42m 40s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Variability (or Spread)
0:45
Variability (or Spread)
0:46
Things to Think About
5:45
Things to Think About
5:46
Range, Quartiles and Interquartile Range
6:37
Range
6:38
Interquartile Range
8:42
Interquartile Range Example
10:58
Interquartile Range Example
10:59
Variance and Standard Deviation
12:27
Deviations
12:28
Sum of Squares
14:35
Variance
16:55
Standard Deviation
17:44
Sum of Squares (SS)
18:34
Sum of Squares (SS)
18:35
Population vs. Sample SD
22:00
Population vs. Sample SD
22:01
Population vs. Sample
23:20
Mean
23:21
SD
23:51
Example 1: Find the Mean and Standard Deviation of the Variable Friends in the Excel File
27:21
Example 2: Find the Mean and Standard Deviation of the Tagged Photos in the Excel File
35:25
Example 3: Sum of Squares
38:58
Example 4: Standard Deviation
41:48
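A minimal Python sketch of the quantities this lesson develops — sum of squares, then population vs. sample variance and standard deviation (the data values are illustrative):

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative values
n = len(data)
m = sum(data) / n

# Sum of squares (SS): sum of squared deviations from the mean.
ss = sum((x - m) ** 2 for x in data)

pop_var = ss / n               # population variance: divide by N
samp_var = ss / (n - 1)        # sample variance: divide by n - 1
pop_sd = math.sqrt(pop_var)    # population standard deviation
samp_sd = math.sqrt(samp_var)  # sample standard deviation
```

Dividing by n − 1 instead of N is what distinguishes the sample SD from the population SD, as the "Population vs. Sample SD" slides discuss.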
Five Number Summary & Boxplots

57m 15s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Summarizing Distributions
0:37
Shape, Center, and Spread
0:38
5 Number Summary
1:14
Boxplot: Visualizing 5 Number Summary
3:37
Boxplot: Visualizing 5 Number Summary
3:38
Boxplots on Excel
9:01
Using 'Stocks' and Using Stacked Columns
9:02
Boxplots on Excel Example
10:14
When are Boxplots Useful?
32:14
Pros
32:15
Cons
32:59
How to Determine Outlier Status
33:24
Rule of Thumb: Upper Limit
33:25
Rule of Thumb: Lower Limit
34:16
Signal Outliers in an Excel Data File Using Conditional Formatting
34:52
Modified Boxplot
48:38
Modified Boxplot
48:39
Example 1: Percentage Values & Lower and Upper Whisker
49:10
Example 2: Boxplot
50:10
Example 3: Estimating IQR From Boxplot
53:46
Example 4: Boxplot and Missing Whisker
54:35
Shape: Calculating Skewness & Kurtosis

41m 51s

Intro
0:00
Roadmap
0:16
Roadmap
0:17
Skewness Concept
1:09
Skewness Concept
1:10
Calculating Skewness
3:26
Calculating Skewness
3:27
Interpreting Skewness
7:36
Interpreting Skewness
7:37
Excel Example
8:49
Kurtosis Concept
20:29
Kurtosis Concept
20:30
Calculating Kurtosis
24:17
Calculating Kurtosis
24:18
Interpreting Kurtosis
29:01
Leptokurtic
29:35
Mesokurtic
30:10
Platykurtic
31:06
Excel Example
32:04
Example 1: Shape of Distribution
38:28
Example 2: Shape of Distribution
39:29
Example 3: Shape of Distribution
40:14
Example 4: Kurtosis
41:10
Normal Distribution

34m 33s

Intro
0:00
Roadmap
0:13
Roadmap
0:14
What is a Normal Distribution
0:44
The Normal Distribution As a Theoretical Model
0:45
Possible Range of Probabilities
3:05
Possible Range of Probabilities
3:06
What is a Normal Distribution
5:07
Can Be Described By
5:08
Properties
5:49
'Same' Shape: Illusion of Different Shape!
7:35
'Same' Shape: Illusion of Different Shape!
7:36
Types of Problems
13:45
Example: Distribution of SAT Scores
13:46
Shape Analogy
19:48
Shape Analogy
19:49
Example 1: The Standard Normal Distribution and Z-Scores
22:34
Example 2: The Standard Normal Distribution and Z-Scores
25:54
Example 3: Sketching a Normal Distribution
28:55
Example 4: Sketching a Normal Distribution
32:32
Standard Normal Distributions & Z-Scores

41m 44s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
A Family of Distributions
0:28
Infinite Set of Distributions
0:29
Transforming Normal Distributions to 'Standard' Normal Distribution
1:04
Normal Distribution vs. Standard Normal Distribution
2:58
Normal Distribution vs. Standard Normal Distribution
2:59
Z-Score, Raw Score, Mean, & SD
4:08
Z-Score, Raw Score, Mean, & SD
4:09
Weird Z-Scores
9:40
Weird Z-Scores
9:41
Excel
16:45
For Normal Distributions
16:46
For Standard Normal Distributions
19:11
Excel Example
20:24
Types of Problems
25:18
Percentage Problem: P(x)
25:19
Raw Score and Z-Score Problems
26:28
Standard Deviation Problems
27:01
Shape Analogy
27:44
Shape Analogy
27:45
Example 1: Deaths Due to Heart Disease vs. Deaths Due to Cancer
28:24
Example 2: Heights of Male College Students
33:15
Example 3: Mean and Standard Deviation
37:14
Example 4: Finding Percentage of Values in a Standard Normal Distribution
37:49
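The z-score relationship this lesson covers ("Z-Score, Raw Score, Mean, & SD") can be sketched in a few lines of Python; the mean and SD below are SAT-style illustrative numbers, not figures from the lecture:

```python
# z-score: how many standard deviations a raw score lies from the mean.
def z_score(x, mu, sigma):
    return (x - mu) / sigma

# Inverse: recover the raw score from a z-score.
def raw_score(z, mu, sigma):
    return mu + z * sigma

# Illustrative distribution: mean 500, SD 100.
z = z_score(650, 500, 100)     # 1.5 SDs above the mean
x = raw_score(-2.0, 500, 100)  # raw score two SDs below the mean
```

Transforming every raw score this way maps any normal distribution onto the standard normal distribution (mean 0, SD 1).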
Normal Distribution: PDF vs. CDF

55m 44s

Intro
0:00
Roadmap
0:15
Roadmap
0:16
Frequency vs. Cumulative Frequency
0:56
Frequency vs. Cumulative Frequency
0:57
Frequency vs. Cumulative Frequency
4:32
Frequency vs. Cumulative Frequency Cont.
4:33
Calculus in Brief
6:21
Derivative-Integral Continuum
6:22
PDF
10:08
PDF for Standard Normal Distribution
10:09
PDF for Normal Distribution
14:32
Integral of PDF = CDF
21:27
Integral of PDF = CDF
21:28
Example 1: Cumulative Frequency Graph
23:31
Example 2: Mean, Standard Deviation, and Probability
24:43
Example 3: Mean and Standard Deviation
35:50
Example 4: Age of Cars
49:32
V. Linear Regression
Scatterplots

47m 19s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
Previous Visualizations
0:30
Frequency Distributions
0:31
Compare & Contrast
2:26
Frequency Distributions Vs. Scatterplots
2:27
Summary Values
4:53
Shape
4:54
Center & Trend
6:41
Spread & Strength
8:22
Univariate & Bivariate
10:25
Example Scatterplot
10:48
Shape, Trend, and Strength
10:49
Positive and Negative Association
14:05
Positive and Negative Association
14:06
Linearity, Strength, and Consistency
18:30
Linearity
18:31
Strength
19:14
Consistency
20:40
Summarizing a Scatterplot
22:58
Summarizing a Scatterplot
22:59
Example 1: Gapminder.org, Income x Life Expectancy
26:32
Example 2: Gapminder.org, Income x Infant Mortality
36:12
Example 3: Trend and Strength of Variables
40:14
Example 4: Trend, Strength and Shape for Scatterplots
43:27
Regression

32m 2s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Linear Equations
0:34
Linear Equations: y = mx + b
0:35
Rough Line
5:16
Rough Line
5:17
Regression - A 'Center' Line
7:41
Reasons for Summarizing with a Regression Line
7:42
Predictor and Response Variable
10:04
Goal of Regression
12:29
Goal of Regression
12:30
Prediction
14:50
Example: Servings of Milk Per Year Shown By Age
14:51
Interpolation
17:06
Extrapolation
17:58
Error in Prediction
20:34
Prediction Error
20:35
Residual
21:40
Example 1: Residual
23:34
Example 2: Large and Negative Residual
26:30
Example 3: Positive Residual
28:13
Example 4: Interpret Regression Line & Extrapolate
29:40
Least Squares Regression

56m 36s

Intro
0:00
Roadmap
0:13
Roadmap
0:14
Best Fit
0:47
Best Fit
0:48
Sum of Squared Errors (SSE)
1:50
Sum of Squared Errors (SSE)
1:51
Why Squared?
3:38
Why Squared?
3:39
Quantitative Properties of Regression Line
4:51
Quantitative Properties of Regression Line
4:52
So How do we Find Such a Line?
6:49
SSEs of Different Line Equations & Lowest SSE
6:50
Carl Gauss' Method
8:01
How Do We Find Slope (b1)
11:00
How Do We Find Slope (b1)
11:01
How Do We Find Intercept
15:11
How Do We Find Intercept
15:12
Example 1: Which of These Equations Fit the Above Data Best?
17:18
Example 2: Find the Regression Line for These Data Points and Interpret It
26:31
Example 3: Summarize the Scatterplot and Find the Regression Line
34:31
Example 4: Examine the Mean of Residuals
43:52
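The slope (b1) and intercept (b0) formulas this lesson derives can be sketched directly in Python; the (x, y) points below are illustrative:

```python
# Illustrative data points, not taken from the lecture.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

mx = sum(xs) / len(xs)  # mean of x
my = sum(ys) / len(ys)  # mean of y

# Slope b1: the value that minimizes the sum of squared errors (SSE).
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
# Intercept b0: forces the line through the point (mean of x, mean of y).
b0 = my - b1 * mx
```

A useful check, echoed in Example 4 of the lesson: the residuals of this line always sum to zero, because the line passes through the mean of both variables.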
Correlation

43m 58s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Summarizing a Scatterplot Quantitatively
0:47
Shape
0:48
Trend
1:11
Strength: Correlation (r)
1:45
Correlation Coefficient ( r )
2:30
Correlation Coefficient ( r )
2:31
Trees vs. Forest
11:59
Trees vs. Forest
12:00
Calculating r
15:07
Average Product of z-scores for x and y
15:08
Relationship between Correlation and Slope
21:10
Relationship between Correlation and Slope
21:11
Example 1: Find the Correlation between Grams of Fat and Cost
24:11
Example 2: Relationship between r and b1
30:24
Example 3: Find the Regression Line
33:35
Example 4: Find the Correlation Coefficient for this Set of Data
37:37
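A short Python sketch of the correlation coefficient r and its relationship to the regression slope b1, the two main formulas in this lesson (data points are illustrative):

```python
import math

# Illustrative data points, not taken from the lecture.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # co-deviation of x and y
sxx = sum((x - mx) ** 2 for x in xs)                    # sum of squares of x
syy = sum((y - my) ** 2 for y in ys)                    # sum of squares of y

r = sxy / math.sqrt(sxx * syy)  # correlation coefficient
b1 = r * math.sqrt(syy / sxx)   # slope recovered from r and the SD ratio
```

The last line is the "Relationship between Correlation and Slope" from the lesson: b1 equals r scaled by the ratio of the standard deviations of y and x.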
Correlation: r vs. r-squared

52m 52s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
R-squared
0:44
What is the Meaning of It? Why Squared?
0:45
Parsing Sums of Squares (Parsing Variability)
2:25
SST = SSR + SSE
2:26
What is SST and SSE?
7:46
What is SST and SSE?
7:47
r-squared
18:33
Coefficient of Determination
18:34
If the Correlation is Strong…
20:25
If the Correlation is Strong…
20:26
If the Correlation is Weak…
22:36
If the Correlation is Weak…
22:37
Example 1: Find r-squared for this Set of Data
23:56
Example 2: What Does it Mean that the Simple Linear Regression is a 'Model' of Variance?
33:54
Example 3: Why Does r-squared Only Range from 0 to 1?
37:29
Example 4: Find the r-squared for This Set of Data
39:55
Transformations of Data

27m 8s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Why Transform?
0:26
Why Transform?
0:27
Shape-preserving vs. Shape-changing Transformations
5:14
Shape-preserving = Linear Transformations
5:15
Shape-changing Transformations = Non-linear Transformations
6:20
Common Shape-Preserving Transformations
7:08
Common Shape-Preserving Transformations
7:09
Common Shape-Changing Transformations
8:59
Powers
9:00
Logarithms
9:39
Change Just One Variable? Both?
10:38
Log-log Transformations
10:39
Log Transformations
14:38
Example 1: Create, Graph, and Transform the Data Set
15:19
Example 2: Create, Graph, and Transform the Data Set
20:08
Example 3: What Kind of Model would You Choose for this Data?
22:44
Example 4: Transformation of Data
25:46
VI. Collecting Data in an Experiment
Sampling & Bias

54m 44s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Descriptive vs. Inferential Statistics
1:04
Descriptive Statistics: Data Exploration
1:05
Example
2:03
To Tackle Generalization…
4:31
Generalization
4:32
Sampling
6:06
'Good' Sample
6:40
Defining Samples and Populations
8:55
Population
8:56
Sample
11:16
Why Use Sampling?
13:09
Why Use Sampling?
13:10
Goal of Sampling: Avoiding Bias
15:04
What is Bias?
15:05
Where does Bias Come from: Sampling Bias
17:53
Where does Bias Come from: Response Bias
18:27
Sampling Bias: Bias from 'Bad' Sampling Methods
19:34
Size Bias
19:35
Voluntary Response Bias
21:13
Convenience Sample
22:22
Judgment Sample
23:58
Inadequate Sample Frame
25:40
Response Bias: Bias from 'Bad' Data Collection Methods
28:00
Nonresponse Bias
29:31
Questionnaire Bias
31:10
Incorrect Response or Measurement Bias
37:32
Example 1: What Kind of Biases?
40:29
Example 2: What Biases Might Arise?
44:46
Example 3: What Kind of Biases?
48:34
Example 4: What Kind of Biases?
51:43
Sampling Methods

14m 25s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Biased vs. Unbiased Sampling Methods
0:32
Biased Sampling
0:33
Unbiased Sampling
1:13
Probability Sampling Methods
2:31
Simple Random
2:54
Stratified Random Sampling
4:06
Cluster Sampling
5:24
Two-staged Sampling
6:22
Systematic Sampling
7:25
Example 1: Which Type(s) of Sampling was this?
8:33
Example 2: Describe How to Take a Two-Stage Sample from this Book
10:16
Example 3: Sampling Methods
11:58
Example 4: Cluster Sample Plan
12:48
Research Design

53m 54s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Descriptive vs. Inferential Statistics
0:51
Descriptive Statistics: Data Exploration
0:52
Inferential Statistics
1:02
Variables and Relationships
1:44
Variables
1:45
Relationships
2:49
Not Every Type of Study is an Experiment…
4:16
Category I - Descriptive Study
4:54
Category II - Correlational Study
5:50
Category III - Experimental, Quasi-experimental, Non-experimental
6:33
Category III
7:42
Experimental, Quasi-experimental, and Non-experimental
7:43
Why CAN'T the Other Strategies Determine Causation?
10:18
Third-variable Problem
10:19
Directionality Problem
15:49
What Makes Experiments Special?
17:54
Manipulation
17:55
Control (and Comparison)
21:58
Methods of Control
26:38
Holding Constant
26:39
Matching
29:11
Random Assignment
31:48
Experiment Terminology
34:09
'true' Experiment vs. Study
34:10
Independent Variable (IV)
35:16
Dependent Variable (DV)
35:45
Factors
36:07
Treatment Conditions
36:23
Levels
37:43
Confounds or Extraneous Variables
38:04
Blind
38:38
Blind Experiments
38:39
Double-blind Experiments
39:29
How Categories Relate to Statistics
41:35
Category I - Descriptive Study
41:36
Category II - Correlational Study
42:05
Category III - Experimental, Quasi-experimental, Non-experimental
42:43
Example 1: Research Design
43:50
Example 2: Research Design
47:37
Example 3: Research Design
50:12
Example 4: Research Design
52:00
Between and Within Treatment Variability

41m 31s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Experimental Designs
0:51
Experimental Designs: Manipulation & Control
0:52
Two Types of Variability
2:09
Between Treatment Variability
2:10
Within Treatment Variability
3:31
Updated Goal of Experimental Design
5:47
Updated Goal of Experimental Design
5:48
Example: Drugs and Driving
6:56
Example: Drugs and Driving
6:57
Different Types of Random Assignment
11:27
All Experiments
11:28
Completely Random Design
12:02
Randomized Block Design
13:19
Randomized Block Design
15:48
Matched Pairs Design
15:49
Repeated Measures Design
19:47
Between-subject Variable vs. Within-subject Variable
22:43
Completely Randomized Design
22:44
Repeated Measures Design
25:03
Example 1: Design a Completely Random, Matched Pair, and Repeated Measures Experiment
26:16
Example 2: Block Design
31:41
Example 3: Completely Randomized Designs
35:11
Example 4: Completely Random, Matched Pairs, or Repeated Measures Experiments?
39:01
VII. Review of Probability Axioms
Sample Spaces

37m 52s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
Why is Probability Involved in Statistics
0:48
Probability
0:49
Can People Tell the Difference between Cheap and Gourmet Coffee?
2:08
Taste Test with Coffee Drinkers
3:37
If No One can Actually Taste the Difference
3:38
If Everyone can Actually Taste the Difference
5:36
Creating a Probability Model
7:09
Creating a Probability Model
7:10
D'Alembert vs. Necker
9:41
D'Alembert vs. Necker
9:42
Problem with D'Alembert's Model
13:29
Problem with D'Alembert's Model
13:30
Covering Entire Sample Space
15:08
Fundamental Principle of Counting
15:09
Where Do Probabilities Come From?
22:54
Observed Data, Symmetry, and Subjective Estimates
22:55
Checking whether Model Matches Real World
24:27
Law of Large Numbers
24:28
Example 1: Law of Large Numbers
27:46
Example 2: Possible Outcomes
30:43
Example 3: Brands of Coffee and Taste
33:25
Example 4: How Many Different Treatments are there?
35:33
Addition Rule for Disjoint Events

20m 29s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Disjoint Events
0:41
Disjoint Events
0:42
Meaning of 'or'
2:39
In Regular Life
2:40
In Math/Statistics/Computer Science
3:10
Addition Rule for Disjoint Events
3:55
If A and B are Disjoint: P (A and B)
3:56
If A and B are Disjoint: P (A or B)
5:15
General Addition Rule
5:41
General Addition Rule
5:42
Generalized Addition Rule
8:31
If A and B are not Disjoint: P (A or B)
8:32
Example 1: Which of These are Mutually Exclusive?
10:50
Example 2: What is the Probability that You will Have a Combination of One Heads and Two Tails?
12:57
Example 3: Engagement Party
15:17
Example 4: Home Owner's Insurance
18:30
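The general addition rule this lesson builds up to can be sketched with a single die roll (the events below are illustrative, not the lecture's examples):

```python
# One roll of a fair six-sided die.
# Event A: roll is even -> {2, 4, 6}. Event B: roll is >= 4 -> {4, 5, 6}.
p_a = 3 / 6
p_b = 3 / 6
p_a_and_b = 2 / 6  # overlap: {4, 6}

# General addition rule: subtract the overlap so it isn't counted twice.
p_a_or_b = p_a + p_b - p_a_and_b

# For disjoint events, P(A and B) = 0, and the rule reduces to P(A) + P(B).
```

This shows why the disjoint-events rule is just the special case of the general rule where the "and" term vanishes.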
Conditional Probability

57m 19s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
'or' vs. 'and' vs. Conditional Probability
1:07
'or' vs. 'and' vs. Conditional Probability
1:08
'and' vs. Conditional Probability
5:57
P (M or L)
5:58
P (M and L)
8:41
P (M|L)
11:04
P (L|M)
12:24
Tree Diagram
15:02
Tree Diagram
15:03
Defining Conditional Probability
22:42
Defining Conditional Probability
22:43
Common Contexts for Conditional Probability
30:56
Medical Testing: Positive Predictive Value
30:57
Medical Testing: Sensitivity
33:03
Statistical Tests
34:27
Example 1: Drug and Disease
36:41
Example 2: Marbles and Conditional Probability
40:04
Example 3: Cards and Conditional Probability
45:59
Example 4: Votes and Conditional Probability
50:21
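The defining formula of this lesson, P(M|L) = P(M and L) / P(L), can be sketched from a joint frequency table; the counts below are hypothetical, not the lecture's numbers:

```python
# Hypothetical counts of 100 people by two events M and L.
both = 15     # M and L
only_m = 35   # M only
only_l = 10   # L only
neither = 40
total = both + only_m + only_l + neither

p_m_and_l = both / total
p_l = (both + only_l) / total

# Conditional probability: restrict the sample space to L.
p_m_given_l = p_m_and_l / p_l
```

Note the asymmetry the lesson stresses: P(M|L) and P(L|M) divide the same joint probability by different marginals, so they generally differ.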
Independent Events

24m 27s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Independent Events & Conditional Probability
0:26
Non-independent Events
0:27
Independent Events
2:00
Non-independent and Independent Events
3:08
Non-independent and Independent Events
3:09
Defining Independent Events
5:52
Defining Independent Events
5:53
Multiplication Rule
7:29
Previously…
7:30
But with Independent Events
8:53
Example 1: Which of These Pairs of Events are Independent?
11:12
Example 2: Health Insurance and Probability
15:12
Example 3: Independent Events
17:42
Example 4: Independent Events
20:03
VIII. Probability Distributions
Introduction to Probability Distributions

56m 45s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Sampling vs. Probability
0:57
Sampling
0:58
Missing
1:30
What is Missing?
3:06
Insight: Probability Distributions
5:26
Insight: Probability Distributions
5:27
What is a Probability Distribution?
7:29
From Sample Spaces to Probability Distributions
8:44
Sample Space
8:45
Probability Distribution of the Sum of Two Dice
11:16
The Random Variable
17:43
The Random Variable
17:44
Expected Value
21:52
Expected Value
21:53
Example 1: Probability Distributions
28:45
Example 2: Probability Distributions
35:30
Example 3: Probability Distributions
43:37
Example 4: Probability Distributions
47:20
Expected Value & Variance of Probability Distributions

53m 41s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Discrete vs. Continuous Random Variables
1:04
Discrete vs. Continuous Random Variables
1:05
Mean and Variance Review
4:44
Mean: Sample, Population, and Probability Distribution
4:45
Variance: Sample, Population, and Probability Distribution
9:12
Example Situation
14:10
Example Situation
14:11
Some Special Cases…
16:13
Some Special Cases…
16:14
Linear Transformations
19:22
Linear Transformations
19:23
What Happens to Mean and Variance of the Probability Distribution?
20:12
n Independent Values of X
25:38
n Independent Values of X
25:39
Compare These Two Situations
30:56
Compare These Two Situations
30:57
Two Random Variables, X and Y
32:02
Two Random Variables, X and Y
32:03
Example 1: Expected Value & Variance of Probability Distributions
35:35
Example 2: Expected Values & Standard Deviation
44:17
Example 3: Expected Winnings and Standard Deviation
48:18
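The core formulas of this lesson — the expected value and variance of a discrete probability distribution, plus the linear-transformation rules — can be sketched with a fair die (the transformation constants are illustrative):

```python
# Probability distribution of one roll of a fair six-sided die.
dist = {x: 1 / 6 for x in range(1, 7)}

mu = sum(x * p for x, p in dist.items())               # E[X] = sum of x * P(x)
var = sum((x - mu) ** 2 * p for x, p in dist.items())  # Var(X) = sum of (x - mu)^2 * P(x)

# Linear transformation Y = aX + b (a, b illustrative):
a, b = 2, 3
mu_y = a * mu + b     # the mean shifts and scales
var_y = a ** 2 * var  # the variance scales by a^2 and ignores the shift b
```

The last two lines are the "Linear Transformations" slides in miniature: adding a constant moves the center but leaves the spread untouched.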
Binomial Distribution

55m 15s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Discrete Probability Distributions
1:42
Discrete Probability Distributions
1:43
Binomial Distribution
2:36
Binomial Distribution
2:37
Multiplicative Rule Review
6:54
Multiplicative Rule Review
6:55
How Many Outcomes with k 'Successes'
10:23
Adults and Bachelor's Degree: Manual List of Outcomes
10:24
P (X=k)
19:37
Putting Together # of Outcomes with the Multiplicative Rule
19:38
Expected Value and Standard Deviation in a Binomial Distribution
25:22
Expected Value and Standard Deviation in a Binomial Distribution
25:23
Example 1: Coin Toss
33:42
Example 2: College Graduates
38:03
Example 3: Types of Blood and Probability
45:39
Example 4: Expected Number and Standard Deviation
51:11
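The binomial formulas this lesson assembles — P(X = k) from the counting rule and multiplicative rule, and the expected value and standard deviation — sketched in Python (n and p below are illustrative, as in a coin-toss example):

```python
import math

def binom_pmf(k, n, p):
    # Number of outcomes with k successes, times the probability of each such outcome.
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Illustrative: 10 fair coin tosses.
n, p = 10, 0.5
mean = n * p                     # expected number of successes: n * p
sd = math.sqrt(n * p * (1 - p))  # standard deviation: sqrt(n * p * q)
```

`math.comb(n, k)` counts the ways to place k successes among n trials, which is the "How Many Outcomes with k 'Successes'" step of the lesson.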
IX. Sampling Distributions of Statistics
Introduction to Sampling Distributions

48m 17s

Intro
0:00
Roadmap
0:08
Roadmap
0:09
Probability Distributions vs. Sampling Distributions
0:55
Probability Distributions vs. Sampling Distributions
0:56
Same Logic
3:55
Logic of Probability Distribution
3:56
Example: Rolling Two Dice
6:56
Simulating Samples
9:53
To Come Up with Probability Distributions
9:54
In Sampling Distributions
11:12
Connecting Sampling and Research Methods with Sampling Distributions
12:11
Connecting Sampling and Research Methods with Sampling Distributions
12:12
Simulating a Sampling Distribution
14:14
Experimental Design: Regular Sleep vs. Less Sleep
14:15
Logic of Sampling Distributions
23:08
Logic of Sampling Distributions
23:09
General Method of Simulating Sampling Distributions
25:38
General Method of Simulating Sampling Distributions
25:39
Questions that Remain
28:45
Questions that Remain
28:46
Example 1: Mean and Standard Error of Sampling Distribution
30:57
Example 2: What is the Best Way to Describe Sampling Distributions?
37:12
Example 3: Matching Sampling Distributions
38:21
Example 4: Mean and Standard Error of Sampling Distribution
41:51
Sampling Distribution of the Mean

1h 8m 48s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Special Case of General Method for Simulating a Sampling Distribution
1:53
Special Case of General Method for Simulating a Sampling Distribution
1:54
Computer Simulation
3:43
Using Simulations to See Principles behind Shape of SDoM
15:50
Using Simulations to See Principles behind Shape of SDoM
15:51
Conditions
17:38
Using Simulations to See Principles behind Center (Mean) of SDoM
20:15
Using Simulations to See Principles behind Center (Mean) of SDoM
20:16
Conditions: Does n Matter?
21:31
Conditions: Does Number of Simulations Matter?
24:37
Using Simulations to See Principles behind Standard Deviation of SDoM
27:13
Using Simulations to See Principles behind Standard Deviation of SDoM
27:14
Conditions: Does n Matter?
34:45
Conditions: Does Number of Simulations Matter?
36:24
Central Limit Theorem
37:13
SHAPE
38:08
CENTER
39:34
SPREAD
39:52
Comparing Population, Sample, and SDoM
43:10
Comparing Population, Sample, and SDoM
43:11
Answering the 'Questions that Remain'
48:24
What Happens When We Don't Know What the Population Looks Like?
48:25
Can We Have Sampling Distributions for Summary Statistics Other than the Mean?
49:42
How Do We Know whether a Sample is Sufficiently Unlikely?
53:36
Do We Always Have to Simulate a Large Number of Samples in Order to get a Sampling Distribution?
54:40
Example 1: Mean Batting Average
55:25
Example 2: Mean Sampling Distribution and Standard Error
59:07
Example 3: Sampling Distribution of the Mean
1:01:04
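The simulation approach this lesson uses to reveal the Central Limit Theorem can be sketched in a few lines of Python: draw many samples of die rolls, take each sample's mean, and check that the resulting distribution centers on the population mean with spread near sigma / sqrt(n). The sample size and simulation count below are illustrative:

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is reproducible

# Simulate the sampling distribution of the mean (SDoM) for fair die rolls.
n = 25       # sample size (illustrative)
sims = 2000  # number of simulated samples (illustrative)
means = [statistics.mean(random.randint(1, 6) for _ in range(n)) for _ in range(sims)]

center = statistics.mean(means)   # should be near the population mean, 3.5
spread = statistics.stdev(means)  # should be near sigma / sqrt(n) ~ 1.708 / 5
```

Even though one die roll is uniform, the histogram of `means` is approximately normal, which is the CLT's SHAPE claim from the lesson.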
Sampling Distribution of Sample Proportions

54m 37s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Intro to Sampling Distribution of Sample Proportions (SDoSP)
0:51
Categorical Data (Examples)
0:52
Wish to Estimate Proportion of Population from Sample…
2:00
Notation
3:34
Population Proportion and Sample Proportion Notations
3:35
What's the Difference?
9:19
SDoM vs. SDoSP: Type of Data
9:20
SDoM vs. SDoSP: Shape
11:24
SDoM vs. SDoSP: Center
12:30
SDoM vs. SDoSP: Spread
15:34
Binomial Distribution vs. Sampling Distribution of Sample Proportions
19:14
Binomial Distribution vs. SDoSP: Type of Data
19:17
Binomial Distribution vs. SDoSP: Shape
21:07
Binomial Distribution vs. SDoSP: Center
21:43
Binomial Distribution vs. SDoSP: Spread
24:08
Example 1: Sampling Distribution of Sample Proportions
26:07
Example 2: Sampling Distribution of Sample Proportions
37:58
Example 3: Sampling Distribution of Sample Proportions
44:42
Example 4: Sampling Distribution of Sample Proportions
45:57
X. Inferential Statistics
Introduction to Confidence Intervals

42m 53s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Inferential Statistics
0:50
Inferential Statistics
0:51
Two Problems with This Picture…
3:20
Two Problems with This Picture…
3:21
Solution: Confidence Intervals (CI)
4:59
Solution: Hypothesis Testing (HT)
5:49
Which Parameters are Known?
6:45
Which Parameters are Known?
6:46
Confidence Interval - Goal
7:56
When We Don't Know μ but Know σ
7:57
When We Don't Know μ nor σ
18:27
When We Don't Know μ nor σ
18:28
Example 1: Confidence Intervals
26:18
Example 2: Confidence Intervals
29:46
Example 3: Confidence Intervals
32:18
Example 4: Confidence Intervals
38:31
t Distributions

1h 2m 6s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
When to Use z vs. t?
1:07
When to Use z vs. t?
1:08
What is z and t?
3:02
z-score and t-score: Commonality
3:03
z-score and t-score: Formulas
3:34
z-score and t-score: Difference
5:22
Why not z? (Why t?)
7:24
Why not z? (Why t?)
7:25
But Don't Worry!
15:13
Gossett and t-distributions
15:14
Rules of t Distributions
17:05
t-distributions are More Normal as n Gets Bigger
17:06
t-distributions are a Family of Distributions
18:55
Degrees of Freedom (df)
20:02
Degrees of Freedom (df)
20:03
t Family of Distributions
24:07
t Family of Distributions: df = 2, 4, and 60
24:08
df = 60
29:16
df = 2
29:59
How to Find It?
31:01
'Student's t-distribution' or 't-distribution'
31:02
Excel Example
33:06
Example 1: Which Distribution Do You Use? Z or t?
45:26
Example 2: Friends on Facebook
47:41
Example 3: t Distributions
52:15
Example 4: t Distributions, Confidence Interval, and Mean
55:59
Introduction to Hypothesis Testing

1h 6m 33s

Intro
0:00
Roadmap
0:06
Roadmap
0:07
Issues to Overcome in Inferential Statistics
1:35
Issues to Overcome in Inferential Statistics
1:36
What Happens When We Don't Know What the Population Looks Like?
2:57
How Do We Know whether a Sample is Sufficiently Unlikely?
3:43
Hypothesizing a Population
6:44
Hypothesizing a Population
6:45
Null Hypothesis
8:07
Alternative Hypothesis
8:56
Hypotheses
11:58
Hypotheses
11:59
Errors in Hypothesis Testing
14:22
Errors in Hypothesis Testing
14:23
Steps of Hypothesis Testing
21:15
Steps of Hypothesis Testing
21:16
Single Sample HT (When Sigma Available)
26:08
Example: Average Facebook Friends
26:09
Step 1
27:08
Step 2
27:58
Step 3
28:17
Step 4
32:18
Single Sample HT (When Sigma Not Available)
36:33
Example: Average Facebook Friends
36:34
Step 1: Hypothesis Testing
36:58
Step 2: Significance Level
37:25
Step 3: Decision Stage
37:40
Step 4: Sample
41:36
Sigma and p-value
45:04
Sigma and p-value
45:05
One-tailed vs. Two-tailed Hypotheses
45:51
Example 1: Hypothesis Testing
48:37
Example 2: Heights of Women in the US
57:43
Example 3: Select the Best Way to Complete This Sentence
1:03:23
Confidence Intervals for the Difference of Two Independent Means

55m 14s

Intro
0:00
Roadmap
0:14
Roadmap
0:15
One Mean vs. Two Means
1:17
One Mean vs. Two Means
1:18
Notation
2:41
A Sample! A Set!
2:42
Mean of X, Mean of Y, and Difference of Two Means
3:56
SE of X
4:34
SE of Y
6:28
Sampling Distribution of the Difference between Two Means (SDoD)
7:48
Sampling Distribution of the Difference between Two Means (SDoD)
7:49
Rules of the SDoD (similar to CLT!)
15:00
Mean for the SDoD Null Hypothesis
15:01
Standard Error
17:39
When can We Construct a CI for the Difference between Two Means?
21:28
Three Conditions
21:29
Finding CI
23:56
One Mean CI
23:57
Two Means CI
25:45
Finding t
29:16
Finding t
29:17
Interpreting CI
30:25
Interpreting CI
30:26
Better Estimate of σ (s pool)
34:15
Better Estimate of σ (s pool)
34:16
Example 1: Confidence Intervals
42:32
Example 2: SE of the Difference
52:36
Hypothesis Testing for the Difference of Two Independent Means

50m

Intro
0:00
Roadmap
0:06
Roadmap
0:07
The Goal of Hypothesis Testing
0:56
One Sample and Two Samples
0:57
Sampling Distribution of the Difference between Two Means (SDoD)
3:42
Sampling Distribution of the Difference between Two Means (SDoD)
3:43
Rules of the SDoD (Similar to CLT!)
6:46
Shape
6:47
Mean for the Null Hypothesis
7:26
Standard Error for Independent Samples (When Variance is Homogenous)
8:18
Standard Error for Independent Samples (When Variance is not Homogenous)
9:25
Same Conditions for HT as for CI
10:08
Three Conditions
10:09
Steps of Hypothesis Testing
11:04
Steps of Hypothesis Testing
11:05
Formulas that Go with Steps of Hypothesis Testing
13:21
Step 1
13:25
Step 2
14:18
Step 3
15:00
Step 4
16:57
Example 1: Hypothesis Testing for the Difference of Two Independent Means
18:47
Example 2: Hypothesis Testing for the Difference of Two Independent Means
33:55
Example 3: Hypothesis Testing for the Difference of Two Independent Means
44:22
Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means

1h 14m 11s

Intro
0:00
Roadmap
0:09
Roadmap
0:10
The Goal of Hypothesis Testing
1:27
One Sample and Two Samples
1:28
Independent Samples vs. Paired Samples
3:16
Independent Samples vs. Paired Samples
3:17
Which is Which?
5:20
Independent SAMPLES vs. Independent VARIABLES
7:43
Independent SAMPLES vs. Independent VARIABLES
7:44
T-tests Always…
10:48
T-tests Always…
10:49
Notation for Paired Samples
12:59
Notation for Paired Samples
13:00
Steps of Hypothesis Testing for Paired Samples
16:13
Steps of Hypothesis Testing for Paired Samples
16:14
Rules of the SDoD (Adding on Paired Samples)
18:03
Shape
18:04
Mean for the Null Hypothesis
18:31
Standard Error for Independent Samples (When Variance is Homogenous)
19:25
Standard Error for Paired Samples
20:39
Formulas that go with Steps of Hypothesis Testing
22:59
Formulas that go with Steps of Hypothesis Testing
23:00
Confidence Intervals for Paired Samples
30:32
Confidence Intervals for Paired Samples
30:33
Example 1: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
32:28
Example 2: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
44:02
Example 3: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means
52:23
Type I and Type II Errors

31m 27s

Intro
0:00
Roadmap
0:18
Roadmap
0:19
Errors and Relationship to HT and the Sample Statistic?
1:11
Errors and Relationship to HT and the Sample Statistic?
1:12
Instead of a Box…Distributions!
7:00
One Sample t-test: Friends on Facebook
7:01
Two Sample t-test: Friends on Facebook
13:46
Usually, Lots of Overlap between Null and Alternative Distributions
16:59
Overlap between Null and Alternative Distributions
17:00
How Distributions and 'Box' Fit Together
22:45
How Distributions and 'Box' Fit Together
22:46
Example 1: Types of Errors
25:54
Example 2: Types of Errors
27:30
Example 3: What is the Danger of the Type I Error?
29:38
Effect Size & Power

44m 41s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Distance between Distributions: Sample t
0:49
Distance between Distributions: Sample t
0:50
Problem with Distance in Terms of Standard Error
2:56
Problem with Distance in Terms of Standard Error
2:57
Test Statistic (t) vs. Effect Size (d or g)
4:38
Test Statistic (t) vs. Effect Size (d or g)
4:39
Rules of Effect Size
6:09
Rules of Effect Size
6:10
Why Do We Need Effect Size?
8:21
Tells You the Practical Significance
8:22
HT can be Deceiving…
10:25
Important Note
10:42
What is Power?
11:20
What is Power?
11:21
Why Do We Need Power?
14:19
Conditional Probability and Power
14:20
Power is:
16:27
Can We Calculate Power?
19:00
Can We Calculate Power?
19:01
How Does Alpha Affect Power?
20:36
How Does Alpha Affect Power?
20:37
How Does Effect Size Affect Power?
25:38
How Does Effect Size Affect Power?
25:39
How Does Variability and Sample Size Affect Power?
27:56
How Does Variability and Sample Size Affect Power?
27:57
How Do We Increase Power?
32:47
Increasing Power
32:48
Example 1: Effect Size & Power
35:40
Example 2: Effect Size & Power
37:38
Example 3: Effect Size & Power
40:55
XI. Analysis of Variance
F-distributions

24m 46s

Intro
0:00
Roadmap
0:04
Roadmap
0:05
Z- & T-statistic and Their Distribution
0:34
Z- & T-statistic and Their Distribution
0:35
F-statistic
4:55
The F Ratio (the Variance Ratio)
4:56
F-distribution
12:29
F-distribution
12:30
s and p-value
15:00
s and p-value
15:01
Example 1: Why Does F-distribution Stop At 0 But Go On Until Infinity?
18:33
Example 2: F-distributions
19:29
Example 3: F-distributions and Heights
21:29
ANOVA with Independent Samples

1h 9m 25s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
The Limitations of t-tests
1:12
The Limitations of t-tests
1:13
Two Major Limitations of Many t-tests
3:26
Two Major Limitations of Many t-tests
3:27
Ronald Fisher's Solution… F-test! New Null Hypothesis
4:43
Ronald Fisher's Solution… F-test! New Null Hypothesis (Omnibus Test - One Test to Rule Them All!)
4:44
Analysis of Variance (ANOVA) Notation
7:47
Analysis of Variance (ANOVA) Notation
7:48
Partitioning (Analyzing) Variance
9:58
Total Variance
9:59
Within-group Variation
14:00
Between-group Variation
16:22
Time out: Review Variance & SS
17:05
Time out: Review Variance & SS
17:06
F-statistic
19:22
The F Ratio (the Variance Ratio)
19:23
S²bet = SSbet / dfbet
22:13
What is This?
22:14
How Many Means?
23:20
So What is the dfbet?
23:38
So What is SSbet?
24:15
S²w = SSw / dfw
26:05
What is This?
26:06
How Many Means?
27:20
So What is the dfw?
27:36
So What is SSw?
28:18
Chart of Independent Samples ANOVA
29:25
Chart of Independent Samples ANOVA
29:26
Example 1: Who Uploads More Photos: Unknown Ethnicity, Latino, Asian, Black, or White Facebook Users?
35:52
Hypotheses
35:53
Significance Level
39:40
Decision Stage
40:05
Calculate Samples' Statistic and p-Value
44:10
Reject or Fail to Reject H0
55:54
Example 2: ANOVA with Independent Samples
58:21
Repeated Measures ANOVA

1h 15m 13s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
The Limitations of t-tests
0:36
Who Uploads more Pictures and Which Photo-Type is Most Frequently Used on Facebook?
0:37
ANOVA (F-test) to the Rescue!
5:49
Omnibus Hypothesis
5:50
Analyze Variance
7:27
Independent Samples vs. Repeated Measures
9:12
Same Start
9:13
Independent Samples ANOVA
10:43
Repeated Measures ANOVA
12:00
Independent Samples ANOVA
16:00
Same Start: All the Variance Around Grand Mean
16:01
Independent Samples
16:23
Repeated Measures ANOVA
18:18
Same Start: All the Variance Around Grand Mean
18:19
Repeated Measures
18:33
Repeated Measures F-statistic
21:22
The F Ratio (The Variance Ratio)
21:23
S²bet = SSbet / dfbet
23:07
What is This?
23:08
How Many Means?
23:39
So What is the dfbet?
23:54
So What is SSbet?
24:32
S² resid = SS resid / df resid
25:46
What is This?
25:47
So What is SS resid?
26:44
So What is the df resid?
27:36
SS subj and df subj
28:11
What is This?
28:12
How Many Subject Means?
29:43
So What is df subj?
30:01
So What is SS subj?
30:09
SS total and df total
31:42
What is This?
31:43
What is the Total Number of Data Points?
32:02
So What is df total?
32:34
So What is SS total?
32:47
Chart of Repeated Measures ANOVA
33:19
Chart of Repeated Measures ANOVA: F and Between-samples Variability
33:20
Chart of Repeated Measures ANOVA: Total Variability, Within-subject (case) Variability, Residual Variability
35:50
Example 1: Which is More Prevalent on Facebook: Tagged, Uploaded, Mobile, or Profile Photos?
40:25
Hypotheses
40:26
Significance Level
41:46
Decision Stage
42:09
Calculate Samples' Statistic and p-Value
46:18
Reject or Fail to Reject H0
57:55
Example 2: Repeated Measures ANOVA
58:57
Example 3: What's the Problem with a Bunch of Tiny t-tests?
1:13:59
XII. Chi-square Test
Chi-Square Goodness-of-Fit Test

58m 23s

Intro
0:00
Roadmap
0:05
Roadmap
0:06
Where Does the Chi-Square Test Belong?
0:50
Where Does the Chi-Square Test Belong?
0:51
A New Twist on HT: Goodness-of-Fit
7:23
HT in General
7:24
Goodness-of-Fit HT
8:26
Hypotheses about Proportions
12:17
Null Hypothesis
12:18
Alternative Hypothesis
13:23
Example
14:38
Chi-Square Statistic
17:52
Chi-Square Statistic
17:53
Chi-Square Distributions
24:31
Chi-Square Distributions
24:32
Conditions for Chi-Square
28:58
Condition 1
28:59
Condition 2
30:20
Condition 3
30:32
Condition 4
31:47
Example 1: Chi-Square Goodness-of-Fit Test
32:23
Example 2: Chi-Square Goodness-of-Fit Test
44:34
Example 3: Which of These Statements Describe Properties of the Chi-Square Goodness-of-Fit Test?
56:06
Chi-Square Test of Homogeneity

51m 36s

Intro
0:00
Roadmap
0:09
Roadmap
0:10
Goodness-of-Fit vs. Homogeneity
1:13
Goodness-of-Fit HT
1:14
Homogeneity
2:00
Analogy
2:38
Hypotheses About Proportions
5:00
Null Hypothesis
5:01
Alternative Hypothesis
6:11
Example
6:33
Chi-Square Statistic
10:12
Same as Goodness-of-Fit Test
10:13
Set Up Data
12:28
Setting Up Data Example
12:29
Expected Frequency
16:53
Expected Frequency
16:54
Chi-Square Distributions & df
19:26
Chi-Square Distributions & df
19:27
Conditions for Test of Homogeneity
20:54
Condition 1
20:55
Condition 2
21:39
Condition 3
22:05
Condition 4
22:23
Example 1: Chi-Square Test of Homogeneity
22:52
Example 2: Chi-Square Test of Homogeneity
32:10
XIII. Overview of Statistics
Overview of Statistics

18m 11s

Intro
0:00
Roadmap
0:07
Roadmap
0:08
The Statistical Tests (HT) We've Covered
0:28
The Statistical Tests (HT) We've Covered
0:29
Organizing the Tests We've Covered…
1:08
One Sample: Continuous DV and Categorical DV
1:09
Two Samples: Continuous DV and Categorical DV
5:41
More Than Two Samples: Continuous DV and Categorical DV
8:21
The Following Data: OK Cupid
10:10
The Following Data: OK Cupid
10:11
Example 1: Weird-MySpace-Angle Profile Photo
10:38
Example 2: Geniuses
12:30
Example 3: Promiscuous iPhone Users
13:37
Example 4: Women, Aging, and Messaging
16:07


Expected Value & Variance of Probability Distributions

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Roadmap 0:06
    • Roadmap
  • Discrete vs. Continuous Random Variables 1:04
    • Discrete vs. Continuous Random Variables
  • Mean and Variance Review 4:44
    • Mean: Sample, Population, and Probability Distribution
    • Variance: Sample, Population, and Probability Distribution
  • Example Situation 14:10
    • Example Situation
  • Some Special Cases… 16:13
    • Some Special Cases…
  • Linear Transformations 19:22
    • Linear Transformations
    • What Happens to Mean and Variance of the Probability Distribution?
  • n Independent Values of X 25:38
    • n Independent Values of X
  • Compare These Two Situations 30:56
    • Compare These Two Situations
  • Two Random Variables, X and Y 32:02
    • Two Random Variables, X and Y
  • Example 1: Expected Value & Variance of Probability Distributions 35:35
  • Example 2: Expected Values & Standard Deviation 44:17
  • Example 3: Expected Winnings and Standard Deviation 48:18

Transcription: Expected Value & Variance of Probability Distributions

Hi and welcome to www.educator.com.

We are going to talk about expected value and variance of probability distributions.

Just a brief recap of discrete versus continuous random variables: you need to understand random variables in order for us to move on to understanding expected value and variance.

Then we are going to do a brief mean and variance review, to think about all the different kinds of mean and variance we have learned so far.

We are going to talk about the new versions of mean and variance: the mean and variance of a probability distribution.

We are going to talk about three special situations: linear transformations of the random variable X and what happens to the mean and variance, the sum of n independent values of X, and the sum or difference of independent values of X and Y, drawn from two different random variable pools.

Discrete versus continuous random variables: so far we have been talking about random variables like X.

X could be the sum of two dice, X could be the number of TV shows, X could just be something simple like the number of people in a room.

It does not really matter what X is.

X is whatever random variable, and whatever that random variable is, you have the probability of each value of X in your probability distribution.

What we have been looking at are discrete random variables.

These are called discrete because, when we have the sum of two dice, the values are numbers like 2 through 12, but they are discrete, not continuous. If we had a distribution where the possible values of X are 1, 2, 3, 4, then an expected value of 1.7 still has meaning: on average, the distribution gives you about 1.7.

But it is not possible to actually get a 1.7.

Let us say we have the sum of two dice, and say the expected value of an uneven set of two dice ends up being 4.7.

That is a perfectly fine expected value, but can you ever roll an actual sum of 4.7? No, that is impossible, and because of that, these are called discrete random variables.

There are bins that the values have to fall into, and those are the only possible values for that random variable.

Now, random variables like height are continuous, because for something like average height or the sum of heights, it is not that there are only particular values you can have.

You could have all kinds of different values.

Some will be less likely than others, but there are an infinite number of possibilities in between any two discrete sums.

For now we are only going to be talking about discrete random variables and their probability distributions.

All the stuff we have learned about expected value so far only works for discrete random variables.

Later on we will learn how to deal with continuous random variables, and that can be exciting; it will open up a whole world for us.

Given that, we can now do a brief review of mean and variance. We will talk about samples and populations, and we are adding on probability distributions.

With samples, remember, the mean is symbolized by x-bar, while in a population the mean is symbolized by mu.

For a probability distribution, the mean is symbolized by the expected value of X, E(X), or mu sub x.

First off, the symbols are different, but for both the sample and the population, what you end up doing is summing all the x's, for i = 1 to n, then dividing by how many numbers you have.

In populations, you do basically the same thing, except you change the notation slightly so that it reflects that you are doing this for the entire population, not just your little subset, the sample.

Here we have x sub i just like before, but instead of i going from 1 to n, we have i from 1 to N, and we divide by N, the number of all the different people in your population.

At first glance, what we learned about expected value might look a little bit different to you, because you have the sum of all these different values of X times the probability of those x's.

You might think we are not adding them up and dividing by n, but in fact we are. Let us unpack the p(x).

If you think about the probability of X, think about double-clicking on it; when we open it up, what is actually inside?

What is actually inside is the number of times you will get x out of the total frequency.

We are looking at something divided by the total frequency, but we are also weighting each x by how often it occurs.

Before, we had things like 1 out of 36: there are 36 possible outcomes, and the number of times you will get 1-1 is just 1 out of 36.

We can write that as a probability, or we can unpack it as the number of times you will get x out of the total frequency, the total number of outcomes.

That makes it a little bit more transparent.

In that way, we are weighting each x by however frequent it is and dividing by the total.

That is very similar to our notion of the mean: all these x's divided by some total number.

That idea is still present here; we just have to unpack it a little.

Let us talk about variance.

Before, what we wanted from variance was something like the average distance away from the mean.

We have these points, and we want to know the average distance away from the mean. We could not just look at deviations away from the mean, because when you have x minus x-bar, sometimes it is positive and sometimes it is negative, so the deviations add up to 0.

So we square everything.

In a sample, we called the variance s².

In a population, we call it sigma². Let us start there.

For sigma², what we are going to do is take all of the squared deviations away from the mean.

Take all these x's, get all their squared deviations away from the mean, and then divide by N, how many x's we have.

It is the same i from 1 to N.

When we looked at the sample variance, and more importantly the standard deviation, which is the square root of this, we wanted the same idea: squared deviations, this time using x-bar.

But there we need to do a little bit of a correction, so we divide by n - 1.

To get the standard deviation we just square root both sides, and we get s equals the square root of the sum of (x sub i minus x-bar)² over n - 1.

You can also square root both sides here, and we get sigma equals the square root of the sum of squared deviations away from mu, this time over N.

That is sample and population; now let us talk about the probability distribution.

Just as mu goes with the population, the probability distribution is theoretical, so we use Greek letters here too.

Here we will use sigma, squared for variance, but we will also put that x subscript on it.

Instead of expected value, we call this the variance.

You could write it as Var(x) or sigma sub x squared.

If you break this down you can see the similarity, but once again I will put it in the probability form, where now you are summing the squared deviations.

It is x minus, and imagine what you would put here: you would not put mu or x-bar, you would put the corresponding mean, which is mu sub x, all squared, times p(x).

You multiply all these together.

It is the same thing; if you break it apart, you can see this piece and this piece.

This piece is very similar: we are using probability to weight each x and then dividing by the total number of outcomes.

You can see this part is very similar to those parts, and once again you could break down p(x) as the number of outcomes that look like x over the total number of outcomes.

You could break it down, but here I am going to write the standard deviation form.

All you do is square root both sides: it is not just sigma, but sigma sub x equals the square root of this whole thing, the sum of (x minus mu sub x)² times p(x).

You can see that there are some real close similarities, but there are some subtle differences now too, and I should say this is still for discrete random variables.

This is the case where X is a discrete variable.
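The two formulas above, E(X) = Σ x·p(x) and Var(X) = Σ (x − mu)²·p(x), can be sketched directly in code. The winnings and probabilities from the slides are not reproduced in this transcript, so the distribution below is a made-up stand-in:

```python
# A minimal sketch of the formulas above for a DISCRETE random variable.
# The (value, probability) pairs are hypothetical, not the lecture's actual table.
import math

def expected_value(dist):
    # E(X) = sum of x * p(x): each value weighted by how often it occurs
    return sum(x * p for x, p in dist)

def variance(dist):
    # Var(X) = sum of (x - mu_x)**2 * p(x): squared deviations from the mean,
    # weighted by probability
    mu = expected_value(dist)
    return sum((x - mu) ** 2 * p for x, p in dist)

# Hypothetical winnings distribution: win $0, $1, or $5
dist = [(0, 0.70), (1, 0.25), (5, 0.05)]

mu_x = expected_value(dist)   # 0*0.70 + 1*0.25 + 5*0.05 = 0.50
var_x = variance(dist)
sd_x = math.sqrt(var_x)       # standard deviation = square root of variance
print(mu_x, var_x, sd_x)
```

Note that no explicit "divide by n" appears: the division is already folded into each p(x), just as the unpacking of p(x) above shows.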

Let us see an example situation.

We have seen this situation in the previous lesson.

At the state fair you can play Fish for Cash, a game of chance that costs $1 to play.

You fish out a card with the dollar amount that you have won from a giant fishbowl, and here is the probability distribution: all these different potential winnings and the probability of those winnings.

I put those here. Before, we looked at how to find the expected value, and now we also know how to find the standard deviation of these winnings.

We know the formula that we could use, and we can think about what the idea is.

The expected value is roughly the mean of the probability distribution; over time, over many, many cases, this will be the mean of the winnings.

We can also think about what the variance of X means.

That means: what is the spread around that mean?

If we have a large variance, there is lots of spread around it.

If there is a small variance, the values are very consistent around that mean.

You can think of this as the spread of the probability distribution around its mean or center.

We are getting at these same concepts again: shape, center, spread.

Here we have center and spread, but now we are not just talking about any distribution; we are talking specifically about probability distributions.

We could find the variance of this probability distribution if we wanted to.

Let us talk about some special cases.

There are going to be some cases with a very similar setup to the ones we have just discussed, where you have a probability distribution, but with some subtle changes.

One example is when you have some random variable, like winnings, but these winnings, this random variable, are transformed linearly somehow.

Remember, linear transformations are whenever you add or subtract a constant, or multiply or divide by a constant.

Those are both linear transformations, and some combination of the two is still a linear transformation.

An example situation might be something like this: you have that same Fish for Cash game, but they have a special day with a promotion where, whatever ticket you pick at random, you get triple the value for that day.

What would be the expected value of that game?

All the information you need is actually there.

We are going to talk about how to find the expected value and variance for this kind of situation.

Another special case is when you have n independent values of X and they are summed together.

For instance, you play that Fish for Cash game, but you buy 3 tickets; you play three times in a row, so you pick 3 tickets at random and their values are summed.

In that case you have n (here, 3) independent events of this random variable, winnings, summed together, and you want to know: what expected value should I have for this kind of situation, and what is the variance?

We are going to talk about that.

And finally, the last special case to talk about is when you have an independent value from X and another one from Y, and then you either sum them together or subtract one from the other, some combination like that.

In this case it might be something like this: there are 2 Fish for Cash booths, 2 games that are similar, and you buy a ticket from one booth and a ticket from the other.

You know the probability distributions of both X and Y separately.

What is the expected value when they are summed together or subtracted from each other?

What is the variance?

These are the three different kinds of special cases that we are able to figure out just from the same information we have had so far.

We can do a little bit of reasoning around these issues, and we will come to some shortcuts.

First let us talk about linear transformations.

A linear transformation is whenever you have some X, my old X, my old winnings value, and you multiply or divide it by some constant, which we will call d here (it is just traditionally called d), or you add or subtract a constant, c.

I just use the addition sign because c could always be negative.

To get my new x, I multiply by something, or divide by something, or add something to it.

Then I get my new x; as long as c and d are the same for every single value of X, it is considered a linear transformation.

Given these kinds of linear transformations, what happens to the mean and variance of the probability distribution?

Let us think about the concrete case: I pick a ticket and I get three times the value.

You would expect that the mean shifts upward; even though you spend a dollar, you could win more money.

What if we made this value smaller somehow? That would be unlikely for the game, but let us say whatever ticket you pulled out, you would only receive half of that value.

What would happen to the mean there?

The mean should probably shift down a little bit.

When you look at the mu, here we have the old mu, the old expected value.

What should we do to this old expected value to reflect the changes that are going on in our underlying x values?

Here is what we do in order to find the new mu, and we will call this mu sub (c + dx), or we could also call it the mu of the new x.

To find this new one, what we do is actually sort of simple: we do the same transformation to the old mu.

Whatever you did to the individual x values, you do to the mu, and you get your new mu.

That is nice: the new mu directly reflects the transformations to the individual values it came from.

How about variance?

The old variance looks just like this; this is the old Var(x).

What should we do to the old variance in order to transform it into the new variance?

Let me put a line here so that we can keep this separate.

Here you are not going to add c, because adding a constant does not make the spread any wider.

We can actually ignore the constant. Let me write the new version.

Here, the new version is for c + dx, or you could think of it as the sigma squared of the new x.

The new variance of x ignores the c part; all we use is the d.

This is the old variance, and here we multiply it by d². What we are seeing is that, no matter whether d is negative or positive, d² is positive, so the variance scales by d² when you do these multiplicative transformations of your random variable.

Just to round it out, if you wanted the standard deviation of c + dX, it is not squared anymore.

To get that you just square root both sides.

That would just be d, but the positive version of d: the absolute value of d, times sigma.

What we see is roughly the same idea as before, except everything has been square rooted.

These transformations are pretty straightforward.

For the new mu, you do the same transformation.

For the new variance, you multiply by d² and you ignore the c.

You do not need c in order to look at spread.
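The shortcut rules for a linear transformation, new mean = c + d·mu and new variance = d²·sigma², can be checked by transforming every value of X directly and recomputing. The distribution and the choice of c and d below are hypothetical stand-ins:

```python
# Verify the linear-transformation rules against a direct recomputation.
# Distribution and constants are hypothetical, not the lecture's actual table.
import math

def expected_value(dist):
    return sum(x * p for x, p in dist)

def variance(dist):
    mu = expected_value(dist)
    return sum((x - mu) ** 2 * p for x, p in dist)

dist = [(0, 0.70), (1, 0.25), (5, 0.05)]
c, d = -1.0, 3.0   # e.g. triple the winnings, minus the $1 cost to play

# Transform every value of X, keep probabilities the same, recompute from scratch
transformed = [(c + d * x, p) for x, p in dist]
mu_new = expected_value(transformed)
var_new = variance(transformed)

# Shortcut: new mean does the SAME transformation to the old mean...
assert math.isclose(mu_new, c + d * expected_value(dist))
# ...while the new variance ignores c and multiplies by d squared
assert math.isclose(var_new, d ** 2 * variance(dist))
```

Both assertions pass: shifting by c moves every value (and so the mean) but leaves the spread untouched, while scaling by d stretches deviations, and squaring them brings in the d² factor.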

What if we have n independent values of X?

This is the case of picking out, let us say, three separate, independent tickets.

Let us say there are something like 1 million tickets in there.

Then we can almost treat each picking of a ticket as an independent event.

We have n independent values of X, the same random variable, the same bowl of winnings. What happens to mu sub x and sigma sub x squared when we add these three separate values together?

Let us think about this mu sub x; this is the expected value of just X by itself.

No matter which one, the first one or the second one, it is just the expected value of X.

Presumably each independent event has the same expected value of X: the first one, the second one, the third one.

When you add them together, it is sort of like this: here is the average, and if you take out three different things and add them together, it will be like multiplying the average by three to get an estimate of your new mu.

Here, what is the expected value of not just X but x1 + x2 + x3? Assuming these are all independent, that will just be n, however many there are (it could be 4 or 5 tickets), times my expected value for each event, mu sub x.

We could have written it as mu sub x + mu sub x + mu sub x, but this way we are just noting that it is however many independent values you have.

It does not have to be 3; it could be 4, it could be 10, it does not matter.

We can just put it as n.

That makes life easier.

It is a little bit of a jump, but it is very reasonable.

Now we can think about: what is the variance of this x1 + x2 + x3?

Let us think about this.

Here we did not add any constant, and adding these values together will probably increase the variance as well, but not as much as when you have one value multiplied by three; here we are adding three separate values that roughly have the same variance.

This should probably just be something like n times sigma sub x squared.

It is almost like, for each of these, you are just adding in that variance.

It is just n times the variance, and then once you know this, the standard deviation is very simple: the square root of n, times the old standard deviation.

This one is actually a little bit simpler to reason through, because you can think of it as adding in these values: you add in the expected value and you add in the variance.

It is very straightforward.

Notice that this expected value is very similar to if you had taken one card and multiplied it by three.

The expected value is the same, but the variance is actually slightly different.

Here the variance is a little bit less, because before it was d², but here it is just n.

The variance is a little bit less in this case than in the case of the linear transformation.

I want to make that a little bit more clear.

There, the x's are transformed linearly, but here you are not transforming the x's themselves.

You are adding together three independent events, and because of that, here you get less increase in variance; there, you get more of an increase in variance.

Because of that, although these look very similar, here you square the d and here you just have n times. But notice that in both cases the expected values are the same, because here c is 0 and d is 3, and here n is 3, so the expected values turn out to be the same.
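The contrast between "three independent tickets summed" (variance n·sigma²) and "one ticket tripled" (variance d²·sigma²) can be verified exactly by enumerating every possible triple of draws. The distribution below is the same hypothetical stand-in as before, not the lecture's actual table:

```python
# Exact check: sum of n independent draws vs. one draw tripled.
# Distribution is hypothetical; n = 3 tickets, d = 3 multiplier.
import itertools
import math

def expected_value(dist):
    return sum(x * p for x, p in dist)

def variance(dist):
    mu = expected_value(dist)
    return sum((x - mu) ** 2 * p for x, p in dist)

dist = [(0, 0.70), (1, 0.25), (5, 0.05)]
n = 3

# Enumerate all outcomes of n independent draws; each triple's probability
# is the product of its individual probabilities (independence).
sum_dist = []
for draws in itertools.product(dist, repeat=n):
    total = sum(x for x, p in draws)
    prob = math.prod(p for x, p in draws)
    sum_dist.append((total, prob))

# Sum of n independent draws: mean = n * mu, variance = n * sigma**2
assert math.isclose(expected_value(sum_dist), n * expected_value(dist))
assert math.isclose(variance(sum_dist), n * variance(dist))

# Contrast: one draw tripled (linear transformation) has the SAME mean
# but variance d**2 = 9 times the old one, not n = 3 times.
tripled = [(3 * x, p) for x, p in dist]
assert math.isclose(expected_value(tripled), expected_value(sum_dist))
assert math.isclose(variance(tripled), 9 * variance(dist))
```

The same expected value comes out both ways, but the tripled single ticket is much more spread out, exactly as the lecture argues.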

Let us go on to a situation where you have 2 random variable.1918

We already have been looking at one random variable so far, but now we have 2 random variables.1929

You can think of them like 2 separate fishbowls and each has a different probability distribution of winnings.1934

Each of them have these two distributions and I want to know if I take 1 from here and 1 from here what is the expected value over time of that sum or the difference?1952

It also works for difference right.1968

This one is pretty straightforward.1972

If you have mu(x) + y because I am adding together 1 from x and 1 from y1974

the expected value of this new sum is going to be the expected value of x + expected value of y.1982

And if I wanted to do x − y, combining them in that way, it is, as you could guess, mu sub x − mu sub y.1991

It is the difference of those expected values, and the way you can think about it is this: when you pick out x, the expected value of that x is mu sub x.2008

That is why they call it expected value.2016

Instead of putting in just x, we could plug in the expected value of x, and here, instead of putting in y,2018

we could plug in the expected value of y, and that is our highest-probability estimate of what x + y is.2032

The same goes when subtracting x and y, but variance is a little bit different, because variance does not necessarily work in that parallel way.2041

Here we have sigma sub (x + y) squared, the variance of x + y; that is pretty straightforward.2053

It is the variance of one plus the variance of the other; straightforward.2066

This is sort of the unexpected part, but it makes sense.2072

You might think that when you have x − y, you would want to subtract the variances,2079

that you would be reducing variance by doing this transformation.2085

But the variance will actually be the same as up here, because no matter what, you are drawing from two different pools,2090

two different distributions, two different sources of randomness.2097

That spread is only going to increase.2105

These two are the same; but all of this only works if x and y are independent events.2107

If they depend on each other in any way, then you cannot count on this.2126
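The claim that the spreads add for both the sum and the difference of independent variables can be checked with a quick simulation. The two "fishbowl" distributions below are hypothetical, not the lecture's numbers:

```python
import random

random.seed(0)
N = 100_000

# Two independent fishbowls (hypothetical winnings for illustration)
x = [random.choice([0, 1, 5]) for _ in range(N)]
y = [random.choice([0, 2, 4]) for _ in range(N)]

def var(data):
    """Population variance: mean squared deviation from the mean."""
    m = sum(data) / len(data)
    return sum((d - m) ** 2 for d in data) / len(data)

v_sum = var([a + b for a, b in zip(x, y)])    # Var(X + Y)
v_diff = var([a - b for a, b in zip(x, y)])   # Var(X - Y)

# Both should be close to Var(X) + Var(Y): the spreads add either way
print(round(v_sum, 1), round(v_diff, 1), round(var(x) + var(y), 1))
```

Up to sampling noise, the variance of the sum and the variance of the difference both land at Var(X) + Var(Y).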

Let us get into some examples.2133

Here is example 1: at the state fair you can play Fish for Cash, a game of chance that costs one dollar to play.2140

You fish out a card from a giant fishbowl, and printed on it is the dollar amount that you have won.2146

They are having a special where, when you draw a ticket, they will triple the value printed on it.2152

What are the expected value and variance of the promotional game?2157

If you download the example and you go to example 1, I put the original game on here with all the winnings, including 0 and the probability of those winnings.2161

Here we want to sum these up to make sure they add up to 1, so that we know our probability distribution is complete.2176

Let us talk about just plain old regular expected value of the old original game.2185

I just multiplied x by the p(x).2193

It is the contribution of each value of this random variable and then expected value in total is just that sum.2199

This we have done before.2215

The reason I want to do this is I want to show you how to calculate the variance.2217

Here I have standard deviation.2224

Whatever we have here we have to square root it.2227

Let me just put a note to myself here, because I am going to need to square root it.2230

Let us think about how to calculate variance here.2234

This is the expected value, the mean but what is the spread around that mean.2239

What we are going to need over here is the winnings minus the mean (the expected value), squared.2245

The squared deviations away from the expected value.2262

To each of those I am going to multiply something, so let me put this in parentheses.2266

I am going to multiply the probability of that particular x.2282

Here is the squared deviation, and the probability tells us how much each of these deviations should count.2288

I will just copy and paste that all the way down.2305

What we do here is we need to square root the sum of all of these.2311

That is the spread around the mean.2316
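The spreadsheet steps just described can be mirrored in a few lines of Python: multiply each value by its probability and sum to get the expected value, then sum the probability-weighted squared deviations and square-root at the end. The winnings and probabilities here are hypothetical stand-ins, not the actual card values from the lecture:

```python
import math

# Hypothetical winnings table (not the lecture's actual card values)
winnings = [0, 1, 2, 10, 900]
probs = [0.90, 0.05, 0.03, 0.019, 0.001]
assert abs(sum(probs) - 1) < 1e-9  # check the distribution is complete

mu = sum(x * p for x, p in zip(winnings, probs))               # expected value
var = sum((x - mu) ** 2 * p for x, p in zip(winnings, probs))  # weighted squared deviations
sd = math.sqrt(var)                                            # square root at the end
print(round(mu, 2), round(sd, 2))
```

As in the lecture's game, one rare large prize pulls the standard deviation far above the mean.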

Let us think about this: our expected value is $.60, and the spread around that is around $4, because it is the standard deviation.2334

That means if we go one standard deviation to the negative side, we get negative numbers.2346

You cannot pull a card that is worth negative $3.2351

That does not make any sense.2360

What we are seeing is that this number is large because it is being pulled up by this one big value,2362

that 900 number.2370

This is saying it is probably skewed to the right, towards the larger numbers.2373

There is a long tail there.2381

Let us get on to our problem.2383

Now, this is the problem where we are talking about the new winnings distribution.2386

Here is the old distribution of winnings.2390

Actually, the probabilities do not change.2394

Your chance of drawing a 0 card remains the same, but the value of those winnings has changed, since it is now 0 × 3, which is still 0, unfortunately.2397

For all these other ones, the values triple; you can now win up to $2700 in this game.2412

Is this game a good deal?2419

Well, let us see.2421

Let us find the expected value.2423

Here we are no longer using the old x; we are using the new x, and this new x is 3x.2425

Our d is 3.2433

The new winnings × the probability of those new winnings.2437

I am just going to copy and paste that all the way down and then sum that up and get a $1.80.2443

This new game is a better deal because, over time, for every dollar you spend you get $1.80 back.2457

Not on any particular draw of a card will you get $1.80; but if you play this game a hundred times and spend $100, on average you will get an extra $80 back.2465

Let us also see if we can find that using our shortcut.2480

Before, it was mu sub dx, and it said: if you want to transform all your values by multiplying by d, all you do is multiply your old expected value by d.2488

Is that the case here? Yes, it is.2508

I could do this old expected value times 3 and get that same value.2510

You could use that shortcut.2518

Now let us calculate standard deviation.2522

We know that in order to calculate the new variance, it is d² times the old variance; but here we have standard deviation, so it is just d times the old standard deviation.2525

Let us see if that works.2540

d × stdev: we should get $12.12, and the standard deviation has gotten bigger because the spread got bigger.2540

Now you can win all the way up to $2700 or 0.2550

We increase that spread.2556

Great, but we can also check and see if this works sort of conceptually.2558

Here we are going to take (the value of the new winnings − the expected value)².2565

We want to lock this cell down, because our expected value cell should not change; then we take that squared deviation and multiply it by the probability.2580

We could just copy and paste this all the way down.2600

Here remember we need to find a standard deviation, rather than variance.2606

We need to square root the sum of all of these.2611

You can think of these as multiplying by p, but I have already done the division for you.2617

And we get about $12.12.2627

If we looked at this with more decimal points, we would see the exact same number.2633

It works: our shortcut works, and the regular old formula for variance also works.2644

Example 2: suppose you buy three tickets for Fish for Cash; what is the expected value of your total winnings?2652

What about the standard deviation? And which standard deviation is higher: playing the game three times, or tripling the value of one play?2666

We know that this is the situation where we get three independent events and then we add them together.2678

That is like estimating each draw: the first x, we estimate to be the expected value.2687

The second x we estimate that to be the expected value.2693

The third x, we expect that to be the expected value.2697

That is going to be the expected value, the mu (x1 + x2 + x3).2700

I'm just going to shorten that to be mu(sum), whatever the sum is.2719

It is going to be n times the old expected value.2723

Previously our expected value was $.60 and our n is 3, this is $1.80 and that is the same as before.2731

We have established that already these two situations have very similar expected value.2745

What about standard deviation.2751

Well, the standard deviation of the sum is going to be the square root of n, times whatever the standard deviation was before.2754

That is going to be the square root of 3 times our old standard deviation; let us look that up: it was $4.04.2774

Let me just use a line of my Excel here to calculate that: the square root of 3 (feel free to use a calculator) times 4.04, and that is going to be about 6.98.2790

We saw in the previous example that, tripling the value, the standard deviation of 3x was $12.12.2818

Which standard deviation is higher?2837

This one or this one?2840

Well, it is certainly this one, the tripled game.2842

Why is that? We expanded the values of x: now you can win up to $2700 in one play, and the chance of that has not changed.2844

Whereas here, if you pick out three cards, there is a very slim chance you get three $900 cards.2861

That probability is way out there; it is not likely in this case.2871

It is more likely in that situation, so it makes sense that there the values get stretched out more.2878

Here we have not stretched out the values as much, but notably we have still increased the standard deviation from the original game.2885
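The two shortcuts compared in this example can be checked side by side; 0.60 and 4.04 below are the single-play expected value and (rounded) standard deviation from the lecture:

```python
import math

mu_one = 0.60   # expected value of one play (from the lecture)
sd_one = 4.04   # standard deviation of one play (rounded, from the lecture)

# Tripling one card's value: SD scales by d = 3
sd_tripled = 3 * sd_one                 # 12.12

# Playing three independent times: SD scales by sqrt(n) = sqrt(3)
sd_three_plays = math.sqrt(3) * sd_one  # about 7, close to the lecture's 6.98

# Either way the expected value is the same: n (or d) times 0.60
mu_either = 3 * mu_one                  # 1.80

print(round(sd_tripled, 2), round(sd_three_plays, 2), round(mu_either, 2))
```

The small gap between this sqrt(3) × 4.04 and the lecture's 6.98 comes from rounding the single-play standard deviation before multiplying.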

Example 3: these are two booths, owned by Amos and Bobbie, with games similar to the Fish for Cash game.2897

Amos's booth has an expected value of $.50 with a standard deviation of $.25.2906

Bobbie's booth has an expected value of $.75 and a standard deviation of $.32, not counting the cost of the ticket, which I presume is a dollar.2913

What are your total expected winnings and what is the standard deviation?2923

I am going to say these are your total expected winnings if you play each game once, so you have to add together those two.2928

Let me make sure I have the Excel handy for later.2945

Let us think about this first.2956

What we have is Amos's game and Bobbie's game, and we draw winnings from both of them and add them together.2959

We have A + B, and we want to know what the expected value of A + B is.2970

We know the mu (A+ B) = mu(A) + mu(B).2977

We have mu(A) and mu(B).2985

The expected value of Amos's booth is $.50 and the expected value of Bobbie's booth is $.75; we add those together, and the new mu is $1.25.2987

That is good news only if you discount the fact that you spent $2 to win $1.25.3004

Not good for you.3015

It is good for Amos and Bobbie.3017

What is the standard deviation of this?3019

We actually do not directly know a formula for the standard deviation.3022

We could actually derive it from what we do know.3031

We do know variance.3033

We know variance: if we want the variance of A and B3035

added together, then all we do is add the variance of A to the variance of B.3046

I keep writing A instead of sigma.3052

It is very similar.3060

This is our formula for variance, but it is asking for standard deviation.3062

We can just square root both sides, and we know some of these values already.3070

We do not know the variances directly;3077

we only know the standard deviations, but we know how to get variance from them.3084

I will have to take Amos's standard deviation3087

and, in order to find the variance, square it.3096

I do not need these parentheses anymore.3104

I will just square that first, and add that to Bobbie's standard deviation squared, in order to get the variance.3111

The reason we have to do this first is that the square root of this sum is not going to be .25 +.32.3124

There is an order of operations.3134

We have to do the squares first before adding them together; if you do not, that is going to change the value.3137

Let us see what we get.3146

I am just going to use one of these rows to help me out here.3149

Just calculate something.3154

Here I am going to write: square root of (.25² + .32²), and the nice thing is that Excel knows order of operations.3156

Excel knows that it needs to do the exponents first, then add them together, then take the square root of that whole sum.3176

We get .406; that is our new standard deviation.3195

It is larger than either old one, and that makes sense: we are increasing variance because we are adding things together.3207
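The whole Example 3 calculation fits in a few lines, using the numbers from the lecture:

```python
import math

mu_a, sd_a = 0.50, 0.25   # Amos's booth: expected value and standard deviation
mu_b, sd_b = 0.75, 0.32   # Bobbie's booth

mu_total = mu_a + mu_b                     # expected values simply add
sd_total = math.sqrt(sd_a**2 + sd_b**2)    # square first, add, then square-root
print(mu_total, round(sd_total, 3))        # 1.25 0.406
```

Note that sd_a + sd_b = 0.57 would be wrong: standard deviations do not add directly, only variances do, which is why the squares must come before the sum.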
