In Educator's General Statistics course, Dr. Ji Son covers material applicable to both high school and college statistics courses. She teaches through a combination of equations, diagrams, and relevant examples. Dr. Son also uses Excel to break down the difficult concepts of statistics into understandable and memorable ideas. Topics include everything from Central Tendency and Normal Distribution to Correlation, Probability, and Hypothesis Testing. Dr. Son has a Ph.D. in Psychology and Cognitive Science and is a published researcher on how people learn and apply abstract concepts. Excel files and data used in lessons are downloadable so students can follow along.

I. Introduction
  Descriptive Statistics vs. Inferential Statistics 25:31
   Intro 0:00 
   Roadmap 0:10 
    Roadmap 0:11 
   Statistics 0:35 
    Statistics 0:36 
   Let's Think About High School Science 1:12 
    Measurement and Find Patterns (Mathematical Formula) 1:13 
   Statistics = Math of Distributions 4:58 
    Distributions 4:59 
    Problematic… but also GREAT 5:58 
   Statistics 7:33 
    How is It Different from Other Specializations in Mathematics? 7:34 
    Statistics is Fundamental in Natural and Social Sciences 7:53 
   Two Skills of Statistics 8:20 
    Description (Exploration) 8:21 
    Inference 9:13 
   Descriptive Statistics vs. Inferential Statistics: Apply to Distributions 9:58 
    Descriptive Statistics 9:59 
    Inferential Statistics 11:05 
   Populations vs. Samples 12:19 
    Populations vs. Samples: Is it the Truth? 12:20 
    Populations vs. Samples: Pros & Cons 13:36 
    Populations vs. Samples: Descriptive Values 16:12 
   Putting Together Descriptive/Inferential Stats & Populations/Samples 17:10 
    Putting Together Descriptive/Inferential Stats & Populations/Samples 17:11 
   Example 1: Descriptive Statistics vs. Inferential Statistics 19:09 
   Example 2: Descriptive Statistics vs. Inferential Statistics 20:47 
   Example 3: Sample, Parameter, Population, and Statistic 21:40 
   Example 4: Sample, Parameter, Population, and Statistic 23:28 
II. About Samples: Cases, Variables, Measurements
  About Samples: Cases, Variables, Measurements 32:14
   Intro 0:00 
   Data 0:09 
    Data, Cases, Variables, and Values 0:10 
    Rows, Columns, and Cells 2:03 
    Example: Aircrafts 3:52 
   How Do We Get Data? 5:38 
    Research: Question and Hypothesis 5:39 
    Research Design 7:11 
    Measurement 7:29 
    Research Analysis 8:33 
    Research Conclusion 9:30 
   Types of Variables 10:03 
    Discrete Variables 10:04 
    Continuous Variables 12:07 
   Types of Measurements 14:17 
    Types of Measurements 14:18 
   Types of Measurements (Scales) 17:22 
    Nominal 17:23 
    Ordinal 19:11 
    Interval 21:33 
    Ratio 24:24 
   Example 1: Cases, Variables, Measurements 25:20 
   Example 2: Which Scale of Measurement is Used? 26:55 
   Example 3: What Kind of a Scale of Measurement is This? 27:26 
   Example 4: Discrete vs. Continuous Variables 30:31
III. Visualizing Distributions
  Introduction to Excel 8:09
   Intro 0:00 
   Before Visualizing Distribution 0:10 
    Excel 0:11 
   Excel: Organization 0:45 
    Workbook 0:46 
    Column x Rows 1:50 
    Tools: Menu Bar, Standard Toolbar, and Formula Bar 3:00 
   Excel + Data 6:07 
    Excel and Data 6:08
  Frequency Distributions in Excel 39:10
   Intro 0:00 
   Roadmap 0:08 
    Data in Excel and Frequency Distributions 0:09 
   Raw Data to Frequency Tables 0:42 
    Raw Data to Frequency Tables 0:43 
    Frequency Tables: Using Formulas and Pivot Tables 1:28 
   Example 1: Number of Births 7:17 
   Example 2: Age Distribution 20:41 
   Example 3: Height Distribution 27:45 
   Example 4: Height Distribution of Males 32:19 
  Frequency Distributions and Features 25:29
   Intro 0:00 
   Roadmap 0:10 
    Data in Excel, Frequency Distributions, and Features of Frequency Distributions 0:11 
   Example #1 1:35 
    Uniform 1:36 
   Example #2 2:58 
    Unimodal, Skewed Right, and Asymmetric 2:59 
   Example #3 6:29 
    Bimodal 6:30 
   Example #4a 8:29 
    Symmetric, Unimodal, and Normal 8:30 
    Point of Inflection and Standard Deviation 11:13 
   Example #4b 12:43 
    Normal Distribution 12:44 
   Summary 13:56 
    Uniform, Skewed, Bimodal, and Normal 13:57 
   Sketch Problem 1: Driver's License 17:34 
   Sketch Problem 2: Life Expectancy 20:01 
   Sketch Problem 3: Telephone Numbers 22:01 
   Sketch Problem 4: Length of Time Used to Complete a Final Exam 23:43 
  Dotplots and Histograms in Excel 42:42
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Previously 1:02 
    Data, Frequency Table, and Visualization 1:03
   Dotplots 1:22 
    Dotplots Excel Example 1:23 
   Dotplots: Pros and Cons 7:22 
    Pros and Cons of Dotplots 7:23 
    Dotplots Excel Example Cont. 9:07 
   Histograms 12:47 
    Histograms Overview 12:48 
    Example of Histograms 15:29 
   Histograms: Pros and Cons 31:39 
    Pros 31:40 
    Cons 32:31 
   Frequency vs. Relative Frequency 32:53 
    Frequency 32:54 
    Relative Frequency 33:36 
   Example 1: Dotplots vs. Histograms 34:36 
   Example 2: Age of Pennies Dotplot 36:21 
   Example 3: Histogram of Mammal Speeds 38:27 
   Example 4: Histogram of Life Expectancy 40:30 
  Stemplots 12:23
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   What Sets Stemplots Apart? 0:46 
    Data Sets, Dotplots, Histograms, and Stemplots 0:47 
   Example 1: What Do Stemplots Look Like? 1:58 
   Example 2: Back-to-Back Stemplots 5:00 
   Example 3: Quiz Grade Stemplot 7:46 
   Example 4: Quiz Grade & Afterschool Tutoring Stemplot 9:56 
  Bar Graphs 22:49
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:08 
   Review of Frequency Distributions 0:44 
    Y-axis and X-axis 0:45 
    Types of Frequency Visualizations Covered so Far 2:16 
    Introduction to Bar Graphs 4:07 
   Example 1: Bar Graph 5:32 
    Example 1: Bar Graph 5:33 
   Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs? 11:07 
    Do Shapes, Center, and Spread of Distributions Apply to Bar Graphs? 11:08 
   Example 2: Create a Frequency Visualization for Gender 14:02 
   Example 3: Cases, Variables, and Frequency Visualization 16:34 
   Example 4: What Kind of Graphs are Shown Below? 19:29 
IV. Summarizing Distributions
  Central Tendency: Mean, Median, Mode 38:50
   Intro 0:00 
   Roadmap 0:07 
    Roadmap 0:08 
   Central Tendency 1 0:56 
    Way to Summarize a Distribution of Scores 0:57 
    Mode 1:32 
    Median 2:02 
    Mean 2:36 
   Central Tendency 2 3:47 
    Mode 3:48 
    Median 4:20 
    Mean 5:25 
   Summation Symbol 6:11 
    Summation Symbol 6:12 
   Population vs. Sample 10:46 
    Population vs. Sample 10:47 
   Excel Examples 15:08 
    Finding Mode, Median, and Mean in Excel 15:09 
   Median vs. Mean 21:45 
    Effect of Outliers 21:46 
    Relationship Between Parameter and Statistic 22:44 
    Type of Measurements 24:00 
    Which Distributions to Use With 24:55 
   Example 1: Mean 25:30 
   Example 2: Using Summation Symbol 29:50 
   Example 3: Average Calorie Count 32:50 
   Example 4: Creating an Example Set 35:46 
  Variability 42:40
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Variability (or Spread) 0:45 
    Variability (or Spread) 0:46 
   Things to Think About 5:45 
    Things to Think About 5:46 
   Range, Quartiles and Interquartile Range 6:37 
    Range 6:38 
    Interquartile Range 8:42 
   Interquartile Range Example 10:58 
    Interquartile Range Example 10:59 
   Variance and Standard Deviation 12:27 
    Deviations 12:28 
    Sum of Squares 14:35 
    Variance 16:55 
    Standard Deviation 17:44 
   Sum of Squares (SS) 18:34 
    Sum of Squares (SS) 18:35 
   Population vs. Sample SD 22:00 
    Population vs. Sample SD 22:01 
   Population vs. Sample 23:20 
    Mean 23:21 
    SD 23:51 
   Example 1: Find the Mean and Standard Deviation of the Variable Friends in the Excel File 27:21 
   Example 2: Find the Mean and Standard Deviation of the Tagged Photos in the Excel File 35:25 
   Example 3: Sum of Squares 38:58 
   Example 4: Standard Deviation 41:48 
  Five Number Summary & Boxplots 57:15
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Summarizing Distributions 0:37 
    Shape, Center, and Spread 0:38 
    5 Number Summary 1:14 
   Boxplot: Visualizing 5 Number Summary 3:37 
    Boxplot: Visualizing 5 Number Summary 3:38 
   Boxplots on Excel 9:01 
    Using 'Stocks' and Using Stacked Columns 9:02 
    Boxplots on Excel Example 10:14 
   When are Boxplots Useful? 32:14 
    Pros 32:15 
    Cons 32:59 
   How to Determine Outlier Status 33:24 
    Rule of Thumb: Upper Limit 33:25 
    Rule of Thumb: Lower Limit 34:16 
    Signal Outliers in an Excel Data File Using Conditional Formatting 34:52 
   Modified Boxplot 48:38 
    Modified Boxplot 48:39 
   Example 1: Percentage Values & Lower and Upper Whisker 49:10 
   Example 2: Boxplot 50:10 
   Example 3: Estimating IQR From Boxplot 53:46 
   Example 4: Boxplot and Missing Whisker 54:35 
  Shape: Calculating Skewness & Kurtosis 41:51
   Intro 0:00 
   Roadmap 0:16 
    Roadmap 0:17 
   Skewness Concept 1:09 
    Skewness Concept 1:10 
   Calculating Skewness 3:26 
    Calculating Skewness 3:27 
   Interpreting Skewness 7:36 
    Interpreting Skewness 7:37 
    Excel Example 8:49 
   Kurtosis Concept 20:29 
    Kurtosis Concept 20:30 
   Calculating Kurtosis 24:17 
    Calculating Kurtosis 24:18 
   Interpreting Kurtosis 29:01 
    Leptokurtic 29:35 
    Mesokurtic 30:10 
    Platykurtic 31:06 
    Excel Example 32:04 
   Example 1: Shape of Distribution 38:28 
   Example 2: Shape of Distribution 39:29 
   Example 3: Shape of Distribution 40:14 
   Example 4: Kurtosis 41:10 
  Normal Distribution 34:33
   Intro 0:00 
   Roadmap 0:13 
    Roadmap 0:14 
   What is a Normal Distribution 0:44 
    The Normal Distribution As a Theoretical Model 0:45 
   Possible Range of Probabilities 3:05 
    Possible Range of Probabilities 3:06 
   What is a Normal Distribution 5:07 
    Can Be Described By 5:08 
    Properties 5:49 
   'Same' Shape: Illusion of Different Shape! 7:35 
    'Same' Shape: Illusion of Different Shape! 7:36 
   Types of Problems 13:45 
    Example: Distribution of SAT Scores 13:46 
   Shape Analogy 19:48 
    Shape Analogy 19:49 
   Example 1: The Standard Normal Distribution and Z-Scores 22:34 
   Example 2: The Standard Normal Distribution and Z-Scores 25:54 
   Example 3: Sketching a Normal Distribution 28:55
   Example 4: Sketching a Normal Distribution 32:32
  Standard Normal Distributions & Z-Scores 41:44
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   A Family of Distributions 0:28 
    Infinite Set of Distributions 0:29 
    Transforming Normal Distributions to 'Standard' Normal Distribution 1:04 
   Normal Distribution vs. Standard Normal Distribution 2:58 
    Normal Distribution vs. Standard Normal Distribution 2:59 
   Z-Score, Raw Score, Mean, & SD 4:08 
    Z-Score, Raw Score, Mean, & SD 4:09 
   Weird Z-Scores 9:40 
    Weird Z-Scores 9:41 
   Excel 16:45 
    For Normal Distributions 16:46 
    For Standard Normal Distributions 19:11 
    Excel Example 20:24 
   Types of Problems 25:18 
    Percentage Problem: P(x) 25:19 
    Raw Score and Z-Score Problems 26:28 
    Standard Deviation Problems 27:01 
   Shape Analogy 27:44 
    Shape Analogy 27:45 
   Example 1: Deaths Due to Heart Disease vs. Deaths Due to Cancer 28:24 
   Example 2: Heights of Male College Students 33:15 
   Example 3: Mean and Standard Deviation 37:14 
   Example 4: Finding Percentage of Values in a Standard Normal Distribution 37:49 
  Normal Distribution: PDF vs. CDF 55:44
   Intro 0:00 
   Roadmap 0:15 
    Roadmap 0:16 
   Frequency vs. Cumulative Frequency 0:56 
    Frequency vs. Cumulative Frequency 0:57 
   Frequency vs. Cumulative Frequency 4:32 
    Frequency vs. Cumulative Frequency Cont. 4:33 
   Calculus in Brief 6:21 
    Derivative-Integral Continuum 6:22 
   PDF 10:08 
    PDF for Standard Normal Distribution 10:09 
    PDF for Normal Distribution 14:32 
   Integral of PDF = CDF 21:27 
    Integral of PDF = CDF 21:28 
   Example 1: Cumulative Frequency Graph 23:31 
   Example 2: Mean, Standard Deviation, and Probability 24:43 
   Example 3: Mean and Standard Deviation 35:50 
   Example 4: Age of Cars 49:32 
V. Linear Regression
  Scatterplots 47:19
   Intro 0:00 
   Roadmap 0:04 
    Roadmap 0:05 
   Previous Visualizations 0:30 
    Frequency Distributions 0:31 
   Compare & Contrast 2:26 
    Frequency Distributions Vs. Scatterplots 2:27 
   Summary Values 4:53 
    Shape 4:54 
    Center & Trend 6:41 
    Spread & Strength 8:22 
    Univariate & Bivariate 10:25 
   Example Scatterplot 10:48 
    Shape, Trend, and Strength 10:49 
   Positive and Negative Association 14:05 
    Positive and Negative Association 14:06 
   Linearity, Strength, and Consistency 18:30 
    Linearity 18:31 
    Strength 19:14 
    Consistency 20:40 
   Summarizing a Scatterplot 22:58 
    Summarizing a Scatterplot 22:59 
   Example 1: Gapminder.org, Income x Life Expectancy 26:32 
   Example 2: Gapminder.org, Income x Infant Mortality 36:12 
   Example 3: Trend and Strength of Variables 40:14 
   Example 4: Trend, Strength and Shape for Scatterplots 43:27 
  Regression 32:02
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Linear Equations 0:34 
    Linear Equations: y = mx + b 0:35 
   Rough Line 5:16 
    Rough Line 5:17 
   Regression - A 'Center' Line 7:41 
    Reasons for Summarizing with a Regression Line 7:42 
    Predictor and Response Variable 10:04 
   Goal of Regression 12:29 
    Goal of Regression 12:30 
   Prediction 14:50 
    Example: Servings of Milk Per Year Shown By Age 14:51
    Interpolation 17:06
    Extrapolation 17:58 
   Error in Prediction 20:34 
    Prediction Error 20:35 
    Residual 21:40 
   Example 1: Residual 23:34 
   Example 2: Large and Negative Residual 26:30 
   Example 3: Positive Residual 28:13 
   Example 4: Interpret Regression Line & Extrapolate 29:40 
  Least Squares Regression 56:36
   Intro 0:00 
   Roadmap 0:13 
    Roadmap 0:14 
   Best Fit 0:47 
    Best Fit 0:48 
   Sum of Squared Errors (SSE) 1:50 
    Sum of Squared Errors (SSE) 1:51 
   Why Squared? 3:38 
    Why Squared? 3:39 
   Quantitative Properties of Regression Line 4:51 
    Quantitative Properties of Regression Line 4:52 
   So How do we Find Such a Line? 6:49 
    SSEs of Different Line Equations & Lowest SSE 6:50 
    Carl Gauss' Method 8:01 
   How Do We Find Slope (b1) 11:00 
    How Do We Find Slope (b1) 11:01 
   How Do We Find Intercept 15:11
    How Do We Find Intercept 15:12
   Example 1: Which of These Equations Fit the Above Data Best? 17:18 
   Example 2: Find the Regression Line for These Data Points and Interpret It 26:31 
   Example 3: Summarize the Scatterplot and Find the Regression Line. 34:31 
   Example 4: Examine the Mean of Residuals 43:52 
  Correlation 43:58
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Summarizing a Scatterplot Quantitatively 0:47 
    Shape 0:48 
    Trend 1:11 
    Strength: Correlation (r) 1:45
   Correlation Coefficient (r) 2:30
    Correlation Coefficient (r) 2:31
   Trees vs. Forest 11:59 
    Trees vs. Forest 12:00 
   Calculating r 15:07 
    Average Product of z-scores for x and y 15:08 
   Relationship between Correlation and Slope 21:10 
    Relationship between Correlation and Slope 21:11 
   Example 1: Find the Correlation between Grams of Fat and Cost 24:11 
   Example 2: Relationship between r and b1 30:24 
   Example 3: Find the Regression Line 33:35 
   Example 4: Find the Correlation Coefficient for this Set of Data 37:37 
  Correlation: r vs. r-squared 52:52
   Intro 0:00 
   Roadmap 0:07 
    Roadmap 0:08 
   R-squared 0:44 
    What is the Meaning of It? Why Squared? 0:45 
   Parsing Sum of Squares (Parsing Variability) 2:25
    SST = SSR + SSE 2:26 
   What is SST and SSE? 7:46 
    What is SST and SSE? 7:47 
   r-squared 18:33 
    Coefficient of Determination 18:34 
   If the Correlation is Strong… 20:25 
    If the Correlation is Strong… 20:26 
   If the Correlation is Weak… 22:36 
    If the Correlation is Weak… 22:37 
   Example 1: Find r-squared for this Set of Data 23:56 
   Example 2: What Does it Mean that the Simple Linear Regression is a 'Model' of Variance? 33:54 
   Example 3: Why Does r-squared Only Range from 0 to 1? 37:29
   Example 4: Find the r-squared for This Set of Data 39:55 
  Transformations of Data 27:08
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Why Transform? 0:26 
    Why Transform? 0:27 
   Shape-preserving vs. Shape-changing Transformations 5:14 
    Shape-preserving = Linear Transformations 5:15 
    Shape-changing Transformations = Non-linear Transformations 6:20 
   Common Shape-Preserving Transformations 7:08 
    Common Shape-Preserving Transformations 7:09 
   Common Shape-Changing Transformations 8:59 
    Powers 9:00 
    Logarithms 9:39 
   Change Just One Variable? Both? 10:38 
    Log-log Transformations 10:39 
    Log Transformations 14:38 
   Example 1: Create, Graph, and Transform the Data Set 15:19 
   Example 2: Create, Graph, and Transform the Data Set 20:08 
   Example 3: What Kind of Model would You Choose for this Data? 22:44 
   Example 4: Transformation of Data 25:46 
VI. Collecting Data in an Experiment
  Sampling & Bias 54:44
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Descriptive vs. Inferential Statistics 1:04 
    Descriptive Statistics: Data Exploration 1:05 
    Example 2:03 
   To tackle Generalization… 4:31 
    Generalization 4:32 
    Sampling 6:06 
    'Good' Sample 6:40 
   Defining Samples and Populations 8:55 
    Population 8:56 
    Sample 11:16 
   Why Use Sampling? 13:09 
    Why Use Sampling? 13:10 
   Goal of Sampling: Avoiding Bias 15:04 
    What is Bias? 15:05 
    Where does Bias Come from: Sampling Bias 17:53 
    Where does Bias Come from: Response Bias 18:27 
   Sampling Bias: Bias from 'Bad' Sampling Methods 19:34
    Size Bias 19:35 
    Voluntary Response Bias 21:13 
    Convenience Sample 22:22 
    Judgment Sample 23:58 
    Inadequate Sample Frame 25:40 
   Response Bias: Bias from 'Bad' Data Collection Methods 28:00 
    Nonresponse Bias 29:31 
    Questionnaire Bias 31:10 
    Incorrect Response or Measurement Bias 37:32 
   Example 1: What Kind of Biases? 40:29 
   Example 2: What Biases Might Arise? 44:46 
   Example 3: What Kind of Biases? 48:34 
   Example 4: What Kind of Biases? 51:43 
  Sampling Methods 14:25
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Biased vs. Unbiased Sampling Methods 0:32 
    Biased Sampling 0:33 
    Unbiased Sampling 1:13 
   Probability Sampling Methods 2:31 
    Simple Random 2:54 
    Stratified Random Sampling 4:06 
    Cluster Sampling 5:24 
    Two-staged Sampling 6:22 
    Systematic Sampling 7:25 
   Example 1: Which Type(s) of Sampling was this? 8:33 
   Example 2: Describe How to Take a Two-Stage Sample from this Book 10:16 
   Example 3: Sampling Methods 11:58 
   Example 4: Cluster Sample Plan 12:48 
  Research Design 53:54
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Descriptive vs. Inferential Statistics 0:51 
    Descriptive Statistics: Data Exploration 0:52 
    Inferential Statistics 1:02 
   Variables and Relationships 1:44 
    Variables 1:45 
    Relationships 2:49 
   Not Every Type of Study is an Experiment… 4:16 
    Category I - Descriptive Study 4:54 
    Category II - Correlational Study 5:50 
    Category III - Experimental, Quasi-experimental, Non-experimental 6:33 
   Category III 7:42 
    Experimental, Quasi-experimental, and Non-experimental 7:43 
   Why CAN'T the Other Strategies Determine Causation? 10:18 
    Third-variable Problem 10:19 
    Directionality Problem 15:49 
   What Makes Experiments Special? 17:54 
    Manipulation 17:55 
    Control (and Comparison) 21:58 
   Methods of Control 26:38 
    Holding Constant 26:39 
    Matching 29:11 
    Random Assignment 31:48 
   Experiment Terminology 34:09 
    'true' Experiment vs. Study 34:10 
    Independent Variable (IV) 35:16 
    Dependent Variable (DV) 35:45 
    Factors 36:07 
    Treatment Conditions 36:23 
    Levels 37:43 
    Confounds or Extraneous Variables 38:04 
   Blind 38:38 
    Blind Experiments 38:39 
    Double-blind Experiments 39:29 
   How Categories Relate to Statistics 41:35 
    Category I - Descriptive Study 41:36 
    Category II - Correlational Study 42:05 
    Category III - Experimental, Quasi-experimental, Non-experimental 42:43 
   Example 1: Research Design 43:50 
   Example 2: Research Design 47:37 
   Example 3: Research Design 50:12 
   Example 4: Research Design 52:00 
  Between and Within Treatment Variability 41:31
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Experimental Designs 0:51 
    Experimental Designs: Manipulation & Control 0:52 
   Two Types of Variability 2:09 
    Between Treatment Variability 2:10 
    Within Treatment Variability 3:31 
   Updated Goal of Experimental Design 5:47 
    Updated Goal of Experimental Design 5:48 
   Example: Drugs and Driving 6:56 
    Example: Drugs and Driving 6:57 
   Different Types of Random Assignment 11:27 
    All Experiments 11:28 
    Completely Random Design 12:02 
    Randomized Block Design 13:19 
   Randomized Block Design 15:48 
    Matched Pairs Design 15:49 
    Repeated Measures Design 19:47 
   Between-subject Variable vs. Within-subject Variable 22:43 
    Completely Randomized Design 22:44 
    Repeated Measures Design 25:03 
   Example 1: Design a Completely Random, Matched Pair, and Repeated Measures Experiment 26:16 
   Example 2: Block Design 31:41 
   Example 3: Completely Randomized Designs 35:11 
   Example 4: Completely Random, Matched Pairs, or Repeated Measures Experiments? 39:01 
VII. Review of Probability Axioms
  Sample Spaces 37:52
   Intro 0:00 
   Roadmap 0:07 
    Roadmap 0:08 
   Why is Probability Involved in Statistics 0:48 
    Probability 0:49 
    Can People Tell the Difference between Cheap and Gourmet Coffee? 2:08 
   Taste Test with Coffee Drinkers 3:37 
    If No One can Actually Taste the Difference 3:38 
    If Everyone can Actually Taste the Difference 5:36 
   Creating a Probability Model 7:09 
    Creating a Probability Model 7:10 
   D'Alembert vs. Necker 9:41 
    D'Alembert vs. Necker 9:42 
   Problem with D'Alembert's Model 13:29 
    Problem with D'Alembert's Model 13:30 
   Covering Entire Sample Space 15:08 
    Fundamental Principle of Counting 15:09 
   Where Do Probabilities Come From? 22:54 
    Observed Data, Symmetry, and Subjective Estimates 22:55 
   Checking whether Model Matches Real World 24:27 
    Law of Large Numbers 24:28 
   Example 1: Law of Large Numbers 27:46 
   Example 2: Possible Outcomes 30:43 
   Example 3: Brands of Coffee and Taste 33:25 
   Example 4: How Many Different Treatments are there? 35:33 
  Addition Rule for Disjoint Events 20:29
   Intro 0:00 
   Roadmap 0:08 
    Roadmap 0:09 
   Disjoint Events 0:41 
    Disjoint Events 0:42 
   Meaning of 'or' 2:39 
    In Regular Life 2:40 
    In Math/Statistics/Computer Science 3:10 
   Addition Rule for Disjoint Events 3:55
    If A and B are Disjoint: P (A and B) 3:56 
    If A and B are Disjoint: P (A or B) 5:15 
   General Addition Rule 5:41 
    General Addition Rule 5:42 
   Generalized Addition Rule 8:31 
    If A and B are not Disjoint: P (A or B) 8:32 
   Example 1: Which of These are Mutually Exclusive? 10:50 
   Example 2: What is the Probability that You will Have a Combination of One Heads and Two Tails? 12:57 
   Example 3: Engagement Party 15:17 
   Example 4: Home Owner's Insurance 18:30 
  Conditional Probability 57:19
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   'or' vs. 'and' vs. Conditional Probability 1:07 
    'or' vs. 'and' vs. Conditional Probability 1:08 
   'and' vs. Conditional Probability 5:57 
    P (M or L) 5:58 
    P (M and L) 8:41 
    P (M|L) 11:04 
    P (L|M) 12:24 
   Tree Diagram 15:02 
    Tree Diagram 15:03 
   Defining Conditional Probability 22:42 
    Defining Conditional Probability 22:43 
   Common Contexts for Conditional Probability 30:56 
    Medical Testing: Positive Predictive Value 30:57 
    Medical Testing: Sensitivity 33:03 
    Statistical Tests 34:27 
   Example 1: Drug and Disease 36:41 
   Example 2: Marbles and Conditional Probability 40:04 
   Example 3: Cards and Conditional Probability 45:59 
   Example 4: Votes and Conditional Probability 50:21 
  Independent Events 24:27
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Independent Events & Conditional Probability 0:26 
    Non-independent Events 0:27 
    Independent Events 2:00 
   Non-independent and Independent Events 3:08 
    Non-independent and Independent Events 3:09 
   Defining Independent Events 5:52 
    Defining Independent Events 5:53 
   Multiplication Rule 7:29 
    Previously… 7:30 
    But with Independent Events 8:53
   Example 1: Which of These Pairs of Events are Independent? 11:12 
   Example 2: Health Insurance and Probability 15:12 
   Example 3: Independent Events 17:42 
   Example 4: Independent Events 20:03 
VIII. Probability Distributions
  Introduction to Probability Distributions 56:45
   Intro 0:00 
   Roadmap 0:08 
    Roadmap 0:09 
   Sampling vs. Probability 0:57 
    Sampling 0:58 
    Missing 1:30 
    What is Missing? 3:06 
   Insight: Probability Distributions 5:26 
    Insight: Probability Distributions 5:27 
    What is a Probability Distribution? 7:29 
   From Sample Spaces to Probability Distributions 8:44 
    Sample Space 8:45 
    Probability Distribution of the Sum of Two Dice 11:16
   The Random Variable 17:43 
    The Random Variable 17:44 
   Expected Value 21:52 
    Expected Value 21:53 
   Example 1: Probability Distributions 28:45 
   Example 2: Probability Distributions 35:30 
   Example 3: Probability Distributions 43:37 
   Example 4: Probability Distributions 47:20 
  Expected Value & Variance of Probability Distributions 53:41
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Discrete vs. Continuous Random Variables 1:04 
    Discrete vs. Continuous Random Variables 1:05 
   Mean and Variance Review 4:44 
    Mean: Sample, Population, and Probability Distribution 4:45 
    Variance: Sample, Population, and Probability Distribution 9:12 
   Example Situation 14:10 
    Example Situation 14:11 
   Some Special Cases… 16:13 
    Some Special Cases… 16:14 
   Linear Transformations 19:22 
    Linear Transformations 19:23 
    What Happens to Mean and Variance of the Probability Distribution? 20:12 
   n Independent Values of X 25:38 
    n Independent Values of X 25:39 
   Compare These Two Situations 30:56 
    Compare These Two Situations 30:57 
   Two Random Variables, X and Y 32:02 
    Two Random Variables, X and Y 32:03 
   Example 1: Expected Value & Variance of Probability Distributions 35:35 
   Example 2: Expected Values & Standard Deviation 44:17 
   Example 3: Expected Winnings and Standard Deviation 48:18 
  Binomial Distribution 55:15
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Discrete Probability Distributions 1:42 
    Discrete Probability Distributions 1:43 
   Binomial Distribution 2:36 
    Binomial Distribution 2:37 
   Multiplicative Rule Review 6:54 
    Multiplicative Rule Review 6:55 
   How Many Outcomes with k 'Successes' 10:23 
    Adults and Bachelor's Degree: Manual List of Outcomes 10:24 
   P (X=k) 19:37 
    Putting Together # of Outcomes with the Multiplicative Rule 19:38 
   Expected Value and Standard Deviation in a Binomial Distribution 25:22 
    Expected Value and Standard Deviation in a Binomial Distribution 25:23 
   Example 1: Coin Toss 33:42 
   Example 2: College Graduates 38:03 
   Example 3: Types of Blood and Probability 45:39 
   Example 4: Expected Number and Standard Deviation 51:11 
IX. Sampling Distributions of Statistics
  Introduction to Sampling Distributions 48:17
   Intro 0:00 
   Roadmap 0:08 
    Roadmap 0:09 
   Probability Distributions vs. Sampling Distributions 0:55 
    Probability Distributions vs. Sampling Distributions 0:56 
   Same Logic 3:55 
    Logic of Probability Distribution 3:56 
    Example: Rolling Two Dice 6:56
   Simulating Samples 9:53 
    To Come Up with Probability Distributions 9:54 
    In Sampling Distributions 11:12 
   Connecting Sampling and Research Methods with Sampling Distributions 12:11 
    Connecting Sampling and Research Methods with Sampling Distributions 12:12 
   Simulating a Sampling Distribution 14:14 
    Experimental Design: Regular Sleep vs. Less Sleep 14:15 
   Logic of Sampling Distributions 23:08 
    Logic of Sampling Distributions 23:09 
   General Method of Simulating Sampling Distributions 25:38 
    General Method of Simulating Sampling Distributions 25:39 
   Questions that Remain 28:45 
    Questions that Remain 28:46 
   Example 1: Mean and Standard Error of Sampling Distribution 30:57 
   Example 2: What is the Best Way to Describe Sampling Distributions? 37:12 
   Example 3: Matching Sampling Distributions 38:21 
   Example 4: Mean and Standard Error of Sampling Distribution 41:51 
  Sampling Distribution of the Mean 1:08:48
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Special Case of General Method for Simulating a Sampling Distribution 1:53 
    Special Case of General Method for Simulating a Sampling Distribution 1:54 
    Computer Simulation 3:43 
   Using Simulations to See Principles behind Shape of SDoM 15:50 
    Using Simulations to See Principles behind Shape of SDoM 15:51 
    Conditions 17:38 
   Using Simulations to See Principles behind Center (Mean) of SDoM 20:15 
    Using Simulations to See Principles behind Center (Mean) of SDoM 20:16 
    Conditions: Does n Matter? 21:31 
    Conditions: Does Number of Simulations Matter? 24:37
   Using Simulations to See Principles behind Standard Deviation of SDoM 27:13 
    Using Simulations to See Principles behind Standard Deviation of SDoM 27:14 
    Conditions: Does n Matter? 34:45 
    Conditions: Does Number of Simulations Matter? 36:24
   Central Limit Theorem 37:13 
    SHAPE 38:08 
    CENTER 39:34 
    SPREAD 39:52 
   Comparing Population, Sample, and SDoM 43:10 
    Comparing Population, Sample, and SDoM 43:11 
   Answering the 'Questions that Remain' 48:24 
    What Happens When We Don't Know What the Population Looks Like? 48:25 
    Can We Have Sampling Distributions for Summary Statistics Other than the Mean? 49:42 
    How Do We Know whether a Sample is Sufficiently Unlikely? 53:36 
    Do We Always Have to Simulate a Large Number of Samples in Order to get a Sampling Distribution? 54:40 
   Example 1: Mean Batting Average 55:25 
   Example 2: Mean Sampling Distribution and Standard Error 59:07 
   Example 3: Sampling Distribution of the Mean 61:04 
  Sampling Distribution of Sample Proportions 54:37
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Intro to Sampling Distribution of Sample Proportions (SDoSP) 0:51 
    Categorical Data (Examples) 0:52 
    Wish to Estimate Proportion of Population from Sample… 2:00 
   Notation 3:34 
    Population Proportion and Sample Proportion Notations 3:35 
   What's the Difference? 9:19 
    SDoM vs. SDoSP: Type of Data 9:20 
    SDoM vs. SDoSP: Shape 11:24 
    SDoM vs. SDoSP: Center 12:30 
    SDoM vs. SDoSP: Spread 15:34 
   Binomial Distribution vs. Sampling Distribution of Sample Proportions 19:14 
    Binomial Distribution vs. SDoSP: Type of Data 19:17 
    Binomial Distribution vs. SDoSP: Shape 21:07 
    Binomial Distribution vs. SDoSP: Center 21:43 
    Binomial Distribution vs. SDoSP: Spread 24:08 
   Example 1: Sampling Distribution of Sample Proportions 26:07 
   Example 2: Sampling Distribution of Sample Proportions 37:58 
   Example 3: Sampling Distribution of Sample Proportions 44:42 
   Example 4: Sampling Distribution of Sample Proportions 45:57 
X. Inferential Statistics
  Introduction to Confidence Intervals 42:53
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Inferential Statistics 0:50 
    Inferential Statistics 0:51 
   Two Problems with This Picture… 3:20 
    Two Problems with This Picture… 3:21 
    Solution: Confidence Intervals (CI) 4:59 
    Solution: Hypothesis Testing (HT) 5:49
   Which Parameters are Known? 6:45 
    Which Parameters are Known? 6:46 
   Confidence Interval - Goal 7:56 
    When We Don't Know μ but Know σ 7:57
   When We Don't Know μ nor σ 18:27
    When We Don't Know μ nor σ 18:28
   Example 1: Confidence Intervals 26:18 
   Example 2: Confidence Intervals 29:46 
   Example 3: Confidence Intervals 32:18 
   Example 4: Confidence Intervals 38:31 
  t Distributions 1:02:06
   Intro 0:00 
   Roadmap 0:04 
    Roadmap 0:05 
   When to Use z vs. t? 1:07 
    When to Use z vs. t? 1:08 
   What is z and t? 3:02 
    z-score and t-score: Commonality 3:03
    z-score and t-score: Formulas 3:34 
    z-score and t-score: Difference 5:22 
   Why not z? (Why t?) 7:24 
    Why not z? (Why t?) 7:25 
   But Don't Worry! 15:13 
    Gossett and t-distributions 15:14 
   Rules of t Distributions 17:05 
    t-distributions are More Normal as n Gets Bigger 17:06 
    t-distributions are a Family of Distributions 18:55 
   Degrees of Freedom (df) 20:02 
    Degrees of Freedom (df) 20:03 
   t Family of Distributions 24:07 
    t Family of Distributions: df = 2, 4, and 60 24:08
    df = 60 29:16 
    df = 2 29:59 
   How to Find It? 31:01 
    'Student's t-distribution' or 't-distribution' 31:02 
    Excel Example 33:06 
   Example 1: Which Distribution Do You Use? Z or t? 45:26 
   Example 2: Friends on Facebook 47:41 
   Example 3: t Distributions 52:15 
   Example 4: t Distributions, Confidence Interval, and Mean 55:59
  Introduction to Hypothesis Testing 1:06:33
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   Issues to Overcome in Inferential Statistics 1:35 
    Issues to Overcome in Inferential Statistics 1:36 
    What Happens When We Don't Know What the Population Looks Like? 2:57 
    How Do We Know Whether a Sample is Sufficiently Unlikely? 3:43
   Hypothesizing a Population 6:44 
    Hypothesizing a Population 6:45 
    Null Hypothesis 8:07 
    Alternative Hypothesis 8:56 
   Hypotheses 11:58 
    Hypotheses 11:59 
   Errors in Hypothesis Testing 14:22 
    Errors in Hypothesis Testing 14:23 
   Steps of Hypothesis Testing 21:15 
    Steps of Hypothesis Testing 21:16 
   Single Sample HT (When Sigma Available) 26:08
    Example: Average Facebook Friends 26:09 
    Step 1 27:08
    Step 2 27:58 
    Step 3 28:17 
    Step 4 32:18 
   Single Sample HT (When Sigma Not Available) 36:33 
    Example: Average Facebook Friends 36:34 
    Step 1: Hypothesis Testing 36:58
    Step 2: Significance Level 37:25 
    Step 3: Decision Stage 37:40 
    Step 4: Sample 41:36 
   Sigma and p-value 45:04 
    Sigma and p-value 45:05 
    One-tailed vs. Two-tailed Hypotheses 45:51
   Example 1: Hypothesis Testing 48:37 
   Example 2: Heights of Women in the US 57:43 
   Example 3: Select the Best Way to Complete This Sentence 63:23 
  Confidence Intervals for the Difference of Two Independent Means 55:14
   Intro 0:00 
   Roadmap 0:14 
    Roadmap 0:15 
   One Mean vs. Two Means 1:17 
    One Mean vs. Two Means 1:18 
   Notation 2:41 
    A Sample! A Set! 2:42 
    Mean of X, Mean of Y, and Difference of Two Means 3:56 
    SE of X 4:34 
    SE of Y 6:28 
   Sampling Distribution of the Difference between Two Means (SDoD) 7:48 
    Sampling Distribution of the Difference between Two Means (SDoD) 7:49 
   Rules of the SDoD (similar to CLT!) 15:00 
    Mean for the SDoD Null Hypothesis 15:01 
    Standard Error 17:39 
   When can We Construct a CI for the Difference between Two Means? 21:28 
    Three Conditions 21:29 
   Finding CI 23:56 
    One Mean CI 23:57 
    Two Means CI 25:45 
   Finding t 29:16 
    Finding t 29:17 
   Interpreting CI 30:25 
    Interpreting CI 30:26 
   Better Estimate of s (s pool) 34:15 
    Better Estimate of s (s pool) 34:16 
   Example 1: Confidence Intervals 42:32 
   Example 2: SE of the Difference 52:36 
  Hypothesis Testing for the Difference of Two Independent Means 50:00
   Intro 0:00 
   Roadmap 0:06 
    Roadmap 0:07 
   The Goal of Hypothesis Testing 0:56 
    One Sample and Two Samples 0:57 
   Sampling Distribution of the Difference between Two Means (SDoD) 3:42 
    Sampling Distribution of the Difference between Two Means (SDoD) 3:43 
   Rules of the SDoD (Similar to CLT!) 6:46 
    Shape 6:47 
    Mean for the Null Hypothesis 7:26 
    Standard Error for Independent Samples (When Variance is Homogenous) 8:18 
    Standard Error for Independent Samples (When Variance is not Homogenous) 9:25 
   Same Conditions for HT as for CI 10:08 
    Three Conditions 10:09 
   Steps of Hypothesis Testing 11:04 
    Steps of Hypothesis Testing 11:05 
   Formulas that Go with Steps of Hypothesis Testing 13:21 
    Step 1 13:25 
    Step 2 14:18 
    Step 3 15:00 
    Step 4 16:57 
   Example 1: Hypothesis Testing for the Difference of Two Independent Means 18:47 
   Example 2: Hypothesis Testing for the Difference of Two Independent Means 33:55 
   Example 3: Hypothesis Testing for the Difference of Two Independent Means 44:22 
  Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 1:14:11
   Intro 0:00 
   Roadmap 0:09 
    Roadmap 0:10 
   The Goal of Hypothesis Testing 1:27 
    One Sample and Two Samples 1:28 
   Independent Samples vs. Paired Samples 3:16 
    Independent Samples vs. Paired Samples 3:17 
    Which is Which? 5:20 
   Independent SAMPLES vs. Independent VARIABLES 7:43 
    independent SAMPLES vs. Independent VARIABLES 7:44 
   T-tests Always… 10:48 
    T-tests Always… 10:49 
   Notation for Paired Samples 12:59 
    Notation for Paired Samples 13:00 
   Steps of Hypothesis Testing for Paired Samples 16:13 
    Steps of Hypothesis Testing for Paired Samples 16:14 
   Rules of the SDoD (Adding on Paired Samples) 18:03 
    Shape 18:04 
    Mean for the Null Hypothesis 18:31 
    Standard Error for Independent Samples (When Variance is Homogenous) 19:25 
    Standard Error for Paired Samples 20:39 
   Formulas that go with Steps of Hypothesis Testing 22:59 
    Formulas that go with Steps of Hypothesis Testing 23:00 
   Confidence Intervals for Paired Samples 30:32 
    Confidence Intervals for Paired Samples 30:33 
   Example 1: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 32:28 
   Example 2: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 44:02 
   Example 3: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 52:23 
  Type I and Type II Errors 31:27
   Intro 0:00 
   Roadmap 0:18 
    Roadmap 0:19 
   Errors and Relationship to HT and the Sample Statistic? 1:11 
    Errors and Relationship to HT and the Sample Statistic? 1:12 
   Instead of a Box…Distributions! 7:00 
    One Sample t-test: Friends on Facebook 7:01 
    Two Sample t-test: Friends on Facebook 13:46 
   Usually, Lots of Overlap between Null and Alternative Distributions 16:59 
    Overlap between Null and Alternative Distributions 17:00 
   How Distributions and 'Box' Fit Together 22:45 
    How Distributions and 'Box' Fit Together 22:46 
   Example 1: Types of Errors 25:54 
   Example 2: Types of Errors 27:30 
   Example 3: What is the Danger of the Type I Error? 29:38 
  Effect Size & Power 44:41
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Distance between Distributions: Sample t 0:49 
    Distance between Distributions: Sample t 0:50 
   Problem with Distance in Terms of Standard Error 2:56 
    Problem with Distance in Terms of Standard Error 2:57 
   Test Statistic (t) vs. Effect Size (d or g) 4:38 
    Test Statistic (t) vs. Effect Size (d or g) 4:39 
   Rules of Effect Size 6:09 
    Rules of Effect Size 6:10 
   Why Do We Need Effect Size? 8:21 
    Tells You the Practical Significance 8:22 
    HT can be Deceiving… 10:25 
    Important Note 10:42 
   What is Power? 11:20 
    What is Power? 11:21 
   Why Do We Need Power? 14:19 
    Conditional Probability and Power 14:20 
    Power is: 16:27 
   Can We Calculate Power? 19:00 
    Can We Calculate Power? 19:01 
   How Does Alpha Affect Power? 20:36 
    How Does Alpha Affect Power? 20:37 
   How Does Effect Size Affect Power? 25:38 
    How Does Effect Size Affect Power? 25:39 
   How Does Variability and Sample Size Affect Power? 27:56 
    How Does Variability and Sample Size Affect Power? 27:57 
   How Do We Increase Power? 32:47 
    Increasing Power 32:48 
   Example 1: Effect Size & Power 35:40 
   Example 2: Effect Size & Power 37:38 
   Example 3: Effect Size & Power 40:55 
XI. Analysis of Variance
  F-distributions 24:46
   Intro 0:00 
   Roadmap 0:04 
    Roadmap 0:05 
   Z- & T-statistic and Their Distribution 0:34 
    Z- & T-statistic and Their Distribution 0:35 
   F-statistic 4:55 
    The F Ratio (the Variance Ratio) 4:56
   F-distribution 12:29 
    F-distribution 12:30 
   σ and p-value 15:00
    σ and p-value 15:01
   Example 1: Why Does F-distribution Stop At 0 But Go On Until Infinity? 18:33 
   Example 2: F-distributions 19:29 
   Example 3: F-distributions and Heights 21:29 
  ANOVA with Independent Samples 1:09:25
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   The Limitations of t-tests 1:12 
    The Limitations of t-tests 1:13 
   Two Major Limitations of Many t-tests 3:26 
    Two Major Limitations of Many t-tests 3:27 
   Ronald Fisher's Solution… F-test! New Null Hypothesis 4:43 
    Ronald Fisher's Solution… F-test! New Null Hypothesis (Omnibus Test - One Test to Rule Them All!) 4:44 
   Analysis of Variance (ANOVA) Notation 7:47
    Analysis of Variance (ANOVA) Notation 7:48
   Partitioning (Analyzing) Variance 9:58 
    Total Variance 9:59 
    Within-group Variation 14:00 
    Between-group Variation 16:22 
   Time out: Review Variance & SS 17:05 
    Time out: Review Variance & SS 17:06 
   F-statistic 19:22 
    The F Ratio (the Variance Ratio) 19:23 
   S²bet = SSbet / dfbet 22:13 
    What is This? 22:14 
    How Many Means? 23:20 
    So What is the dfbet? 23:38 
    So What is SSbet? 24:15 
   S²w = SSw / dfw 26:05 
    What is This? 26:06 
    How Many Means? 27:20 
    So What is the dfw? 27:36 
    So What is SSw? 28:18 
   Chart of Independent Samples ANOVA 29:25 
    Chart of Independent Samples ANOVA 29:26 
   Example 1: Who Uploads More Photos: Unknown Ethnicity, Latino, Asian, Black, or White Facebook Users? 35:52 
    Hypotheses 35:53 
    Significance Level 39:40 
    Decision Stage 40:05 
    Calculate Samples' Statistic and p-Value 44:10 
    Reject or Fail to Reject H0 55:54 
   Example 2: ANOVA with Independent Samples 58:21 
  Repeated Measures ANOVA 1:15:13
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   The Limitations of t-tests 0:36 
    Who Uploads more Pictures and Which Photo-Type is Most Frequently Used on Facebook? 0:37 
   ANOVA (F-test) to the Rescue! 5:49 
    Omnibus Hypothesis 5:50 
    Analyze Variance 7:27 
   Independent Samples vs. Repeated Measures 9:12 
    Same Start 9:13 
    Independent Samples ANOVA 10:43 
    Repeated Measures ANOVA 12:00 
   Independent Samples ANOVA 16:00 
    Same Start: All the Variance Around Grand Mean 16:01 
    Independent Samples 16:23 
   Repeated Measures ANOVA 18:18 
    Same Start: All the Variance Around Grand Mean 18:19 
    Repeated Measures 18:33 
   Repeated Measures F-statistic 21:22 
    The F Ratio (The Variance Ratio) 21:23 
   S²bet = SSbet / dfbet 23:07 
    What is This? 23:08 
    How Many Means? 23:39 
    So What is the dfbet? 23:54 
    So What is SSbet? 24:32 
   S² resid = SS resid / df resid 25:46 
    What is This? 25:47 
    So What is SS resid? 26:44 
    So What is the df resid? 27:36 
   SS subj and df subj 28:11 
    What is This? 28:12 
    How Many Subject Means? 29:43 
    So What is df subj? 30:01 
    So What is SS subj? 30:09 
   SS total and df total 31:42 
    What is This? 31:43 
    What is the Total Number of Data Points? 32:02 
    So What is df total? 32:34 
    So What is SS total? 32:47
   Chart of Repeated Measures ANOVA 33:19 
    Chart of Repeated Measures ANOVA: F and Between-samples Variability 33:20 
    Chart of Repeated Measures ANOVA: Total Variability, Within-subject (case) Variability, Residual Variability 35:50 
   Example 1: Which is More Prevalent on Facebook: Tagged, Uploaded, Mobile, or Profile Photos? 40:25 
    Hypotheses 40:26 
    Significance Level 41:46 
    Decision Stage 42:09 
    Calculate Samples' Statistic and p-Value 46:18 
    Reject or Fail to Reject H0 57:55 
   Example 2: Repeated Measures ANOVA 58:57 
   Example 3: What's the Problem with a Bunch of Tiny t-tests? 73:59 
XII. Chi-square Test
  Chi-Square Goodness-of-Fit Test 58:23
   Intro 0:00 
   Roadmap 0:05 
    Roadmap 0:06 
   Where Does the Chi-Square Test Belong? 0:50 
    Where Does the Chi-Square Test Belong? 0:51 
   A New Twist on HT: Goodness-of-Fit 7:23 
    HT in General 7:24 
    Goodness-of-Fit HT 8:26 
   Hypotheses about Proportions 12:17 
    Null Hypothesis 12:18 
    Alternative Hypothesis 13:23 
    Example 14:38 
   Chi-Square Statistic 17:52 
    Chi-Square Statistic 17:53 
   Chi-Square Distributions 24:31 
    Chi-Square Distributions 24:32 
   Conditions for Chi-Square 28:58 
    Condition 1 28:59 
    Condition 2 30:20 
    Condition 3 30:32 
    Condition 4 31:47 
   Example 1: Chi-Square Goodness-of-Fit Test 32:23 
   Example 2: Chi-Square Goodness-of-Fit Test 44:34 
   Example 3: Which of These Statements Describe Properties of the Chi-Square Goodness-of-Fit Test? 56:06 
  Chi-Square Test of Homogeneity 51:36
   Intro 0:00 
   Roadmap 0:09 
    Roadmap 0:10 
   Goodness-of-Fit vs. Homogeneity 1:13 
    Goodness-of-Fit HT 1:14 
    Homogeneity 2:00 
    Analogy 2:38 
   Hypotheses About Proportions 5:00 
    Null Hypothesis 5:01 
    Alternative Hypothesis 6:11 
    Example 6:33 
   Chi-Square Statistic 10:12 
    Same as Goodness-of-Fit Test 10:13 
   Set Up Data 12:28 
    Setting Up Data Example 12:29 
   Expected Frequency 16:53 
    Expected Frequency 16:54 
   Chi-Square Distributions & df 19:26 
    Chi-Square Distributions & df 19:27 
   Conditions for Test of Homogeneity 20:54 
    Condition 1 20:55 
    Condition 2 21:39 
    Condition 3 22:05 
    Condition 4 22:23 
   Example 1: Chi-Square Test of Homogeneity 22:52 
   Example 2: Chi-Square Test of Homogeneity 32:10 
XIII. Overview of Statistics
  Overview of Statistics 18:11
   Intro 0:00 
   Roadmap 0:07 
    Roadmap 0:08 
   The Statistical Tests (HT) We've Covered 0:28 
    The Statistical Tests (HT) We've Covered 0:29 
   Organizing the Tests We've Covered… 1:08 
    One Sample: Continuous DV and Categorical DV 1:09 
    Two Samples: Continuous DV and Categorical DV 5:41 
    More Than Two Samples: Continuous DV and Categorical DV 8:21 
   The Following Data: OK Cupid 10:10 
    The Following Data: OK Cupid 10:11 
   Example 1: Weird-MySpace-Angle Profile Photo 10:38 
   Example 2: Geniuses 12:30 
   Example 3: Promiscuous iPhone Users 13:37 
   Example 4: Women, Aging, and Messaging 16:07 

Hi, welcome to the first lesson in the www.educator.com statistics course.

Today we are going to talk about descriptive statistics versus inferential statistics.

Here is the roadmap for today. First we need to distinguish how statistics is different from other mathematics.

We will talk about how descriptive and inferential statistics differ.

Finally we are going to talk about populations versus samples, and then we are going to put all of those ideas together and look at how populations, samples, descriptive, and inferential statistics all fit together.

First things first: how is statistics different from other specializations in mathematics, such as trigonometry, geometry, calculus, and linear algebra?

Statistics is different because it is the science of classifying, organizing, and interpreting or analyzing data.

You might be thinking to yourself, "Hey, science? I thought this was mathematics." Right?

Statistics is deeply linked to science, and because of that link it holds a special place in mathematics.

Let me explain that link to you in just one second.

First I want to step back and think about high school science for a moment.

A lot of high school science is concerned with measurement: we go around measuring how fast people run, how fast things drop, how much things grow, and how much things weigh.

We measure how big things are, and in doing so we gather a lot of measurement data.

Then we find patterns within those measurements, and that is basically the fundamental move behind high school science.

Those patterns can often be described as mathematical formulas.

Some of you may have had the experience of trying to derive the gravitational constant.

To some of you this equation might look familiar: D = ½gt².

D stands for distance, g stands for the gravitational constant, and t stands for time.

Some of you may have had the experience of dropping things off a building, timing them, and putting those numbers in to try to figure out what g is.

Theoretically, g is supposed to be 9.8 m/s².

But rarely do you calculate exactly 9.8 when you put distance and time into this equation.
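To see where that scatter comes from, here is a minimal Python sketch (the drop height and the size of the timing error are made-up numbers for illustration). Solving D = ½gt² for g gives g = 2D/t², so a little stopwatch sloppiness in t yields a spread of g estimates rather than a clean 9.8.

```python
import random

g_true = 9.8    # m/s^2, the value we hope to recover
height = 45.0   # m, hypothetical drop height (assumed for illustration)

# Each trial: the true fall time, plus a bit of stopwatch error.
estimates = []
for _ in range(10):
    t_true = (2 * height / g_true) ** 0.5
    t_measured = t_true + random.gauss(0, 0.05)   # ~50 ms of timing sloppiness
    estimates.append(2 * height / t_measured**2)  # solve D = (1/2)g t^2 for g

print(estimates)  # a scatter of values around 9.8, never exactly 9.8
```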

Often, science students think, "I am terrible at science, I am not getting the right answer," but it is because all of these measurements are inherently a little bit sloppy.

Granted, high school students might be sloppier scientists than other scientists, but in actuality all science experiments have measurement error, and there is variance that comes with measurement.

There is always a little bit of jiggle in the data, and often we do not pinpoint the exact right value. Even when you look at something like measuring someone's height, you might have 10 people measure the same person's height and come up with slightly different answers.

It is not because they are trying to cheat, but that person might take a deep breath or slouch a little bit, or maybe they read the tape measure at the hairline instead of at the actual height.

There are always different reasons for measurement error.

All science is fraught with measurement error.

Why? Because all experiments, even the good ones at CERN, MIT, and Caltech, will have a little bit of sloppiness.

That is because we are dealing with measuring the physical world.

It is not that we are looking at terrible scientists, or that they are just really messy; it is just that inherently, in measuring the world, we are going to have a little bit of sloppiness.

Now, because of that sloppiness, even the best experiment will produce a scatter of numbers.

The best experiments, as well as the worst experiments, will produce a scatter of values or measurements.

That is where the problem is, right?

You will not get just one number, like a nice 9.8 gravitational constant; you will instead get this scatter of numbers.

How do we deal with that scatter? That is where statistics comes in.

Statistics is the math of distributions, and there you can see how the math part and the science part fit together.

Statistics was invented because we want to do better at science.

We even have a special name for the scatter of measurements: it is called a distribution.

Not only that, but we are going to look and see how we can go from frequencies of these values to probabilities of these values.

Those are going to be called probability distributions.
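As a small sketch of that move from frequencies to probabilities (the measurement values below are made up), dividing each frequency by the total count turns a frequency distribution into a relative-frequency, or probability, distribution:

```python
from collections import Counter

measurements = [9.6, 9.8, 9.8, 10.1, 9.8, 9.6, 10.1, 9.8]

freq = Counter(measurements)  # frequency of each observed value
n = len(measurements)
prob = {value: count / n for value, count in freq.items()}  # relative frequencies

print(freq)   # Counter({9.8: 4, 9.6: 2, 10.1: 2})
print(prob)   # {9.6: 0.25, 9.8: 0.5, 10.1: 0.25}
```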

One thing that should come to your mind is that when you have a scatter of values, or a whole bunch of different probabilities predicting different values, you are not going to have just one number; you are going to have a whole set of numbers.

Because of that, we are going to have to deal with the mathematics a little bit differently.

We are not just computing one number at a time, looking at one number and adding things to it, subtracting things from it, doing things to it.

Instead we are looking at entire distributions.

How do we treat these distributions?

How do we interpret them?

That is the question behind statistics.

You might think that working with whole distributions sounds problematic.

Sometimes it might seem like it.

It might seem like these equations are pretty complicated, because we have to deal with the whole distribution.

But you also get some great stuff out of working with distributions.

One reason is that distributions are often much more predictable than individual values.

Models of distributions, or theories of distributions, can often predict the mathematical nature of randomness.

Is that not great?

They are predicting randomness.

That is what statistics is a little bit about: dealing with that randomness and taming it.
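A quick simulation makes that predictability concrete (die rolls stand in here for any random measurement): any single roll is anyone's guess, but the mean of a large batch of rolls lands near 3.5 every time.

```python
import random

# A single die roll is unpredictable...
print([random.randint(1, 6) for _ in range(5)])

# ...but the mean of many rolls is very predictable: close to 3.5 every run.
for _ in range(3):
    rolls = [random.randint(1, 6) for _ in range(10_000)]
    print(sum(rolls) / len(rolls))
```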

How is statistics different from other specializations in mathematics?

It was born out of the science of classifying, organizing, and interpreting data; distributions of data, to be more precise.

Because of that, statistics is the mathematics of distributions.

Statistics is fundamental in all science, both the natural and the social sciences.

I am a social science professor, a psychology professor by trade, but even in the natural sciences, all of the discoveries that you have heard of came about only through rigorous application of statistics. In physics, biology, economics, psychology, you name it, statistics has left its mark there.

There are two skills that you need as you enter into statistics.

The first is the skill of data description, or what you can think of as exploration.

Often you can think of it as just an open-ended examination of the data: let us look and see what is there.

We are looking for patterns, and it is often helpful to make a graph, or to look at averages and standard deviations, which are called summary values, when you are looking for patterns.

These are tools that help us see patterns better.
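For instance, here is a minimal sketch using Python's standard statistics module (the height values are made-up numbers): a few summary values condense an entire distribution into quantities you can compare at a glance.

```python
import statistics

# Hypothetical sample of measured heights in cm (invented for illustration)
heights = [162, 170, 168, 175, 159, 181, 167, 173]

print(statistics.mean(heights))    # center: the average
print(statistics.median(heights))  # another summary of center
print(statistics.stdev(heights))   # spread: sample standard deviation
```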

The problem with just exploring or describing data is that you are not able to come to any conclusions.

You have to restrain yourself from making conclusions when you are just doing descriptive statistics; that is where inferential statistics comes in.

When you make inferences in statistics, you are doing a much more strict examination of the data, according to set rules.

Then you judge whether the patterns that you found through description are likely or not, according to theories and different models that you may have set up.

At the end of inferential statistics, you should be able to make measured conclusions.

Often in science we do not say that statistics has proven this theory or completely disproven that theory.

Instead we make much more measured and qualified conclusions.

Those skills of description and inference apply directly to descriptive statistics and inferential statistics.

The thing that is different now is that you want to think about those skills and how they apply to distributions.

Here is how descriptive statistics applies to distributions.

Descriptive statistics are the concepts and tools that you need in order to analyze sample distributions; they are used to describe or explore sample distributions.

We have taken the same concept of what describing data means and applied it to sample distributions: distributions of a set of data that we have plucked out.

In inferential statistics, what we need to do is apply inference to distributions.

Here, it is the concepts and tools to reason from a sample distribution; to make some inference, to reason from a sample distribution to a larger population distribution.

In inferential statistics, we are using those skills of inference to go from sample distributions, not only to understand the sample, but to make some inferences about a greater, larger population; to go beyond our actual data.

In descriptive statistics we just stay with our sample.

We do not make any inferences beyond what we have.
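Here is a minimal sketch of that contrast (the g estimates are made-up numbers, and the z ≈ 2 multiplier is a rough stand-in for the interval methods covered later in the course): the sample mean merely describes the data we have, while the interval reaches beyond it toward the population value.

```python
import math
import statistics

sample = [9.7, 9.9, 9.8, 10.0, 9.6, 9.8, 9.9, 9.7]  # made-up g estimates

x_bar = statistics.mean(sample)  # descriptive: summarizes this sample only
se = statistics.stdev(sample) / math.sqrt(len(sample))

# Inferential: a rough 95% interval for the population mean (z ~ 2 for brevity)
print(x_bar)
print((x_bar - 2 * se, x_bar + 2 * se))
```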

It behooves us to figure out what is the difference between the population and the sample distribution?0743

Here it might be helpful to just think of the population as sort of like the truth.0751

This is what we are interested in.0756

Is it the truth? This is the truth.0759

This is the thing that we want to get at.0765

If you think about the gravitational constant, this is that magical value that is out there in the world.0767

The sample is not the truth, it is like a little bit of that truth.0775

When we drop our objects from the top of the building and measure how fast they come down, we are getting samples.0781

From those samples we are trying to get at the truth.0791

The sample is not the whole truth but the sample does provide a window to the truth.0794

It is important to realize that the sample is not the actual truth itself.0803

This is not what we want to know about.0808

We want to know about the population but we are using the sample in order to know about the population.0812

Some pros and cons.0819

One pro of the population is this: because it is the truth, if you happen to have all the information0822

about the real population, it will be absolutely 100% accurate.0828

However here is the con, it is almost impossible to get.0836

It is almost impossible to get the truth, the real population truth.0847

For instance let us say you just want to know what the real average height of every person in the United States is.0853

In order to do that you would have to get measurements from every single person in the United States.0861

All of those measurements would have to be 100% accurate.0868

Let us say I grant you that; you even manage to do that.0872

By the time you finish recording all of those measurements, some people will have died and new people will have been born.0874

All of a sudden your measurements would not be accurate anymore.0881

It is almost impossible to get the entire population.0885

Often in statistics, they will pick a small population; they will say, consider all the people who attend your school,0890

to shrink down the population so that you can think about it without feeling like your mind is being blown.0897

In the real world it is basically impossible to get the real truth.0905

On the other hand, the sample has the pro of being convenient.0910

It is easy to get data from just a sample of the population. 0917

You do not have to get the whole population, you just have to get a sample of it and it is convenient and easy to get. 0923

Here is the big con that you need to worry about.0929

The con is that the sample might be what is called biased.0933

By biased I do not necessarily mean that the sample is racist or prejudiced in some way;0938

I just mean that the sample may not be representative of the population.0944

The problem with that is when we look at our sample we are going to use our sample to try to get at the truth.0960

If our sample is different from the truth then it might lead us astray and that is called being biased.0965

When we describe the population in terms of numbers and we get some summary values for the population, 0975

those descriptive values are going to be called parameters.0982

A friend of mine who teaches statistics remembers it by the pairing: the population goes with the parameter.0988

On the other hand, for samples you would use what is called statistics.0996

This word, statistics, is the same word as the name of the whole field.1006

But statistics covers all of statistics, descriptive, inferential, population, sample, all that stuff.1010

This is the sort of smaller use of that word.1018

Population goes with parameter, and sample goes with statistic.1024

Now let us put all those ideas together.1033

How do we put together descriptive and inferential statistics with populations and samples?1036

It helps to ground ourselves by starting off with the idea that what we are interested in knowing about is the entire population.1042

We want to know about the real population.1052

Let us deal with one population at a time for now.1056

Often we do not have the population's entire data in front of us, we only have a sample of that data. 1060

Our wish is to go from the sample to the population, but remember the sample can be biased; that is problematic.1069

Here is where statistics comes in.1080

From samples we compute statistics and from populations we could know the parameters.1083

But we often do not have this link either because we do not know anything about the actual population.1097

Here is where we are, what inferential statistics will help us do is make this link.1106

How do we go from statistics of the sample to population parameters?1114

This jump, this inferential jump is going to be made through inferential statistics.1119

However in order to go from the sample to statistics we will use descriptive statistics.1134

This is how it all fits together.1147

Let us try some examples. 1150

Here is example 1, a pollster asks a group of voters how they intend to vote in the upcoming election for governor.1153

In this example, is the individual pollster primarily using descriptive statistics or inferential statistics?1161

Does he or she compute parameters or samples?1171

Here the pollster is just asking a group of voters how they intend to vote.1175

A poll is often just a sample of the entire set of voters so I would say the pollster is probably going to compute some sample statistics.1180

We should say statistics not samples.1194

I would say the pollster is probably calculating statistics.1202

If the pollster just got an answer such as: in this sample of voters, 75% are going to vote for the governor1208

and only 25% are not, that would be counted as descriptive statistics.1219

Once this pollster actually uses that information to make some inference and predict, saying, I predict the governor will win,1225

that would be inferential statistics.1236

But so far, it does not say that.1238

It seems that only descriptive statistics is being used here.1242

Example 2: a teacher organizes his class's test grades into a distribution from best to worst and compares it to the test grades of the entire school.1248

In this example, is the individual primarily using descriptive statistics or inferential statistics?1259

First, he is definitely using descriptive statistics in order to organize his class's data.1265

He is using that, but then he is comparing it to the test grades of the entire school.1273

He is getting his sample, his class and looking at how they are relative to the entire school.1279

That leap is going to be inferential statistics.1290

I would say he is using both descriptive and inferential.1294

Example 3: a statistician is interested in the choices of majors of this year’s entering freshmen at a university; 10% of them are randomly sampled.1302

What is the population? What is the sample? What is the parameter? What is the statistic?1311

The population seems to be all freshmen at the university, right? But the sample is this 10%.1317

That is the population and the sample so what is the parameter?1337

The parameter is the real distribution of major choices of all the students.1342

Maybe it looks like, you know, 50% are engineering, 20% are science, and 30% are humanities.1355

Majors picked by freshmen.1374

What is the actual statistic?1383

The statistic is going to be made up of the majors picked by the sample.1386

In order to go from the sample statistic to the population parameter, you will need to use inferential statistics.1401

Example 4, a group of pediatricians are trying to estimate the rate of increase in obesity in young children in their city.1410

They begin a research project where every four years a group of 8-year-old children is randomly sampled from the city and weighed.1418

What is the population? What is the sample? What is the parameter? What is the statistic?1425

The population looks like young children in the city, whichever city this happens to be.1431

The sample is the group of 8-year-old children selected to be in this study.1446

What is the parameter? 1469

The parameter would really be the actual rate of increase in obesity, and they do not know what that is; they cannot get that data.1474

By looking at the different groups of 8-year-old children every four years they could look at the rate between the samples.1490

The statistic would be the rate among the sample, the samples every four years.1503

In that way they will try to use the sample rate in order to estimate the population rate.1521

That is the end of lesson one for www.educator.com.1527

Thanks so much for watching.1530

Hi and welcome to www.educator.com.0000

Today we are going to be introduced to confidence intervals.0002

Here is the roadmap for today, first we are going to do a brief overview of inferential statistics.0005

We have been trying to do some inferential statistics but there have been a couple of problems we keep running into.0013

So far I have fudged it.0022

We will address some of those problems head on and come up with 2 solutions.0024

One of those solutions is the confidence interval, and we are going to talk about confidence intervals0031

when sigma, the population standard deviation, is known and when sigma is unknown.0039

Those are the two situations we are going to be focused on.0046

Let us go over inferential statistics.0049

We know the big picture idea there is some population represented by X and we wish we could know the population but we do not.0055

But instead what we can know is little samples.0065

We could know that, but the problem is samples can be biased.0071

Whenever we have samples and we summarize them using these mathematical summaries we call them statistics.0074

Just to give you an example of some statistics, there are things like x bar or s; those are all statistics.0084

What we would like to do is use these samples to understand something about the population.0093

Statistics, the field, is about using these statistics to estimate parameters, and0100

to give you examples of parameters, there are things like mu or sigma.0108

That is our whole goal.0112

Here we realize in order to jump from things like x bar and s to mu and sigma we are going to need more than just wishful thinking.0114

And that is where the sampling distributions come in.0132

Here we talk about sampling distribution often we are talking about some sort of statistic.0135

When we talk about sampling distribution of the mean we are talking about a whole bunch of x bars.0142

Here we have a whole bunch of x.0148

Here we have a whole bunch of x bar and that is the distribution.0150

When we summarize these statistics in the sampling distribution we call them expected values. 0155

So it is not just mu, it is mu sub x bar. 0164

It is not just sigma it is sigma sub x bar. 0168

What we want to do is go from this to understand this but what we have learned 0172

so far is how to see the relationship between parameters and expected values.0177

We know that these things have a relationship to each other.0186

And from doing that we could then make this jump.0189

It is like we use this to say something like this.0195

There are two problems with this picture; although it seems rosy, there are still two nagging questions.0199

We looked at them a little bit before, but we need to solve them more rigorously than we have before.0210

One question is this, what happens when we do not know what the population looks like?0217

Of course we could use the central limit theorem when we know mu and sigma from the population.0222

What if we do not know mu?0229

What if we do not know sigma?0231

Then what happens?0233

Also, how do we know whether a sample is sufficiently unlikely? Remember, the whole point0234

of the sampling distribution is for us to build a sampling distribution from a known population and compare a sample to it.0240

If the sample matches the sampling distribution so poorly that it is very unlikely to have come from it,0254

we could say this is probably not the population that the sample came from.0261

How do we know when it is sufficiently weird?0266

To answer these two questions there are going to be two solutions.0269

You can think of it roughly as one solution per question;0275

both questions are actually addressed by each solution, but the first question goes along better with the first solution,0281

and the second question goes along better with the second solution.0287

The two solutions are these: one is the confidence interval.0291

When we talk about the confidence interval, here is what we are doing: we are going to figure out where mu might be from the sample.0302

We are going to try to figure out the population mu from the sample and 0306

that is what we do when we do not know what the population looks like.0336

We try to figure it out from the sample.0342

Hypothesis testing actually takes another view.0344

In hypothesis testing, we come up with a hypothesis for what the population is like.0349

We hypothesize a population mu first.0355

In this case we are saying: we are going to pick a potential population mu.0363

And then we are going to test how weird the sample is.0376

We are going to come up with a number to tell us this is how weird the sample is.0387

We are going to decide is that weirdness weird enough?0393

That is going to be hypothesis testing.0398

But we are going to focus here on confidence intervals.0401

Okay, when we talk about confidence intervals we need to take an inventory of what we know so far.0404

Basically that is asking the question, which parameters are known or given to us?0413

What happens when we do not know what the population looks like?0418

Well, we may not know what the population looks like because we do not know anything about the population,0422

or we know only a little bit about the population.0428

This is the case where we know a little.0431

Here we do not know mu but we do know sigma.0434

For some reason we have some partial information and that helps us out.0444

Here we know nothing.0450

Here nothing is helping us: we do not know mu, which we are trying to figure out, and we do not know sigma either.0454

It is like nothing is helping us out here.0464

We just have to pull ourselves up from our own bootstraps.0466

These are the two situations that we are going to talk about.0471

Here is the goal of the confidence interval.0475

The basic idea of the confidence interval is going to be this.0480

We are going to try to figure out where mu might be but we do know x bar.0484

We know everything about the sample but we do not know anything about the population.0497

But in this case I am going to show you what happens when we already know sigma.0503

So we have a leg up. 0508

We know sigma, so life is a little easier for us today.0509

Here is the thing: we do not know what the population looks like, so we cannot draw it as normal or skewed or anything.0513

We have no idea what the population looks like and we have no idea what the population mu is.0524

But for some reason we know what sigma is.0531

Sigma is given to us.0533

From there can we construct an SDOM?0534

Given that n is sufficiently large we can assume that it is normal. 0540

We have no idea what mu is and so we do not know what mu sub x bar is. 0548

We do not know it at all but we can figure out sigma sub x bar.0553

We could figure out the standard error because we have sigma and we could divide that by √n.0559

We have a little bit of information about the SDOM.0566

Here is what we do in confidence intervals.0570

First, assume that x bar is mu sub x bar.0574

Whatever your sample x bar is we are going to put back here.0586

We are going to assume it.0591

Here is why: we always assume one thing to figure out the other.0595

Here we are going to assume something about x bar to figure out mu.0601

In hypothesis testing, we assume something about the population to figure out how weird x bar is.0605

Here, because we know that the SDOM tends to be normal given a sufficiently large n,0612

we can find out with reasonable confidence where some significant borders are.0621

For instance, let us say we are one standard deviation away.0634

This is the raw score and this is the z score, so we know that at one standard deviation away on each side,0642

this space right here, that is 68% of the SDOM.0650

Let us think about what this might mean.0660

When we get these borders what we might end up saying is that these are the borders in which 68% of our values will fall in the SDOM.0663

And here is what we could also say: there is a 68% chance that our0679

population mu will fall in that zone.0686

That is a 68% confidence interval.0691

Now, 68% is higher than half, but it is not that high.0697

But here is the thing: we can have a higher confidence interval.0702

We can have a 95% confidence interval or we can have a 99% confidence interval.0707

That is what we can do. 0713

Here is my x bar, here is 0, and what we can do is figure out0716

these borders such that we are now sure there is a 95% chance of having our0730

population mean fall in this interval.0744

We can know that.0748

That is called the confidence interval.0750

That is pretty high, and you can even go to 99%.0753

And we could easily figure out these borders. 0756

Here is how.0759

Because we can easily figure out what the z scores are, we can figure out these borders.0761

This is what we call a two-tailed confidence interval, because even though the middle part is 95%, that does not mean each tail is 5%.0772

If each tail were 5%, you would have 105%; so that part is .025, 2.5%, and this part is .025.0785

And those are the only parts where we are not sure.0796

There is a small chance that the population mean will fall somewhere out here but it is a very small chance. 0798

We are trying to reduce it as much is possible.0809

Let us think about how we could find the z score out here.0812

We could use our tables in the back of the book, our z tables and we can look up and usually z tables will give you like one side.0817

We can look up .025 and look at the z score or we could do it on our Excel.0830

Instead of using NORMSDIST, which gives you the proportion of the distribution,0837

we are going to put in NORMSINV, the inverse, and here we want to put in the probability.0848

Now this is going to be my probability.0855

I am going to put in this probability, .025, and we get -1.96.0870

This value here is -1.96 and because the normal distribution is symmetric we know that this part is also 1.96 0884

but now positive instead of negative.0892
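(As a quick check outside of Excel, here is the same lookup sketched in Python with scipy.stats; scipy is my assumption, since the lesson itself works in Excel's NORMSINV.)

from scipy.stats import norm

z_low = norm.ppf(0.025)   # about -1.96, like NORMSINV(0.025)
z_high = norm.ppf(0.975)  # about +1.96, by symmetry
print(z_low, z_high)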

We know our z values on the end and if we know the z values what is our raw score here?0896

Tell me what this value is and also tell me what that value is.0908

Well the z score tells you how many standard errors away you are.0915

How many jumps away and each jump is worth that much.0921

We are away 1.96 of these jumps.0926

We are going to multiply this by this and then0931

either subtract it from x bar or add it to x bar.0934

Step two in finding the confidence interval: let us say you want to find a 95% confidence interval; find the z scores.0938

This is all in the case where you know sigma.0953

Step 3 is this: now you want to find the actual scores, and that is going to be x bar + or - the z score × the standard error.0957

That is what you are going to do. 0984

And we know what the standard error is.0986

I am going to rewrite this to be x bar + or - z score × sigma / √n.0989

When we do that we can find these confidence intervals.1003
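(To make the recipe concrete, here is a minimal sketch of that formula in Python; the function name and setup are mine, not the lesson's.)

import math
from scipy.stats import norm

def ci_known_sigma(x_bar, sigma, n, capture_rate=0.95):
    # Split the leftover probability across the two tails.
    tail = (1 - capture_rate) / 2
    z = norm.ppf(1 - tail)                 # 1.96 for a 95% interval
    half_width = z * sigma / math.sqrt(n)  # z jumps of one standard error
    return x_bar - half_width, x_bar + half_width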

Once you have these confidence intervals, then you know with 95% confidence that1009

your population mean will fall in the interval between these two numbers.1019

Now the 95% is actually called the capture rate; it could be 95% or 99%, whatever.1028

What would the confidence interval be for 100%?1042

It would go from –infinity to infinity because that is how far the normal distribution goes.1047

But the capture rate is this: the proportion of random samples for which this interval captures mu.1053

Let us imagine taking a whole bunch of random samples; it is going to be that 95% of the1080

time the intervals from those random samples entail mu.1091

They somehow overlap with mu.1097

That is what we mean by 95% capture rate.1099

That is when you know sigma, but now suppose we do not know sigma.1103

We are in trouble: we do not know mu,1113

and we do not know sigma either.1115

Still our goal remains the same, we try to figure out mu from x bar.1116

But now we are a little hobbled.1128

We do not have a tool that we used to have.1132

The beginning part of the story stays the same. 1135

We have no idea about the population, and from there we want to find the SDOM, because1139

we are going to figure out how good our sample is.1146

We know the shape of our SDOM as long as our n is sufficiently big.1151

Can we figure out sigma sub x bar anymore?1157

No we cannot, because we do not have sigma, so how could we figure out sigma sub x bar?1161

We cannot figure out that standard error.1170

Here is where another idea comes in.1171

There is another way we can estimate the standard error of the sampling distribution that is going to be s sub x bar.1175

Because we are going to use the sample standard deviation s instead of sigma.1186

Remember, s is more variable, not quite right, and because of that we have already corrected it a little bit by using n - 1 instead of n.1200

Here we are going to divide that by √n.1214

If you double-click on this you will see the square root of the sum of squares divided by n - 1.1218

You would see this inside of that.1231

We already tried to correct it a little bit, but s is still variable.1234

It is not quite as good as having sigma.1242

And there can be other problems that we run into.1245

This is a pretty good estimate, but you always have1249

to keep in mind that our standard error is not as good as it used to be.1254

We have to account for that.1262

But the steps remain the same. 1265

First assume x bar for mu sub x bar. 1267

Two, find z for your capture rate.1275

If your capture rate for example 95% then you would find the z scores.1287

It is helpful to memorize that for this capture rate the z scores are going to be + or -1.96. 1297

It is going to come up a lot.1305

Find the z scores for your capture rate.1306

Here we run into a problem. 1310

I wish we could use z scores, but here is the issue: we actually cannot, because s is too variable for us to assume perfect normality.1314

And because of that we cannot use the z and instead we have to use the t which is very similar to z.1330

Find the t score for your capture rate.1348

Instead of having raw score and z score we are going to find t score.1352

For now you just need to know that you can find your t score in the back of the book, but in1366

the next lesson we are going to go over why you use t and why you cannot use z.1372

That is a big story.1377

You are going to find t.1380

Once you find the t for your capture rate, and that will also be + or -, the t is going to be very similar to the z score.1383

We are going to use this formula.1390

You are going to use a very similar idea to the z score confidence interval, where you want to know x bar + or - something.1396

A t score is also going to tell you how many standard errors away you are.1407

T × standard error. 1411

But remember, you use t when you estimate this from the sample.1417

If we unpack this, this is what it can look like x bar + or - t × this is that estimated standard error s/√n.1426

It is still the same idea.1443

It is still how many jumps away: figuring that out, then multiplying that by the length of the jump,1446

and adding that to x bar for the high value and then subtracting that from x bar for the low value.1451
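(Here is the same idea sketched in Python with s in place of sigma; scipy's t distribution stands in for the table lookup, and the names are mine, not the lesson's.)

import math
from scipy.stats import t

def ci_unknown_sigma(x_bar, s, n, capture_rate=0.95):
    tail = (1 - capture_rate) / 2
    df = n - 1                     # degrees of freedom
    t_score = t.ppf(1 - tail, df)  # a bit wider than 1.96 for small n
    se = s / math.sqrt(n)          # estimated standard error
    return x_bar - t_score * se, x_bar + t_score * se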

In order to find t here is what you need to know for now.1458

You need to know whether it is a 1 or 2 tailed distribution.1465

If your confidence interval is two-tailed, then remember these are .0251470

because you split the remaining 5% across both sides.1478

Some t tables, though, will only give you one side.1482

They might give you a one-sided 5% or a one-sided 2.5%.1487

You have to just keep in mind whether it is one tailed or two tailed and also the t distributions are a whole bunch of different distributions.1493

They are a whole bunch of different tables basically.1502

You also have to know the degrees of freedom.1508

For now you could remember degrees of freedom as n -1.1514

There are reasons for all of these things why we use t, why we use degrees of freedom all that stuff.1521

That will be covered in the next lesson. 1528

For now, here is what you need to know.1529

You need to know whether it is one tailed or two tailed.1532

You also need to know degrees of freedom. 1534

Once you have that you could actually look it up in t table usually found in the back of your book. 1536

It might also be called the Student's t distribution, because Gosset invented it while he was contracted to work for Guinness.1542

That is why he could not publish it under his actual name.1553

He published it under the pseudonym Student, and that is why it is called the Student's t.1556

You can look up your degrees of freedom and then look for the area that you need and then go down and find the t score.1560

Very similar to z score.1573

Let us go on to some examples.1574

Example 1, consider two extreme situations n=10 and n=1,000.1582

If you use s in the formula for CI given sigma, here is the actual formula for when you have sigma.1591

We use 1.96 because we use the z score.1609

Which of these situations would you expect to give a capture rate closer to 95%?1614

Here is what this question is really asking.1621

When you know sigma, the 95% confidence interval is x bar + or - 1.96, that is my z, × sigma / √n.1624

What it is asking you is what if you substituted in s?1649

Here we do not know sigma but we are going to just take this formula and use the z value s/√n.1656

In order to answer this question you really only need to keep in mind one thing, when is s more like sigma.1676

S is more like sigma when n is very large.1687

The n = 1,000 situation would give you a capture rate very close to 95%.1708

This would be very, very similar. 1721

However, when n is 10 you have more uncertainty, and because of that the t distribution is not as tight.1724

It is actually more spread out, and because of that, when n = 10 you do not capture 95% just by going about 2 standard errors out each way.1733

That would not capture 95% of those samples.1748

In fact you have to go out further to capture 95%.1753

The n = 1,000 case is going to be much closer to a 95% capture rate;1758

the n = 10 case is going to give you a smaller capture rate.1763

That is because your s is going to be more variable, and because of that your t distribution1766

is going to be more dispersed; more variable means sort of wider.1778
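(If you want to see this for yourself, here is a rough simulation sketch, my own construction rather than anything from the lesson: it deliberately uses the z value 1.96 with s and counts how often the interval captures mu.)

import math, random

def capture_rate(n, trials=10000, mu=0.0, sigma=1.0):
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        x_bar = sum(sample) / n
        # Sample standard deviation with the n - 1 correction.
        s = math.sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))
        half = 1.96 * s / math.sqrt(n)  # z formula, but with s
        if x_bar - half <= mu <= x_bar + half:
            hits += 1
    return hits / trials

print(capture_rate(10))    # noticeably below 0.95
print(capture_rate(1000))  # very close to 0.95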

Example 2: a 95% CI for a population mean is calculated for a random sample of weights, and the resulting CI is from 42 to 48 pounds.1785

For each statement indicate whether it is a true or false interpretation of the CI.1798

This question is asking: do you understand what the confidence interval means?1807

Do you understand what it is for?1811

Let us see, 95% of the weights in the population are between 42 and 48.1813

Does competence interval tell us about the actual population numbers?1821

No, it only tells us about the population mean.1830

This is actually not true.1833

We do not know anything about the actual numbers of the population. 1836

We do not know whether it is skewed, whether it is uniform distribution.1840

We do not know any of those things. 1847

The 95% thing would only be reasonable if the population was normal and its mu was exactly equal to x bar.1848

That would be the case.1862

That is not true.1864

What about number 2?1866

95% of weights in the sample are between 42 and 48, does the CI tell us anything about this sample?1868

No, we are using the sample to estimate the population mean.1878

We are using the SDOM.1882

We do not know anything about the sample itself.1884

That is also not true. 1888

What about number 3?1890

The probability that the interval includes the population mean is 95%. 1893

This is actually true. 1899

There is only a 5% chance that this interval does not contain the population mean.1902

What about number 4?1916

The sample mean might not be in the competence interval.1919

That does not make sense if you look at the picture, because we used the sample mean in order to construct the confidence interval.1924

Of course the sample mean is in the confidence interval; this statement is just ridiculous.1932

Example 3: a random sample of 22 men had a mean body temperature of 98.1°, with a standard deviation of .73.1936

Construct a 95% confidence interval for the mean of the population that the sample was drawn from.1950

Interpret the CI: is 98.6° included in it?1956

That is the supposed average human body temperature.1963

We have body temperatures in the world and we do not know what that population looks like.1965

We are asking: can we construct a 95% confidence interval such that, whatever1975

the population mean is, there is a 95% chance that we have covered it?1989

We start by assuming that the mean of the sample x bar is the mean of the sampling distribution of the mean.1994

We have done step one.2004

Step two is we have to construct the CI, and here they give us s, but do we have sigma?2008

No.2023

We know that we cannot use the z score.2025

We have to use the t score. 2029

Let us find the t for this.2031

There is a .025 chance out on this side and a .025 chance out on that side.2033

What are the t scores?2043

This is the raw score, or the temperature.2046

What is the t score for .025 when the degrees of freedom, that is n - 1, with 22 men, is 22 - 1 = 21 degrees of freedom?2049

If you look in your book at your Student's t distribution, I am going to go down to where df = 21.2065

I am going to go across to where it says you know .025.2074

My table actually gives me this area so I am going to look at .025 on the side.2080

And it says 2.08 is my t score.2086

That makes sense.2093

That is around 1.96.2095

You will see that as degrees of freedom get greater and greater this value becomes more and more close to 1.96.2098

On this side we know that it is symmetrical so I know it is -2.08.2108

From here I can construct my CI.2114

The CI is going to be the x bar + or – the t value × my standard error.2118

My estimated standard error here is s sub x bar because we do not have sigma.2129

That is going to be s ÷ √n.2137

Let us put in numbers here, so that is 98.1 that is our sample mean ± t value 2.08 × s .73 ÷ √22.2141

I am just going to calculate this on a calculator so that is going to be 98.1 and I will do the + side first. +2.08.2167

Excel does order of operation.2182

It needs to do the multiplication before the addition: 2.08 × .73 ÷ √22.2185

The high end of my confidence interval is 98.4, and the low end is going to be 97.8.2195

98.4 and 97.8 are my CI.2217
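(As a quick arithmetic check, here is the computation sketched in Python; scipy's t.ppf stands in for the table lookup.)

import math
from scipy.stats import t

x_bar, s, n = 98.1, 0.73, 22
t_score = t.ppf(0.975, n - 1)      # about 2.08 at df = 21
half = t_score * s / math.sqrt(n)
print(x_bar - half, x_bar + half)  # roughly 97.8 and 98.4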

When we interpret the confidence interval we want to say something like:2229

there is a 95% chance that the mean of the population lies between these two values.2239

Or another way we could say it: if we draw samples at random and construct intervals like this,2250

95% of those intervals will include the population mean.2264

Let us think about this confidence interval: is it reasonable?2271

Is 98.6°, which is supposed to be the mean for everybody, included?2280

We see that it is not actually.2286

Maybe this sample is odd, because our confidence interval does not actually include the mean2288

that we think we know for the body temperature of people.2297

That is when confidence intervals are helpful.2307

Here is example 4, in a random sample of 1000 community college students, their mean score on a quantitative literacy test was 310.2310

The standard deviation on this test of all the community college students who have taken it is 360.2324

Construct a 95% confidence interval for the mean of all community college students who have ever taken this test.2331

Here is our random sample, and their mean, or x bar, is 310, but the standard deviation2338

of all the students who have taken this test, that is sigma, is 360.2351

Construct a 95% confidence interval.2358

Well, for the first part: we do not know the population, but we are given the population standard deviation.2361

And from that, let us construct the SDOM.2374

Well given that this n is quite large let us assume normality.2377

Here we could find out the standard error by putting 360 ÷ √ 1000.2382

Now, going to the steps of our confidence interval: first we assume that x bar is the mean of our sampling distribution of the mean.2395

Here we could use the z instead of t because we have sigma and because of that we know that this is normal. 2412

That is going to be +1.96 and -1.96 in order to construct a 95% confidence interval.2425

Our CI is going to look something like this x bar + or – z × standard error.2436

If you sort of double click on standard error what you will find is sigma / √n.2446

Let us put in numbers here.2464

310 is our x bar.2467

Our z score is 1.96.2471

Our sigma is 360.2475

Our n is 1,000.2479

Let us put these in our calculators.2483

I will do the high end first 310 + 1.96 × 360 ÷√1,000.2487

Order of operations says it does not matter what order you multiply and divide in.2508

That is my high end: 332.3 as the high scoring end.2516

The low scoring end, the lower bound of my 95% CI is 287.7.2524

That is going to be 287.7 as well as 332.3.2537
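(And a quick check of this one too, again a sketch in Python rather than the lesson's calculator work.)

import math
from scipy.stats import norm

x_bar, sigma, n = 310, 360, 1000
z = norm.ppf(0.975)                # about 1.96
half = z * sigma / math.sqrt(n)    # about 22.3
print(x_bar - half, x_bar + half)  # roughly 287.7 and 332.3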

With 95% confidence, the mean of the population should fall in this interval.2547

That is the end for our confidence intervals.2558

That is part one of confidence intervals.2561

Hope you join me for t distributions to find out why we use t instead of z sometimes.2566

Thank you for using www.educator.com.2571

Hi and welcome to www.educator.com.0000

Today we are going to talk about t-distribution.0001

Previously, we learned that there are different situations where we use z and where we use t.0004

Today we are going to talk about when to use z versus t.0011

We are going to break it down and sort of reflect and recognize: what are z and t?0015

What do they have in common and what is different about them?0022

For certain cases we are going to ask the question: why not z, why t instead?0024

What does z not have?0031

What is deficient about z? 0033

We will talk about rules of t distribution, they follow certain patterns and t distributions 0035

are a family of distributions separated by degrees of freedom. 0044

Different t distributions have different degrees of freedom.0049

We are going to talk about what are degrees of freedom?0053

We are going to talk about how degrees of freedom relates to that family of t distribution, and then finally summarize how to find t.0056

First off, when do we use z versus t?0065

We covered this in the previous sections, where we looked at whether we knew the population parameters or not.0072

In hypothesis testing, we frequently do not know the mu of the population, but sometimes we are given sigma for some reason or another. 0080

In this case we use z in order to figure out how many standard errors away from the mean we are in our SDOM.0091

But in other situations, we do not know what sigma is.0102

In that case we use t in order to figure out how many standard errors away our x bar is from our mu.0107

Just to draw that picture for you remember we are interested in the SDOM because the SDOM tends to be normal given certain conditions.0118

Although mu sub x bar = mu given the CLT, what we often want to know is: if we have an x bar that falls here or an x bar that falls here,0126

how far away is it from mu sub x bar?0147

In order to find that we would not just use the raw score and get the raw distance but we would want that distance in terms of standard deviation.0153

But because this is the SDOM, we call it the standard error.0165

We would either want a z or t and these numbers tell us how many standard errors away we are from this point right in the mu.0168

What is the z and t?0181

The commonality, as we saw before, is that each tells us the number of standard errors away from mu sub x bar; that is common to both.0186

That is what the z score and t score both have in common.0208

Because of that their formula looked very much the same. 0213

For instance, one way we can write the z formula is like this. 0217

We have x bar - mu or mu sub x bar they are the same and this gives us the distance in terms of just the raw values.0231

Just how many whatever inches away, points away, whatever it is.0251

Whatever your raw score means, degrees away divided by standard error.0258

If we double-click on that standard error and look at what is inside, then the standard error, also written as sigma sub x bar0264

because it is the standard deviation of a whole bunch of means, = sigma ÷ √n.0275

If we look at the t score formula then we have almost the same formula.0284

We have that distance ÷ how big your little steps are, how big your standard deviations are.0294

But when we double-click on the standard error like something on the desktop, you double-click it and open it up what is inside?0302

Well, you could also write this one as s sub x bar and that would be s ÷ √n.0311

Herein lies the difference, right there.0323

That is our difference.0325

Here the difference is that standard error found using the sigma, the true population standard deviation.0327

Obviously if you use the real deal, that is better, more accurate, than the standard error found using the estimated population standard deviation.0345

That is s.0371

S is estimated from the sample and if we double clicked on s it would look like this.0375

It is that basic idea of all the squared deviations away from x bar, away from the mean of the sample:0383

(x sub i - x bar)².0395

We have all the squared deviations, we add them up and divide by n - 1, because this is our estimate of the population standard deviation,0402

and all of that goes under the square root sign, in order to leave us with a standard deviation rather than a variance.0414
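(Putting what was just said into symbols, my rendering in the lesson's own notation:)

z = (x bar - mu sub x bar) ÷ (sigma ÷ √n), where sigma ÷ √n is the true standard error.

t = (x bar - mu sub x bar) ÷ (s ÷ √n), where s ÷ √n is the estimated standard error.

s = √( Σ(x sub i - x bar)² ÷ (n - 1) )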

This is an estimate of population standard deviation. 0421

It is not the real deal, so it is not as accurate. 0426

One thing you should know is that the z score is less variable and the t score is going to be more variable.0430

That is going to bear on which one we use.0438

Okay, so why not z?0443

When we have situations where we do not have the population standard deviation, why not z?0448

Why can we not just act as if we were using sigma when we are using s? Why can we not do that?0458

Why do we use t?0466

It is because when we use s, something a little bit weird happens.0468

The weirdness comes from the fact that this s is much more variable than sigma.0474

Sometimes when we get our estimate, our estimate is spot on.0481

Sometimes when we get our estimate it is off.0485

That is what we mean when it is more variable.0489

It is not going to hit the nail on the head every single time.0491

It is going to vary in its accuracy. 0495

Now z scores are normally distributed when SDOM is normal.0497

Here is what this means.0502

The way you can think about it is like this: when the SDOM is normal and we pick a bunch of points out0503

and find the z scores of those points and plot those, we will get another normal distribution.0516

But that is not necessarily the case for s.0523

Here we need to know that z scores are nice because a z score is going to cut off that normal distribution perfectly accurately for you.0530

Remember, the normal distribution it always has that probability underneath the pro and it has these little marks.0547

These can be set in terms of z scores.0557

What is nice about the SDOM when it is normal is that when we have the z score it will perfectly match to the proportion of the curve that it covers.0563

This will always match.0579

The problem is t scores do not match up in this way.0581

We might ask: why do we not just call a t score a z score and still use the same areas underneath the curve?0587

We cannot do that, because that would be just a superficial change.0600

Here is what we mean by the z scores are normally distributed. 0603

When you get z scores and when we talk about normal distribution, I'm not just talking about that bell shaped curve.0611

Yes overall it should have that bell shaped general shape but it is a little more specific than that.0619

You can have the bell shape and not have the perfect normal distribution.0628

For instance, 1 standard deviation away this is area will give you 34% of the area underneath the curve.0635

This area is about 14% and this area is about 2%.0645

That is a true normal distribution. 0653

This on the other hand, it looks on the surface as if it is normally distributed. 0656

It looks like that bell shaped curve, but it is not. 0662

Here is why.0665

This area, I should have actually drawn it a little bit differently, but I want to show you that do not go by appearances.0666

Appearances can be deceiving. 0677

This might actually be a little bit less than 34%.0678

It might be something like 25%.0685

If that was the case, you would see this area and that area is not 34%.0688

It is 25%.0700

Not only that but this area is now a little bit more than 13 ½, it is around 14%.0701

Now this area is not 2% but 11%.0710

Although it looks like a bell shaped curve, it is not quite a normal distribution because 0715

it does not follow that empirical rule that we have talked about before.0722

What is nice about z scores is that z scores will always fall in this pattern. 0726

These z scores will always correspond to these numbers.0731

That is why you could always use that z table in the back and rely on it.0735

The t scores are not going to do that for you.0739

T scores may not give you that perfect 34, 13 ½ and 2% sort of distribution. 0746

Even though the SDOM might be normal, the t scores are not necessarily normal.0753

We have this normal thing and we have t scores; how do we go from t scores to defining this area underneath the curve?0762

That is the problem we have here.0772

It turns out that if n is big, then this does not matter as much.0774

If n is really large, if your sample size is large, then the t distribution approximates normal.0782

Also, when is n in the middle, and when is n big enough to just count as large?0795

There are all these situations where you have to worry about the t as well as the area underneath the curve.0801

If the t scores are not normally distributed then we cannot calculate the area underneath the curve.0810

If we have our lovely SDOM and we know that the SDOM is nice and normal and we have our mu sub x bar here then everything is fine and dandy.0816

We have x bar here and we want to find that distance, and we find the t score.0832

The problem is we cannot translate from this directly into this area.0838

That is the problem we ran into.0844

What we see here is something more like a t distribution than a z distribution.0847

I am just going to call the z distribution, basically, the normal distribution.0865

The t distribution is often a little bit smooshed.0871

Think of having that perfect normal bell shape.0876

It is squishing the top of it down.0880

It makes that shape ball out a little bit.0882

It is not as sharply peaked but a little bit more variable.0888

We had said the s is more variable than the sigma.0895

It makes sense that the t, which comes from s, is more variable than the z, which comes from sigma.0902

You might be thinking: are we stuck?0911

We are not stuck and here is why.0921

Gosset actually worked out all the t distributions as well.0924

He manually calculated a lot of the t distributions and made tables of the t distributions that we still use today.0928

He published those tables under the pseudonym Student.0944

At the time he was working for the Guinness brewery, and he could not publish under his own name, because Guinness was sort of like: we do not want anyone to know who we are.0949

Our secrets are in our very dark beer.0957

He published under the pseudonym, and because of that some of the t distributions0960

in the back of your book may be labeled the Student's t, meaning Gosset's t.0967

Here is what Gosset found: he found that t distributions can be reliable too.0973

You can know about them; it is just that you need more information than you need for the z distribution.0980

For z distribution you do not need to know anything.0988

You just need to know z and it will give you the probability.0990

Life is simple.0993

T distributions are not that simple, but not that complicated either.0995

They have a few more conditions to satisfy, and the biggest condition that you will have to know about is degrees of freedom.1002

Because for each degree of freedom there is a slightly different t distribution that goes along with it.1012

Let us talk about some of the rules that govern t distributions.1024

The first one you already know: the t distribution gets more normal as n gets bigger.1031

This makes sense if we step back and think about it for a second.1039

Imagine if n were the size of the entire population; then what would your s be?1042

If your sample is like the entire population, then s should be much closer to the actual1054

population standard deviation, much better than when n is small.1071

It is still a little off because of the n-1 thing but it is very close and that is the closest you can get.1077

T distributions are more normal as n gets bigger because s is a better estimate of sigma as n gets bigger.1085

That makes sense.1111

The problem all stems from s.1113

It is the variability of s: as s gets better, less variable and more accurate to the population, t gets better.1116

T is based on s.1128

That is why t distributions are more normal as n gets bigger.1130

T distributions are a family of distribution.1135

It is not just one distribution.1138

It is a whole bunch of them that are alike in some way and it depends on n.1140

It depends technically on degrees of freedom, but you can say it depends on n sometimes, because degrees of freedom is often n - 1.1145

There are other kinds of degrees of freedom; this is the one you need to know for now.1154

But later on we will distinguish between different kinds of degrees of freedom.1159

Degrees of freedom is actually important as a general idea; here it is just the number of data points - 1.1163

We have a family of distributions. 1174

They all look sort of a like.1178

They are all symmetrical and they are unimodal and they have that bell like shape, but they're not quite normal. 1179

Not all of them.1190

As n gets bigger, or as degrees of freedom gets bigger the distribution becomes more and more normal.1191

Let us step back and talk a little bit about degrees of freedom first.1201

Let us assume there are three subjects in one sample, so n = 3.1207

We know just by blindly applying the formula n - 1 that degrees of freedom is 2, but what does this mean?1213

Here is the thing.1224

Let us assume there are three subjects in one sample and let us say it is some score on a statistics test.1228

They can score from 0 to 100, and if I say pick any 3 scores you want, those can be the subjects' scores.1235

Your degrees of freedom would be 3.1244

You are free to choose any 3 scores.1246

You are not limited.1249

You are not restricted in any way.1250

Now suppose you figure out some sample statistic, let us say the mean or the variance.1253

If you figure out any sample statistic, then once you randomly pick 2 of those scores you can no longer just pick the 3rd score freely.1261

You have to pick a particular score, because you have already used up some of your freedom on the mean.1274

The mean will constrain which two scores you could pick.1283

This logic will become more important later. 1288

Let us put some numbers in here.1292

Let us talk about the case when n= 3 and degrees of freedom = 3.1294

It would be like there are three subjects and they could score from 0 to 100.1299

I am totally free.1310

I can pick 87, 52, my last score I can pick anything I want.1314

I can pick 52 again, 100, or 0.1321

It does not matter.1325

I can just pick any score I want.1326

If I erase these other scores I will just put in a different score.1328

It does not matter. 1333

I'm very free to vary.1335

But let us talk about the more common situation that we have in statistics, where we figure out summary statistics.1337

Here we have n=3 and degrees of freedom =2.1345

Here is why.1350

The score is the same, it can go from 0 to 100.1351

We also found the x bar =50.1358

If we found that the x bar = 50, then we cannot just take any score all 3 times.1363

Can we pick any score for the first one?1374

Yes I can pick 0.1377

Can I pick any score for the 2nd one?1379

Sure, I can pick 100.1383

Now that third score I cannot take any score.1386

If I pick 72 my mean would not be 50.1392

If I pick 42 my mean would not be 50.1394

If I pick another 0, my mean would not be 50.1399

That is the problem and because of that if this is my data set so far I have been free to vary.1403

I freely chose this guy but this last one I am locked in.1410

I have to choose 50.1415

That is the only way I can get a mean of 50.1417

That is what we call degrees of freedom.1420
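(Here is a tiny sketch of that constraint in Python; it is my own illustration, not part of the lesson.)

# With n = 3 and a required mean of 50, only 2 scores are free to vary.
target_mean, n = 50, 3
free_scores = [0, 100]                       # the two freely chosen scores
locked = target_mean * n - sum(free_scores)  # the third score is forced
print(locked)                                # prints 50: no freedom left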

This logic is going to become more important later on, but for now what you can think about is this:1423

because we are deriving other summary statistics from our sample, we are not completely free to vary.1429

We locked ourselves down. 1437

We pinned ourselves down and built little gates for us at the borders.1439

Now you know degrees of freedom, and we know that as degrees of freedom, or n, goes up, we see more and more normal-like distributions.1445

I have drawn three distributions here for you.1460

Here you might notice that I have used basically the same picture of a curve for all three of these.1462

You might think they have all the same distribution.1469

Not true, because you have to take a look at the way that I have labeled the t down here.1473

The way that I have labeled this x axis, or t axis in this case, really changes our interpretation of these curves.1482

Remember what the normal distribution says.1493

The normal distribution says 1 standard deviation to the right or positive side, 1 standard deviation 1496

to the negative side that area should be about 68% of your entire curve.1502

Is it true here?1507

No it is not, this does not look like more than 50% of the curve.1510

This looks like maybe 1/3.1521

Maybe a little less than 1/3.1526

This is starting to look more like 60% of the curve, but still maybe not quite 68% of the curve.1528

It is still only looks like may be 50% of the curve or a little more.1539

Imagine that this was shifted in the middle this would be more like 68% of the curves.1544

Something like this would be more like 60% of the curve.1560

That is how you can see that as your degrees of freedom increases it becomes more and more normal.1567

Even this is not quite normal. 1582

This is not quite 68% but a little bit less actually.1585

As the DF gets bigger and bigger that area starts to look more and more like the normal distribution.1588

Now there is another way I can draw these pictures, and I believe that in this other way you can see more easily how this is the more variable version.1598

Remember, I am saying that the t distribution is like you are stomping down on the peak of it and smooshing it out a little bit.1615

I believe that if I draw the same picture in a slightly different way you will see why. 1624

In this case, here is what I have done. 1630

I have kept the t axis the same and now it is labeled in the same way, but I have drawn these distributions in a slightly different way. 1634

Now this one is a little wider and this one is less wide and this one is even less wide.1647

It becomes more narrow, more like the normal distribution.1656

Notice that if I drew the line here, a little bit after 1 standard error away, we see there is a little of that curve out on the side.1661

You know if that is 50% and maybe 15%, 10%, something like that. 1675

This might look more roughly equivalent to this, maybe a little bit less.1685

Maybe like 20%.1693

This looks like much more than this.1695

Maybe this is like 25 or 30% even compared to this.1700

In that way, using the same concepts but drawing the picture in a slightly different way, you can see that this distribution is much more variable.1706

Its spread is very wide,1719

whereas this distribution is much less variable.1721

Remember t is all because of the variability found in s.1725

When n is very small, s is very variable, so the t distribution is also quite variable.1731

As n gets bigger, s gets more and more accurate, more like the actual standard deviation of the population.1741

And because of that, it becomes more and more normal.1752

Let us break this one down.1755

In degrees of freedom of 60, here is what it might look like.1761

It might look like something that is very close to our 34, 13½, 2% normal distribution.1769

If we drew our little lines there, that would probably look very close to this picture.1777

It looks pretty close.1792

When we draw something like this, this area might only be 25% of this whole curve.1797

This other areas also combined 25%.1810

If I split this like this, then this would be something like 14%. 1817

A little bit less than this but still quite a bit.1826

This one might even be more than 14%, maybe like 18%. 1832

So you can see that this is a different distribution, even though I have drawn it the same and just labeled it differently.1840

In reality, it would look more like this if you kept the t axis constant.1849

It would look sort of smooshed out.1855

How do you find t at the end of the day?1859

How do you find the t and not only that how do you find the probability associated with that t?1867

For instance, where t is greater than 2?1874

How do you find these probabilities?1878

We know how to do it for z but how do you do it for t?1881

One thing that you could do is look at the back of your book; usually in the appendix section1884

there is something called the t distribution, or the Student's t distribution, that you can look at.1892

Oftentimes it will have degrees of freedom on one side like 2, 3, 4, 5 all the way down and then it will show you either one tailed or two tailed area.1898

It might give you .25, .10 and .05, .025.1914

It might give you these areas.1926

The number right here tells you the t score at that place.1929

If you wanted to know where the 25% cutoff is, what the t score is for the degrees of freedom = 2 distribution, you would look right here.1935

If you wanted to know it for .025 then you would look here.1962

You want to look for degrees of freedom, as well as how much of the curve you're trying to cover.1975

That is definitely one way to do it.1984

The other way you could do it is by using Excel; just like Excel will help you find probabilities1988

and z scores for the standardized normal distribution, you can also find t distribution values in Excel.1995

It needs a couple of hints.2003

Let us start off with TDIST.2006

TDIST is for the case where you want to find the probability but you have everything else.2012

Here is what TDIST takes: you put in the degrees of freedom, and you put in the actual x value.2019

You can think of the x value as the t value and it will only take positive t values.2033

For instance, a t value of 1; and the number of tails, whether you want this entire two-tailed area or just the one area alone.2039

You can put in either one or two, and then it will give you the probability of this area.2058

I can show you right here.2066

Let us put in TDIST for t = 1 and degrees of freedom 2, and let us look at what it says for two tails.2070

It says 42%, and if you look at the exact same thing for one tail, it will just divide this area in half.2098

21%, half of 42%; that makes sense.2111

Basically this is giving you this area + this area if you want 2 tails.2116

But if you only want one tail it will just give you this area.2122
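(For reference, here is the same lookup sketched in Python with scipy.stats rather than Excel; scipy is my assumption, the lesson itself uses TDIST.)

from scipy.stats import t

df = 2
print(2 * t.sf(1, df))  # both tails beyond t = 1: about 0.42, like TDIST(1, 2, 2)
print(t.sf(1, df))      # one tail beyond t = 1: about 0.21, like TDIST(1, 2, 1)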

We know that for a 95% confidence interval we often use a z score of 1.96, and that will give us a tail of .025, or if we count two tails, 5%.2125

Let us see what this gives for 1.96 when we have a degrees of freedom of only 2.2149

Let us put in 1.96.2158

If we put that in as a z score with 2 tails, we would only get 5%, but let us see what we get here.2163

Degrees of freedom 2 and number of tails let us put in 2.2173

Do you think this should be more or less than 5%?2179

Let us think about this.2183

The t distribution is slightly smooshed; it is more spread out, and because of that it is going to have this longer tail.2186

It is not going to be nice and all compact in the middle.2196

It will be spread out.2200

We would imagine that it has a fat tail.2202

I would say more than 5%.2204

We see that it is almost 20% at a t of 1.96.2207

Let us put that same z score in.2216

NORMSDIST is what we use whenever we want the probability, and we put in 1.96.2218

NORMSDIST gives us the area from the negative side, so we want 1 minus that, and this gives us just 1 tail.2227

I am going to change this to 1 tail, so we could look at it. 2241

Here on one of our tails, one side of it, it is almost 9 1/2% is still out there.2245

But when we use the z score only 2 1/2% are still out there.2253
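
As a quick side by side check, with the outputs being approximate:

    =TDIST(1.96, 2, 1)      gives roughly .094 for the t with degrees of freedom 2

    =1 - NORMSDIST(1.96)    gives roughly .025 for z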

Let us look at the same t distribution for a very high degrees of freedom. 2258

Let us try 60.2271

Even with something like 60 we are starting to get very close to the z distribution, but still this guy is more variable than the z distribution. 2273

Let us see if we could go even higher. 2287

Instead of 60 I am going to put in 120.2289

Notice we are getting closer but still these are more variable than these.2294

Let us go a little further.2303

Let us go to something like 1000 and see what happens there.2304

We are getting close but still slightly more variable.2309

That is a good principle for us to know.2316

The t distribution although it approximates normal, it approximates it from one side.2318

Here is the standard normal distribution value, .02499.2324

There it is and it is getting closer and closer to it, but it is approaching it from the high-end.2329

These numbers are dropping and getting really close to that, but not quite hitting it.2336
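
You can watch that one sided convergence yourself with one tailed calls something like these; the outputs should sit slightly above .02499 and keep shrinking toward it as the degrees of freedom grow:

    =TDIST(1.96, 60, 1)

    =TDIST(1.96, 120, 1)

    =TDIST(1.96, 1000, 1)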

Now you know how to get the probabilities, but what if you have the probability and you want to find the t score?2345

What would you do?2355

In this case, you would use the inverse t function, TINV, where inv stands for inverse.2356

Here you would put in the two tailed probability.2362

Let us say we want to know what is the t boundary for if we wanted only 5% in our tails?2366

Here is the situation I am talking about for this one.2374

We had this distribution and we know we want these to be .025, just like a z distribution.2378

We want it to .025 but we want to know what these numbers are here.2393

We want to know what these numbers are.2398

It depends on your degrees of freedom.2403

Let us try degrees of freedom of 2, 60, 120, and 1000.2405

Let me label this.2413

Here we get the probabilities from t dist and here are the probabilities from standardized normal distribution, or the z distribution.2424

We do not want the probabilities we actually want the t boundaries themselves and the z boundaries themselves. 2443

If we want the z boundary at .025, or at 5% for two tails, we would use NORMSINV and put in our probability.2460

I forget if it is one tailed or two tailed.2472

Let us try one tailed but we would need two tails.2474

We get very close to -1.96.2477

We just have to memorize that but that is why this is saying at -1.96 you have about 2 1/2% in that little tail. 2489

Now what about the t?2501

In Excel it is inconsistent: for z it gives it to you on the negative side, but for the t it only gives it to you for the positive side.2504

That is confusing but I often do not memorize that.2512

I just try out a couple of things until it spits out the thing I'm looking for.2515

You have to understand how these things work so that you could predict what's going on. 2520

We will use TINV; we want to find the t from the probability, and I believe this is going to be two-tailed.2527

.05 and degrees of freedom of 2.2538

We put in .05 and the degrees of freedom, just to test whether this is one tailed or two tailed.2543

Let me put that in.2563

I believe you have to give it two tails.2565

You have to put in the two tails probability here so that is .05 and the degrees of freedom 2 and this will give us these boundaries.2570

This will only give us the positive boundary, but because it is symmetrical, you automatically know the other side.2580

This would give us a boundary of 4.3.2589

Remember, for the z score this boundary would be 1.96, but for a t distribution with degrees of freedom of 2, it is 4.3.2593

That is quite high because remember it is really spread out.2604

You have to go way out far in order to get just that 2.5%.2609
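
To check this boundary yourself:

    =TINV(0.05, 2)      gives about 4.303

    =NORMSINV(0.025)    gives about -1.96, the z boundary for comparison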

What about this boundary for degrees of freedom of 60?2612

What do we get then?2621

We get something very close to 1.96 but it is a little bigger than 1.96.2624

Remember, because the t-distribution is more variable, you have to go farther out there in order to capture just that small amount of .025.2630

That means 2.5%, or .025.2641

If we go to 120 we should expect that boundary to come closer and closer to 1.96 from the big side, but not quite hit 1.96, or more precisely 1.9599.2646

We are getting close to that 1.96 number, but still it is a little bit higher.2671

Finally we will go buck wild and put in degrees of freedom of 1000; we get something very close to 1.96 but still a little higher than 1.96.2678
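
The boundaries we just stepped through should come out roughly like this:

    =TINV(0.05, 60)     gives about 2.000

    =TINV(0.05, 120)    gives about 1.980

    =TINV(0.05, 1000)   gives about 1.962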

Those are two different ways that you can find the t, as well as the probability that t is associated with.2692

Remember the degrees of freedom and you have to know whether you want two tailed probability or one tailed probability.2701

As well as your degrees of freedom. 2714

That is what you will have to know in order to look things up on a t distribution.2717

Let us go on to some examples.2722

In each of these situations which distribution do you use, the z or the t?2729

There are 500 million people on Facebook; how many people have fewer friends than Diana, who has 490 friends?2734

Assume that the number of friends on Facebook is normally distributed and here they give you the sigma.2742

We know that you can use the z distribution here.2749

Here the researchers want to compare a given sample of Facebook users' average number of friends (a sample of 25) to the entire population. 2753

What proportion of sample means will be equal or greater than the mean of this group?2763

N = 25, but the mean is 580.2772

They have an average of 580 friends.2779

Here I would not necessarily use z, and I also do not have the standard deviation.2783

Maybe this is connected to the previous problem.2796

If so, if I assume that they come from the whole population and they give us the information for the whole population here.2800

If sigma = 100 then I will use z.2811

This one I probably left out some information.2816

What about this last one? 2820

Researchers want to know the 95% confidence interval for tagged photos, given that a sample of 32 people 2822

have an average of 185 tagged photos and a standard deviation of 112.2829

Here it is very clear, since I know s but I do not know the sigma for tagged photos.2835

I only know the sigma for friends, but not for tagged photos.2844

In this case, what I would do is use the t distribution because I will probably have to estimate 2848

the population standard deviation from the sample standard deviation.2855

Example 2, what we get is that problem and we just have to solve it. 2860

There are 500 million people on Facebook but how many people have fewer friends than Diana?2869

Here it is good to know that we do not need a sampling distribution of the mean.2874

We do not need the SDOM.2880

In fact, we are just using the population and Diana.2882

We could draw the population and it tells us that the population is normally distributed.2886

Number of friends is normally distributed and so the mu = 600 and a standard deviation is 100.2895

This little space is 100 so this would be 700.2914

Diana has 490 friends so here would be 500.2920

It is asking how many people have fewer friends than Diana?2929

How many have that?2937

It is tricky because this will give us the proportion, but it would not give us how many people.2940

What we will have to do it multiply that proportion to the 500 million.2950

This is all 500,000,000 and that is 100%.2956

We will need to know some proportion of them that have friends fewer than Diana, fewer than 490.2961

We will have to figure that out and so we will have to multiply 500 million by the percentage. 2974

Let us get cracking.2981

We can figure out the z score for Diana, and that would be (490 - 600) ÷ 100.2984

I only need to do standard error if I was using the SDOM but I am using the population standard deviation.3010

It is often helpful to draw this.3016

Here we have -110 ÷ 100 = -1.1.3018

The z score of -1.1 and I want to know the proportion of people who have friends less than Diana.3031

You can look this up in the back of your book, so I would just look up the z score of -1.1, or you could put it into Excel: NORMSDIST(-1.1).3046

I should get about .1357 so that would be .1357.3065

That is, about 13½% of the population has fewer friends than Diana.3081

What I want to do is take only that 13.57% of this entire population, and that would be 500 million × .1357. 3089

You can do this on a calculator, so that × 500 million = 67.83 million.3103

Do not forget to put the million part.3117

It is not that you only have 67 people who have fewer friends than Diana.3124

That would be our answer right there.3129
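
Putting the whole Diana calculation in one place:

    z = (490 - 600) ÷ 100 = -1.1

    =NORMSDIST(-1.1)        gives about .1357

    =0.1357 * 500000000     gives about 67.8 million people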

The researchers want to compare a given sample of Facebook users' average number of friends (a sample of 25) to the whole population.3132

What proportion of sample means will be equal or greater than the mean of this group?3146

Here I am going to make an assumption, because there is no other way to do this problem.3159

I am going to assume that we could use the information from example 2 because we are talking about the same thing, the number of friends.3165

We actually know the population.3173

The population is approximately normally distributed with the mu of 600 and standard deviation of 100.3176

Mu= 600, standard deviation=100 and from this I need to generate an SDOM because 3195

now we are talking about samples of people not just one person at a time. 3205

Because of that I need to generate SDOM for n = 25.3211

The nice thing is we already know the mu sub x bar = mu that is 600 but we actually also know 3216

the standard error because standard error is standard deviation ÷√n.3234

In this case, it is 100 ÷ √25 =20.3240

1 standard error away here is 20. 3246

This would be 580, 560, and so forth.3255

It is asking what proportion of sample means will be equal to or greater than the mean of this group?3262

Equal to or greater than means all of these, and they are just asking for proportions, so we do not have to do anything to it once we get the answer. 3271

Well, it might be nice if we could actually get the z score for this SDOM.3281

Here, instead of just putting 580 I would want to find the z score here.3290

Here are friends but I want to know it in terms of z score. 3296

It is actually really easy because it is the z score of -1 and we can actually just use the empirical rule to find this out because we know at the mean, 3303

at the expected value we know that this is 50% and this is 34%. 3318

If we add that together, the proportion of sample means greater than or equal to the mean 3327

of this group = the proportion where the z score is greater than or equal to -1, and that is .84, or 84%.3341
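
Condensed, this example is just:

    standard error = 100 ÷ √25 = 20

    z = (580 - 600) ÷ 20 = -1

    =1 - NORMSDIST(-1)    gives about .84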

Final example: researchers want to know the 95% confidence interval for tagged photos given that 3357

a sample of 32 people have an average of 185 tagged photos and a standard deviation of 112.3366

Interpret what the CI means.3375

Here we do not know anything about the population, but we do know x bar which is 185 3377

and we do know the standard deviation of the sample s which is 112.3386

We also know n is 32.3393

Remember, when we talk about a confidence interval we want to go from the sample to figure out where the population mean might be.3396

What we do is we assume that we are going to pretend there is an SDOM here, and we assume that 3408

x bar is going to equal the expected value of this SDOM, which is 185.3420

From there we could actually estimate the standard error by using s.3428

Here mu sub x bar = 185 (this is assumed), and since we do not have sigma, s sub x bar = s ÷ √n = 112 ÷ √32.3438

If you pull up a calculator you could just calculate that out 112 ÷ √32 and get 19.8.3460

We know how far the jumps are, and because we used s we cannot just find the z score; we have to find the t score.3477

We will have to use the t score in order to create a 95% confidence interval.3496

Although, I do not know what the t distribution for degrees of freedom of 32 - 1 looks like.3504

I do not know what the degrees-of-freedom-31 t distribution looks like.3515

We will have to figure that out.3520

What we eventually want is this to be .025.3524

These are together a combined two tailed probability of 5% and we will have to use t inverse because we already know the probability.3531

We want to go backwards to find the t. 3544

We use TINV and put in our two-tailed probability, .05, and our degrees of freedom, which in this case is 31.3548

We ask what is the t and it says it is 2.04.3559

The t right here at these borders is 2.04 and because it is symmetrical we also know that this one is -2.04.3564

In order to find the competence interval we are really looking for these raw values right here.3577

In order to get that we add the middle point and add 2.04 standard errors to get out here and we subtract out 2.04 standard errors to get out here.3587

The confidence interval will be x bar + or - the t score times the standard error: 3605

how many jumps, multiplied by how big these jumps actually are, which is this t score right here multiplied by s sub x bar.3615

If we actually put in our numbers that is going to be 185 + or -2.04 × 19.8.3627

If you just pull out a calculator we could get 185. 3638

Make sure to start with the = sign (even I forget sometimes): =185+2.04*19.8, and remember Excel knows order of operations. 3643

It will do the multiplication part before it does the addition part.3660

The upper limit will be 225.39 and the lower limit will be 144.61.3664

I just rounded to the nearest tenth and this would be 225.4 and this would be 144.6. 3683

We need to interpret what the CI means. 3697

This means that there is a 95% chance that the population mean will fall in between 144.6 and 225.4; that is the interval.3705
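
The whole computation, condensed:

    =TINV(0.05, 31)    gives about 2.04

    standard error = 112 ÷ √32 = about 19.8

    185 + 2.04 × 19.8 = about 225.4, the upper limit

    185 - 2.04 × 19.8 = about 144.6, the lower limit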

That is it for t-distributions. 3721

Thank you for using www.educator.com.3724

Hi and welcome to www.educator.com.0000

We are going to be talking about hypothesis testing today.0002

The first thing we need to do is situate ourselves where do hypothesis testing fit in with all of inferential statistics.0005

We are going to talk about how to create the hypothesis that we are going to test and that hypothesis is going to be about a population.0015

When we say about a population we mean about population parameters.0023

There are actually two parts to any hypothesis that we test.0028

There is the null hypothesis and the alternative hypothesis.0033

We are going to talk about how they fit together.0036

We are going to talk about potential errors in hypothesis testing because it is good to know going into it.0039

Finally, we are going to end with the steps of hypothesis testing, and we are going to do those steps 0045

when sigma, the population standard deviation, is given and when it is not given.0051

And if you have just refreshed yourself with the confidence interval lesson, 0057

you can probably guess that when sigma is given we are going to be using z distributions, or normal distributions.0063

When sigma is not given and we have to estimate the population standard deviation from the sample using s then we will use t-distributions.0072

In order to use the t distribution we need to figure out the degrees of freedom.0086

Let us go back and situate ourselves with all of inferential statistics.0094

Basically the idea of inferential statistics is that we use some known populations to figure out the sampling distribution.0101

The one that we are using a lot is the SDOM.0115

We are going to use another one later.0121

We figure out sampling distributions and now we want to compare a sample from an unknown distribution.0123

We want to compare a sample from that to the sampling distribution. 0136

If the sampling distribution says the sample is very likely, then we might say that maybe 0145

this unknown population is very similar to the known population.0154

But if the sampling distribution tells us the sample was very unlikely then we could rule out 0159

the known population as a potential candidate for this unknown population.0169

In doing all of this in inferential statistics there are two issues that come up.0176

What happens when we do not know what the population looks like at all, and 0183

we want to try to figure out where the population mean or different parameters of the population might be?0188

In that case we use confidence intervals and when we use confidence intervals we try to figure out where mu is from x bar.0195

Another way of thinking about it is that we try to figure out something about the population 0211

from the sample information, because we have that sample information. 0216

Another technique that we could take is that we could use this idea and say how do we decide when a sample is unlikely?0220

How do we decide where to draw the line?0235

When do we decide this side is weird?0238

In order to do that we now have to learn about hypothesis testing.0243

The goal of hypothesis testing is slightly different from the confidence interval, yet related.0248

It is the flip side of the coin. 0254

Basically, you are going to try to figure out whether your x bar is unlikely given a hypothetical population.0256

In that case, what we are doing is we are setting up a population.0281

It is like the population is stable and we are going to compare the sample to it.0290

Here is our sample and here is our set standard.0296

Here the population is moving but this is the target and this is what we use to get that target.0305

Here this is already set and we are comparing this guy to this guy.0316

In this way you need both confidence intervals and hypothesis testing to give you the full story. 0323

You might also hear that hypothesis testing another word or phrase for it will be a test of significance.0331

A lot of students misinterpret that to be a test of importance.0343

That is the modern way the word significance is used but that is not actually what we are talking about here. 0348

When we call this at test of significance this is actually using the meaning of significance 0354

from the early 20th century when this test was actually invented.0367

Back then significant just meant prominence or standing out.0370

I like to think of it as being weird like how much does this sample stand out?0377

Is that significant?0386

Is it prominent and different or is it very, very similar?0387

Those are the ways you could think about it. 0392

I do not want to think of it as a test of importance.0398

Now that we know why we need hypothesis testing, how do we hypothesize the population?0401

How do we make up a population?0411

Do we have to make up all the individual numbers of the population?0413

What do we got to do?0415

Here is the thing, we could assume things about population parameters and test those assumptions. 0417

We do not have to simulate every single member of the population; we could just make some assumptions about parameters.0424

In order to set up a hypothetical population you set up a parameter. 0431

For instance, you say mu is equal to something.0437

That is how you set up a population then check whether our sample is likely to have come from such a population.0440

In doing this we need to figure out how to hypothesize rigorously, so that we can get as much bang for our buck from our hypothesis.0448

In order to do this we have two parts to a hypothesis and this is going to make our hypothesis better.0462

The first part of hypothesis is what we call the null hypothesis and null means 0 or not important.0472

The null hypothesis in this case is your hypothetical population.0487

We write the null hypothesis like this: h sub 0, or h sub naught.0492

We might say mu= 0.0502

We have created a null hypothesis. 0507

I just made up the 0, but there are better ways of doing this and we will talk about those later.0510

We could also write this in terms of standard deviation or other things but frequently 0516

you will see the mean being the hypothesis of the population.0532

The alternative hypothesis is what do we learn if this is not true?0536

If we rule this out then what have we learned?0544

In that way these two make up the full hypothesis. 0548

If we find this then we learn this.0554

If we do not find that we learn this other thing.0557

What we learn if this is not true is at least that mu does not equal 0.0560

This is called the alternative hypothesis and it helps us at least figure out something when we do not figure out that.0566

If we do not find this to be true at least we find this to be true.0575

If this is not true then we will always find this to be true.0580

These two hypotheses together this is more powerful than just having one hypothesis alone.0584

We will talk a little bit about why and it goes back to that idea of the test of significance.0597

Hypothesis testing or the test of significance is a test of weirdness.0607

It tests how weird the x bar is.0617

This is the question that it can answer: is the x bar weird?0625

Is it different from the population?0633

But can it actually tell us whether x bar is very similar to the population?0636

That is not what this number gives you; it only tells you how weird it is.0642

It does not tell you how similar it is.0646

These are actually not flip sides of the same coin and because of that our goal here in all 0648

of hypothesis testing is we find out the most when we reject the null hypotheses.0658

That is when we would find out the most.0668

This may not seem like we are finding out a lot, because we have only ruled out 0.0671

There is an infinite number of mus that we could test, but actually in hypothesis testing 0676

what you want to do is reject the null rather than accept or fail to reject the null.0682

Just because it is set up as a test of weirdness that is the only thing you can find out.0690

It is true that it would be nice if we could find out more than that, but that is the limitation of hypothesis testing.0695

It is a limitation that is also a fact of life, because even with its limitations hypothesis testing is still a powerful tool.0703

But it is good to keep in mind that this is a limitation.0714

A little bit more about these two hypotheses. 0716

These two hypotheses, the null and the alternative, sometimes you might see the alternative written as h sub 1.0722

They must be mutually exclusive. 0729

This means if one is true the other cannot be true.0732

If the other is true, the first cannot be true.0736

You cannot have a null hypothesis and alternative hypothesis like mu=1 and mu=2.0739

That pair does not cover all the possibilities between them.0748

If one is false, the other one does not have to be true.0751

It could be true but it does not have to be.0755

Whereas mu does not equal 1, mu = 1.0758

Those are mutually exclusive. 0763

If you rule out one you absolutely know that the other one has to be true.0764

Together they must include all possible values of the parameter.0768

You can think of the parameters such as mu on a number line and you need to cover the entire number line.0774

You can have a null hypothesis like mu > 0.0782

You might say mu >0 but then your alternative hypotheses have to be mu < or = 0.0788

You color that in and color all of that in too because that is where you will cover the entire space, the parameter space.0800

If these are both true, here is what you get.0811

One of these two hypotheses must represent the true condition of the population.0815

You find out something that is true about the population and then as we said before, 0821

typically in research your goal is to reject the null and find support for the alternative hypothesis. 0827

You cannot actually prove the null hypothesis, but you can reject the null hypothesis.0833

And the whole reason is because hypothesis testing is a test of significance or test of weirdness.0838

This x bar stands out.0848

It can only tell me whether it stands out a lot from the population or not.0851

It cannot tell me whether it is probably similar to the population.0856

It cannot tell me that part.0860

Let us talk about some errors that we could potentially make in hypothesis testing. 0862

There are some foibles you need to watch out for.0868

Well first, it helps to imagine that there are two potential realities and we do not know which one of them is true.0871

One is that the null hypothesis is true.0883

It is actually true.0887

We do not know yet, but it is true.0888

Other possible reality is that the null hypothesis is false.0892

Your sample did not come from the population.0898

Those are your two possible realities but only one can be true at any given time. 0901

You cannot have both the null population being true and false at the same time.0907

You got to have one or the other.0915

These two boxes, this one and this one, have to add up to 100%, and these two boxes, this one and this one, also have to add up to 100%, or 1.0916

That is because within each of these two possible realities, the decision probabilities together have to cover 100%.0934

If this is true then this is not true.0942

Given that this is reality but we do not know reality, what is the deal?0944

How do we put that together with hypothesis testing?0955

When we do have hypothesis testing we have 1 of 2 outcomes. 0959

We could either reject the null successfully, that is what we wanted to do.0964

We could either reject the null or we can fail to reject the null.0968

We do not call this accept the alternative or accepting the null.0972

We call it failing to reject because that is how much we wish we could have rejected the null.0980

We failed to reject the null. 0987

Let us think about these two decisions in conjunction with reality. 0989

Here is the thing, when we reject the null hypothesis and say this sample did not come from the population. 0997

If it did not come from that population we would be correct here. 1006

This would be a correct decision. 1011

If this is our decision and this is indeed the world we live in, this is a correct decision.1014

If we fail to reject the null, however, and the null is actually true so we should not have rejected it, 1021

then this also represents a correct decision.1034

Good job not rejecting the null because it is right all along.1039

These two are ways that we could be correct.1044

That leaves us two ways that we could be incorrect. 1048

One way is this: we could reject the null, saying that it is false, 1051

but the null is actually true. 1063

This is an incorrect decision.1068

We call this a false alarm because we are rejecting that null.1074

It is false alarm we should have not rejected that null. 1084

The probability of that false alarm is represented by the term alpha.1088

On the other hand, there is another way that we could be wrong and that way is this.1097

We could fail to reject the null.1107

We could say this sample is not weird. 1109

We fail to reject it but the null is wrong.1114

This is also an incorrect decision.1121

This is not called a false alarm instead it is called a miss.1127

This is going to be called the beta rate.1134

Obviously the alpha and the beta have a probability of less than 1, but greater than 0.1143

What we want to do in hypothesis testing is reduce our chance of errors.1150

We can also figure out what is our probability of getting different kinds of correct decisions?1157

We know that this column is one version of the world, so it should add up to 100%; this is the probability of failing to reject when we should have kept the null around.1167

This probability is 1 – alpha.1183

This is what we call a correct failure.1188

It sounds odd but it sounds good that you have failed.1198

You failed to reject it and you should have failed to reject it.1203

It is like you failed to reject a date and you know that date was really good.1208

He is a good guy so you should have failed to reject him.1216

On the other hand, this is another possible set of what could be right in the world.1225

This should add up to 100%, so this should be 1 – beta.1232

That is our rate of correct decision where we successfully rejected the null and it is indeed false.1238

In dating it might be reject somebody who comes up to you and good job you should have rejected them.1245

They are a total loser.1253

That is what we call a hit.1255

It is like in a battleship when you hit it.1258

This is the hit rate, miss rate, false alarm rate, and the correct failure rate.1263

Let us talk about the steps of hypothesis testing. 1272

Well there are going to be 5 steps.1281

The first step just starts out with setting up your hypothetical population.1284

This is the hypothetical population and you need to create both a null hypothesis and an alternative hypothesis then pick a significance level.1290

You can think of the word significant as stand-out-ness: how much does it stand out?1304

How much does it have to standout?1310

When it stands out a lot you have a very low false alarm rate.1313

If your x bar is out there and then you have a small chance of false alarming.1318

You are saying this really does not look like it belongs in the population because it is so out here.1326

And that is where your false alarm rate is low. 1335

You want to set a low one. 1338

If you want to be more conservative, you want to set an even lower false alarm rate. 1340

For instance, alpha = .01 that would be even lower rate of false alarm.1344

Then you want to set a decision stage.1351

So far we have not done anything except setting things up, and we are still setting things up.1355

We set up the decision stage and what you want to do is draw the SDOM, the sampling distribution.1361

We have the hypothetical population and we create a sampling distribution so that we can take our sample 1368

and compare it to that sampling distribution. 1375

You draw the SDOM and you identify the critical limits.1378

Here is my SDOM and you want to identify the extreme regions where you say if your x bar 1383

is somewhere out here then you want to reject the null.1396

You want to say it is very, very unlikely to have come from this null population.1402

Then choose a test statistic because the test statistic will tell you how far out from the mean it is in terms of standard error.1407

How many jumps out you are?1419

This step is called choosing a critical test statistic.1421

You are saying what are the extreme boundaries such that if x is outside those boundaries we reject it.1429

If it is inside the boundaries we do not reject.1440

And then we use the sample. 1444

This is the first time we are doing anything with the sample.1447

We use the sample and the SDOM from here to compute the sample test statistic and p value.1450

And the p value is going to tell you given that x is out here how much of that curve does it actually cover?1458

What is the probability of false alarming there at that particular value?1468

And then you compare the sample to this SDOM population and you decide to reject the null or not?1476

One word about p value versus alpha.1487

The p value is going to be the probability of belonging to the null population given sample x bar.1494

What is the probability that this value belongs in here?1513

Alpha is what we call the critical limit. 1519

This is what we are able to tolerate we just set it.1526

Alpha is often decided just by the scientific community. 1532

In fact alpha is often set to something like .05 or .01 because that is commonly accepted in scientific communities.1536

We call that just being by tradition or convention.1546

It is not that we figured out the alpha level.1550

On the other hand we figure out the p value level given our sample x.1553

And what we want is for the p value to be lower than the critical limit.1559

Let us go through some examples.1566

Here is an example of single sample hypothesis testing, also called t tests of 1 mean or single mean t test.1572

This is also another term for it.1594

Let us talk about this when sigma is available. 1597

The population standard deviation has been given to us.1601

Here it says that the average Facebook user has 230 friends, with a sigma of 950; a random sample of college students (n = 239) showed that the sample mean was 393 friends.1605

Are our college students like the average Facebook user?1620

Let us try to think about this by using hypothesis testing.1624

The first thing is, perhaps we should set up the standard population as the average Facebook user,1631

the real population of all Facebook users.1643

Our null hypothesis might be something like mu= 230.1648

The null hypothesis is that our college student sample is just like everybody else. 1655

The alternative hypothesis is that our samples are not similar to that population. 1667

Let us set the significance level. 1678

Here we could just use alpha = .05 by convention.1683

We could say that is traditional, we will use that too.1693

Let us set the decision stage. 1698

Here we want to start off by drawing the SDOM and I like to label for myself that it is the SDOM 1701

just so that I do not get confused and mistake it for the population or something like that.1711

We want to draw a critical limit.1717

If this is the only false alarm that we are willing to tolerate then we might say everything out here we reject.1721

Everything out here we reject.1730

That would mean that everything in here is 95% and out here these two regions together add up to 5%.1734

Because we are going to reject it there is still some probability that this sample belongs to the population.1745

But we are going to reject the null.1751

We need to split up 5% distributed to both sides so this would make this 2.5% and this would be also 2.5%.1754

That is the error that we are going to tolerate.1768

I will color in my rejection regions right now; that means if it is out here in the extremes I am going to reject my null hypothesis.1771

And because we know that this SDOM comes from the population, that is how we are creating this SDOM.1783

We know that the mu of SDOM is exactly equal to the mu of the population so that will be 230.1792

Mu sub x bar = 230.1801

We can also figure out the sigma sub x bar, and that would be just sigma ÷ √n, which is 950 ÷ √239.1805

You could just pull out a calculator to do this.1819

I am just going to use the blank Excel file and here is 950 ÷ √239= 61.5.1823

That is my standard error of this population.1839

It is nice to have that, but it would also be nice to know: what is the z score out here?1848

We use z score because we are using sigma.1856

What is the z score out here?1861

Actually I had just made you memorize it when we previously talked about confidence intervals so we know that is 1.96 and -1.96.1864

If you wanted to you could also figure it out by using either the table in the back of your book or Excel 1876

so we could put in NORMSINV because we have the probability.1885

I want the boundary for a two tailed .05, but NORMSINV takes a one tailed probability.1890

The one tailed probability is going to be .025, way down here.1902

This little bottom part down here covers .025 of the curve, and Excel is telling me that the z score right there is about -1.96.1910

Now that we have all of that settled, we could start tinkering with our actual sample. 1924

Let me draw some space here.1933

Let us talk about our sample.1938

When we talk about our sample we should figure out how far away is our sample mean?1942

We do not just want to know how far away it is in terms of friends; we want to know 1955

how far away it is in terms of the standard deviation, because only the standard deviation will tell us what proportion of the curve is colored in.1962

Even if we find out the actual raw distance away 163, we do not know where that is in relation to this curve.1971

It would be nice if we could find the z score of 393 then we will know where it is in relation to this curve.1983

That would be 393 – 230 so how far is it away from 230, all divided by the standard error 61.5 1990

because that will give me how many standard errors away we are.2002

Let me just calculate that.2007

That would be 393 - 230 and I need parentheses because I need it to do the subtraction before the division and that gives me 2.65. 2011

My z score is 2.65.2032

Here, this may be 1 z score away, this is almost 2 z scores away, and let us say this is 3 z scores away. 2036

I know that my 393 is somewhere around here because it is around 2.65.2049

This area is very tiny, so I need to find the p value here. 2061

What is the p value here?2070

What is the probability that x bar is greater than or equal to 393?2072

That equals the probability that z is greater than or equal to 2.65.2091

Not only that but remember we have a two tailed hypothesis.2100

We are interested in either being greater than or less than the mean.2106

We actually have to find this thing out and multiply it by 2.2112

What you can do is look this up in the back of your book and multiply it by 2 or Excel will actually calculate it for you 2117

like you could put in normsdist and put in the negative side because normsdist gives it to me going from the negative side to positive side.2128

I am going to color this part first.2143

-2.65 and it should be a very tiny number that will be .004.2144

That is a tiny number and then we take that one side and we multiply it by 2 to give us our p value.2153

What we are really doing is coloring this space on the positive side, and also going to -2.65 2160

and coloring that space, and adding those two together.2179

That will give us .008.2183
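
Here is that whole test in one place:

    z = (393 - 230) ÷ 61.5 = about 2.65

    =2 * NORMSDIST(-2.65)    gives about .008, the two tailed p value, which is less than our alpha of .05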

What about a single sample hypothesis test when sigma is not available?2188

Well this is the exact same problem in fact I have crossed this out so you can no longer use it. 2201

It is no longer available to you.2208

Here what we have to do is estimate sigma and use s instead of sigma.2212

Let us go ahead and start off just hypothesis testing.2219

Our null hypothesis is mu=230, that our sample of college students is just like everybody else. 2222

Our alternative is that they are different from everybody else. 2233

Different in some way, either have more friends or less friends.2239

We also need to pick a significance level.2244

How extreme does this x bar have to be?2248

We are going to pick alpha=.05 just by convention we do not figure it out or anything. 2255

And then we need to set our decision stage. 2260

Here we want to start off by drawing our SDOM; it helps to keep in mind that this is a bunch of means, a bunch of x bars.2264

We can just use this information because this is our known population.2276

We are going to use that information to figure out our SDOM.2284

Here we run into the problem how can we figure out standard error?2288

Well, we cannot figure out sigma sub x bar but we can actually figure out s sub x bar.2294

That standard error using s instead of sigma.2302

That will be s sub x ÷ √n. 2307

We have s for our sample, the standard deviation of our sample, which is 447; so 447 ÷ √239.2316

And I will just pull out my Excel in order to calculate this.2326

447 ÷ √239, and I get 28.9.2346

I am actually going to draw in my rejection regions, anything more extreme is going to be rejected.2356

Fail to reject in the middle and this rejection region is .025 and this rejection region is .025 because 2375

I need to split that significance level in 2.2389

What we do here is we want to figure out what is our actual t statistic?2393

How many standard errors out are we when we talk about these borders?2404

What is our critical t?2408

That would be the t values here.2410

This is our raw values in terms of friends but we want to know it in terms of standard error.2413

Here are our t values so we cannot just put in 1.96 because that would be for z distributions.2418

We need a t distribution and in order to find a t distribution we need degrees of freedom.2426

The degrees of freedom is n-1 and that is 238 because 239 – 1.2434

You can either look this up in the back of your book or I am going to look this up on Excel.2443

Here I am going to use TINV, and I put in my two tailed probability, .05, and my degrees of freedom, which is 238.2451

And I get 1.97.2465

1.97 and -1.97, because t distributions, whatever problems they have, are perfectly symmetrical.2470

Those are our critical t values.2485

That is the boundary t values.2488

Now we have all of that, now we can start thinking about our sample. 2491

Let us think about our samples t and p value.2499

The sample t would be the distance that our sample is away from our mean ÷ standard error because we want how many standard errors away we are.2505

393 - 230 ÷ standard error 28.9.2523

I will put that into my Excel 393 – 230 ÷ 28.9 = 5.6.2532

Let us find the p value there.2546

We know that our t value is far out here: counting out 2, 4, then 5.6.2552

It is way out here.2560

Imagine this going all the way out here.2562

That is where x bar landed.2565

Already we know that it is pretty far out but let us find the precise p value there.2569

In order to find the p value we want to use TDIST, because that is going to give us the probability.2577

We put in the x and that is Excel's word for t.2583

When you see x here in t distribution just put in your t value and it only accepts positive t values.2588

I will just point to this one, our degrees of freedom which is 238 and how many tails?2600

We have a two tailed hypothesis.2609

We get 4.8 × 10^-8, so that would be our p value.2612

Our probability of getting a t greater than or equal to 5.64, or less than or equal to -5.64 (because it is two tailed), equals 4.8 × 10^-8.2624

Imagine a decimal point followed by seven 0s and then 48; that is a pretty tiny number.2658

This number is so small that they cannot even show you the decimal places.2669

It is super close to there but not 0.2677

This is our p value, is the p value less than .05?2680

Indeed it is.2686

What do we do?2688

We reject the null hypothesis.2691

This is what we do when sigma is not available.2695
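
Condensed, the sigma-not-available version is:

    standard error = 447 ÷ √239 = about 28.9

    t = (393 - 230) ÷ 28.9 = about 5.64

    =TDIST(5.64, 238, 2)    gives about 4.8 × 10^-8, far below .05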

Just to recap about alpha versus p value. 2702

P value is the probability of seeing that sample t or an even more extreme statistic given that the null hypothesis is true.2709

And we say extreme because it can be way bigger or way smaller, on either side.2720

Alpha gives you the level of significance. 2729

That level of extremeness that you have to reach in order to reject your null.2733

This is the set standard.2739

And this is the thing that you are going to compare to that set standard. 2742

I want to talk briefly about one versus two-tailed hypotheses.2751

When we talk about a one tailed hypothesis, you might have something like a null where mu is greater than or equal to 0.2757

Your alternative would then be mu is less than 0.2768

If that is the case and your set alpha level is .05 then here is what you would do in your SDOM.2777

You will only use one side of it because you are not interested if your x values are way up here. 2786

You only care if your x value is way smaller than your population. 2798

In this case, you might set up this as your rejection zone, and notice that it is only on one side, in one tail, because it is a one tailed hypothesis.2805

That probability will be .05 and this failed to reject side will be .95.2817

This is a one tailed hypothesis. 2830

Frequently we will be dealing with two tailed hypotheses.2833

In that case that might be that you do not really care. 2838

We do not really care if mu is less than, way smaller or way bigger than what we expected. 2845

We just care if it is extreme in some way, different in some way.2854

We do not really care which way; that would be mu = 0, and the alternative is that mu does not = 0.2858

If we had something like alpha = .05 in a two-tailed hypotheses then we would split up 2868

that rejection region into the two-tails so that will be .025 and .025. 2879

We reject, we reject, but inside of these boundaries we fail to reject, and this is .95, or 95%.2889

Whatever p value you find we want to compare it to the set alpha level.2906

Let us talk about some examples.2915

Your chemistry textbook says that if you dissolve table salt in water, the freezing point will be lower than it is for pure water (32°F).2920

To test this theory, your school does an experiment: 15 teams of students dissolve salt in water and put the solutions in the freezer with a digital thermometer.2931

Periodically checking to observe the temperature at which the solution freezes.2940

The data is shown in the download below. 2945

What can you conclude from this data?2948

If you look at your download and go to example 1, here are all my freezing temperatures that each of my teams got 2951

and I think there are only 14 teams here.2963

Let us just take n to be 14. 2967

What should we do first?2969

Just to give you an example of what it is like to do one tailed hypothesis testing, let us have a one tailed test here.2973

Because it does say that putting the salt in the water should make the freezing point lower, 2982

that automatically gives us a direction that we expect the freezing point to go in.2990

What would our null or default hypothesis be?2999

The default hypothesis would be that it is not different from pure water. 3004

They are the same.3010

It might be something like mu=32°f.3011

But do we care if our samples are all greater than 32°?3019

Maybe the freezing point is higher.3028

Do we really care about that?3032

No not really. 3035

The null hypothesis really covers everything higher than or equal to 32°, the side we do not care about.3037

What we eventually want to know is whether it is lower, weird in this low direction.3051

The alternative hypothesis is that it is weird, but in a particular direction: it is way lower than 32°.3058

Our Alpha is going to be .05, but let us make it clear that it is one tailed. 3071

Usually they do not say anything but most people assume two tails as the default.3079

Let us say one tailed.3086

Let us draw this SDOM for the decision stage, and here is the idea. 3088

The default is that all the samples come from a population where 32° is the mean of this SDOM, but 3096

we want to know: is it weird and a lot lower than that?3113

It is consistently lower than that.3126

That is our rejection region and that rejection region is going to be .05 because our fail to reject region is going to be .95.3128

Now that we have that, it would be useful to know what our t statistic is here.3144

This is raw in terms of degrees Fahrenheit.3150

We also want to know the t statistic.3156

What is the t statistic here at that boundary?3159

In order to know that we need to figure out a couple of things.3164

I will start with step 3; one of the things I want to know is that t statistic there.3168

In order to find that t statistic we need to know the degrees of freedom for the sample, and that is just a count of how many x's we have in our sample, minus 1.3179

That is 13 degrees of freedom. 3193

What is the t value there?3196

We have the probabilities and we want to know the critical t or boundary t.3199

In order to know that we need to use TINV, and it asks for a two tailed probability.3212

We have a one tailed hypothesis, so we have to turn that into a two tailed probability. 3221

As a two tailed probability it would be .10, and the degrees of freedom is 13.3228

It will only give you the positive side, but we can just flip the sign because it is perfectly symmetrical.3237

This critical t is -1.77.3248

Okay, now that we have that, we can start on step 4.3252

Step 4 deals with the sample t.3259

In order to find the sample t we probably need to find the mean of the sample, and that is AVERAGE in Excel; we probably also need to know the standard error.3264

In order to find standard error what we need is s ÷ √n.3289

That last part is not for Excel; it is just for me, since I need to know s.3299

What is my s?3304

That would just be STDEV of all of these.3308

Once I have that, then I can calculate the standard error, s ÷ √n, where n is 14.3314

We have a standard error, we have a mean, now we can find our sample t 3327

and that is going to be the mean of the sample - the hypothesized mu 32 ÷ the standard error.3334

I get -3.7645.3347

We know that this is much more extreme on the negative side than -1.77. 3354

We also need to find the p value. 3363

What is the p value there?3366

We need to use TDIST because we do not know the probability there.3370

We put in our t value, but remember Excel only accepts positive ones, so I am going to drop the negative sign.3376

The degrees of freedom, which is 13 up here and how many tails?3390

Just one.3398

That is going to be .001 p value.3399

Since I have run out of room I will just write the p value here, so p = .001.3407

Is that p value smaller than this alpha?3416

Yes, indeed. 3420

What can we say?3421

We can reject the null.3424

What can I conclude from this data?3426

I can say that this data shows that it is very unlikely to come from the same population as pure water.3430

The freezing point of water will have a variation.3445

It will have some probability of not being exactly 32 and this deviation on the negative side is much greater than would be expected by chance.3449
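
For reference, the Excel side of this example boils down to the following; the cell references depend on your own sheet, so the numbers are written in directly:

    degrees of freedom = 14 - 1 = 13

    =TINV(0.10, 13)    gives the critical t of about 1.77, which we negate for the low tail

    t = (sample mean - 32) ÷ standard error = about -3.76

    =TDIST(3.76, 13, 1)    gives about .001, the one tailed p value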

Let us see.3461

Example 2, the heights of women in the United States are approximately normally distributed with a mean of 64.8 in.3465

The heights of 11 players on a recent roster of the WNBA team are these in inches.3472

Is there sufficient evidence to say that this sample is so much taller than the population that 3479

this difference cannot reasonably be attributed to chance alone?3485

Let us do some hypothesis testing.3489

Here our null hypothesis is that our sample is just like regular women.3493

The mean is 64.8. 3500

I am going to use a two tailed alternative here, is that they are not like this population.3504

We can probably guess by using common sense that they are on average taller, but we will do a two-tailed test.3514

It is actually more conservative. 3522

It is safer to go with that two tailed test.3525

Here we will make alpha=.05 and it will be two-tailed.3527

Let us draw the SDOM here.3536

Here we might draw these boundaries and because it is two tailed this is .025 .025 and here it is .95.3542

All together it adds up to 1. 3565

Now that we have this can we figure out the t?3568

In order to figure out the t, we need to have the degrees of freedom. 3575

If you go to the download and go to example 2, I have listed this data here for you and we can actually find the degrees of freedom here.3579

Here I put step 3 so that we know where we are.3590

In step 3, we need degrees of freedom, and that would be the COUNT of all of these guys, minus 1.3596

We have 11 players, so 10 degrees of freedom.3606

Let us find the critical t. 3610

The critical t would come from TINV, because we know the two tailed probability, .05, and the degrees of freedom.3613

That gives us the positive critical t.3626

That is 2.23 and -2.23 those are our critical boundaries and anything outside of that, we reject the null. 3629

Let us go to step 4.3640

In step 4 we can start dealing with the sample. 3643

Let us figure out the sample t in order to do that we need the x bar - the mu ÷ standard error.3646

We need to know the samples average x bar. 3656

We also need to know mu and we also need to know standard error.3663

Standard errors is going to be s ÷ √n.3669

I need to write these things down because it helps me figure out what we need.3674

It is like a shopping list.3679

Here I need s.3680

Now that I have written all these things down I can just calculate them.3684

I need the average and mu which I already know from the problem 64.8.3688

I need to get my standard error, but before I do that I need to get s, the standard deviation, 3709

and once I have that standard deviation I can take it and ÷ by the square root of n, which is 11.3718

That is my standard error and once I have all of these ingredients, I can assemble my t which is x bar – mu ÷ standard error.3730

I get 7.97 and that is way higher than 2.2.3746

I am pretty sure I can go to step 5: reject the null.3755

If I go back to my problem, then let me see is there is sufficient evidence to say that this sample is so much taller than the population, 3763

that this difference cannot be reasonably attributed to chance alone. 3776

I should say yes, because when you are way out here, your probability of belonging to this chance distribution is so small 3780

that it is reasonable for us to say that the sample came from a different population.3793
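
The skeleton of this example in Excel, written the same way (AVERAGE, STDEV, and SQRT run over wherever your 11 heights live in your own sheet):

    degrees of freedom = 11 - 1 = 10

    =TINV(0.05, 10)    gives the critical t of about 2.23

    t = (AVERAGE of heights - 64.8) ÷ (STDEV of heights ÷ √11) = about 7.97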

Final example, select the best way to complete the sentence.3802

The probability that the null hypothesis is true, that is a false alarm rate.3810

It is true when the null hypothesis is true, but it is not just that.3824

It is not just the probability that the null hypothesis is true; given that you have a particular sample, this option seems to leave out some information.3835

It is not quite complete, but it is not entirely false. 3850

It is just that it does not have the whole truth.3856

It does not have the condition.3859

Given that you have this particular sample value, the probability that the null hypothesis is false, that is not true.3861

You can figure this out even if you just remember this table.3870

Remember, this column was where the null is true.3873

Alpha is the one we set, and the p values are the ones inside the table.3877

That is just not true.3885

The probability that an alternative hypothesis is true.3889

Actually, we have not talked about that at all.3895

We only talked about having a very low possibility that the null hypothesis is true, 3898

but we have not talked about increasing the probability that the alternative hypothesis is true.3905

Besides, under that reading, why would you reject the null when you have a really small p value?3910

A small probability that the alternative hypothesis is true? That does not make sense.3915

What about the probability of seeing a sample t as extreme as the one given that the null hypothesis is true. 3921

This is our entire story I can process it now.3934

It is not just that the null hypothesis is true; that you have a certain sample also has to be part of the definition of the p value. 3938

The idea is if we have this t value and it is pretty extreme and the null hypothesis is true.3956

That is given.3967

Given that the null hypothesis is true, what is the probability of seeing such an extreme t value?3968

It is very small.3979

We are trying to lower our false alarm rate.3981

That is the end of one sample hypothesis testing.3986

Hi and welcome to www.educator.com.0000

Today we are going to talk about confidence intervals for the difference of two independent means.0002

It is pretty important that these are for independent means, because later we are going to go to non-independent, or dependent, means.0007

We have been talking about how to find confidence intervals and hypothesis testing for one mean.0013

We are going to talk about what that means for how we go about doing that for two means.0023

We are going to talk about what two means means?0029

We are going to talk a little bit about mu notation and we are going to talk about sampling distribution of the difference between two means.0032

I am going to shorten this to SDOD; this is just my shorthand, it is not official or anything, 0041

because it is long to say sampling distribution of the difference between two means, but that is what I mean.0048

We will talk about the rules of the SDOD and those are going to be very similar to the CLT (the central limit theorem) with just a few differences.0055

Finally, we will set it all up so that we can find and interpret the confidence interval.0066

One mean versus two means.0075

So far we have only looked at how to compare one mean against some population, but that is not usually how scientific studies go.0081

Most scientific studies involve comparisons.0091

Comparisons either between different kinds of water samples, or language acquisition for one group of babies versus another.0093

Scores from the control group versus the experimental group.0102

In science we are often comparing two different sets of the two different samples.0106

Two means really means two samples.0112

Here in the one mean scenarios we have one sample and we compare that to an idea in hypothesis testing 0120

or we use that one sample in order to derive the potential population means.0132

But now we are going to be using two different means.0140

What do we do with those two means?0143

Do we just do the one sample thing two times or is there a different way?0145

Actually, there is a different and more efficient way to go about this.0152

Two means is a different story.0155

It is a related but different story.0159

In order to talk about two means and two samples, we have to talk about some new notation.0162

This is totally arbitrary that we use x and y.0170

You could use j and k or m and n, whatever you want.0176

X and y are the generic variables that we use.0182

Feel free to use your favorite letters. 0189

One sample will just be called x and all of its members in the sample will be x sub 1, x sub 2, x sub 3.0191

When we say x sub i, we are talking about all of these little guys.0203

The other sample we do not just call it x as well because we will get confused. 0208

We cannot call it x2 because x sub 2 has a meaning.0216

What we call it is y.0221

Y sub i now means all of these guys.0224

We could keep them separate.0229

In fact this x and y is going to follow us from here on out.0232

For instance when we talk about the mean of x we call it the x bar.0236

What would be the mean of y?0241

Maybe y bar right. 0243

That makes sense.0246

And if you call this b, this will be b bar.0247

It just follows you. 0253

When we are talking about the difference between two means we are always talking about this difference. 0256

That is going to be x bar - y bar. 0264

Now you could also do y bar - x bar, it does not matter.0267

But that is definitely what we mean by the difference between two means.0271

We could talk about the standard error of a whole bunch of x bars: the standard error of x, the standard error of y.0274

You could also talk about the variance of x and the variance of y.0285

You can have all kinds of things like this; they need something to denote that they are a little different.0292

Take that standard error of x; another way you could write it makes clear that we are not just talking about standard error in general.0298

When we say standard error, you need to keep in mind if we double-click on it that means the standard deviation of a whole bunch of means.0312

Standard deviation of a whole bunch of x bars.0322

Sometimes we do not have sigma so we cannot get this value.0328

We might have to estimate sigma from s and that would be s sub x bar.0334

If we wanted to know how to get this that would just be s sub x.0345

Notice that is different from this, but this is the standard error and this is the actual standard deviation of your sample ÷ √n.0353

Not just any n, but the n of your sample x.0367

In this way we could perfectly denote that we are talking about the standard error of the x, the standard deviation of the x, and the n(x).0372

You could do the same thing with y.0387

The standard error of y, if you had sigma, you can just call it sigma sub y bar because it is the standard deviation of a whole bunch of y bars.0390

Or if you do not have sigma you could estimate sigma and use s sub y bar.0402

Instead of just getting the standard deviation of x we would get the standard deviation of y and divide that by √n sub y.0411

It makes everything a little more complicated because now I have to write sub x and sub y after everything.0423

But it is not hard because the formula if you look remains exactly the same.0430

The only thing that is different now is that we just add a little pointer to say we are talking 0438

about the standard deviation of our x sample or standard deviation of our y sample.0446

Even if this looks a little more complicated, deep down at the heart of the structure it is still: standard error equals the standard deviation of the sample ÷ √n.0452
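
As a minimal Excel sketch of this, assuming the x sample lives in cells A2:A31 and the y sample in B2:B31 (hypothetical ranges, just for illustration), the two standard errors would be:

=STDEV(A2:A31)/SQRT(COUNT(A2:A31))   (s sub x bar, the standard error of x)
=STDEV(B2:B31)/SQRT(COUNT(B2:B31))   (s sub y bar, the standard error of y)

STDEV gives the sample standard deviation and COUNT gives n, so each formula is just the standard deviation ÷ √n for its own sample.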

Let us talk about what this means, the sampling distribution of the difference between two means. 0466

Let us first start with the population level.0477

When we talk about the population right now, we do not know anything about the population.0480

We do not know if it is uniform; we do not know the mean or the standard deviation.0491

Let us call this one x and this one y.0500

From this x population and this y population we are going to draw out samples and 0507

create the sampling distribution and that is the SDOM (the sampling distribution of the mean).0514

Here is a whole bunch of x bars and here is a whole bunch of y bars.0522

Thanks to the central limit theorem if we have big enough n and all that stuff then we know that we could assume normality.0530

Here we know a little bit more than we know about the population.0540

We know that in the SDOM there is the standard error; I will write s from here on because 0545

we are basically going to assume real-life examples where we do not have the population standard deviation.0557

The only time we get that is in problems given to you in a statistics textbook.0565

We will call it s sub x bar, and that will be the standard deviation of x ÷ √n sub x.0570

We know those things and we also know the standard error of y and that is going to be the standard deviation of y ÷ √n sub y.0585

Notice that down here you do not write s sub y bar again, because it would not make sense that 0601

the standard error would equal the standard error divided by something else.0607

That would not quite make sense.0612

You want to make sure that you keep this s special and different, because standard error 0614

is talking about an entirely different idea than the standard deviation.0621

Now that we have two SDOMs, if we just decided to stop here then we would not need to know anything new about creating a confidence interval for two means.0625

You would just create two separate confidence intervals: take that x bar, 0638

take that y bar, and construct a 95% confidence interval for both of these guys.0644

You are done.0649

Actually, what we want is not to take two means and get two separate sampling distributions.0650

We would like one sampling distribution of the difference between two means.0661

That is what I am going to call SDOD.0668

Here is what you have to imagine, in order to get the SDOM what we had to do is go to the population and draw out samples of size n and plot the means.0671

Do that millions and millions of times.0682

That is what we had to do here.0685

We also had to do that here: we went to the entire population of y, pulled out samples, and plotted the means until we got this distribution of means.0687

Imagine pulling out a mean from here and a mean from here randomly, then finding the difference of those means and plotting that difference down here.0699

Do that over and over again.0715

You would start to get a distribution of the difference of these two means. 0718

You would get a distribution of a whole bunch of x bar - y bar.0727

That is what this distribution looks like and that distribution looks normal. 0734

This is actually one of the principles of probability distributions that we have covered before.0742

I think we have covered it in binomial distributions.0747

I know this is not a binomial distribution, but the same principles apply here: if you draw from two normally distributed populations 0749

and subtract those draws from each other, you will get a normal distribution down here.0764

We have this thing and what we now want to find is not just the mu sub x bar or mu sub y bar, that is not what we want to find.0769

What we want to find is something like the mu of x bar - y bar because this is our x bar - y bar and we want to find the mu of that.0783

Not only that but we also want to find the standard error of this thing.0796

I think we can figure out what that might be.0800

At least the notation for it, that would be the standard error.0807

Standard errors always have these x bar and y bar things.0812

This is how you notate the standard deviation of x bar - y bar, and that is called 0817

the standard error of the difference; the difference is a shortcut way of saying x bar - y bar.0829

We could just say of the difference.0837

You can think of this as the sampling distribution of a whole bunch of differences of means. 0839

In order to find this, again it draws back on probability principles but actually let us go to variance first.0845

If we talk about the variance of this distribution that is going to be the variance of x bar + the variance of y bar.0856

If you go back to your probability principles you will see why.0869

From this we could actually figure out the standard error by square rooting both sides.0874

We are just building on all the things we have learned so far. 0881

We know population. 0888

We know how to do the SDOM.0889

We are going to use two SDOM in order to create a sampling distribution of differences.0891

Let us talk about the rules of the SDOD and these are going to be very, very similar to the CLT.0898

The first thing is this, if SDOM for x and SDOM for y are both normal then the SDOD is going to be normal too.0909

Think about when these are normal?0919

These are normal if your population is normal.0922

That is one case where it is normal.0924

This is also normal when n is large.0927

In certain cases, you can assume that the SDOM is normal, and if both of these have met those conditions, 0929

then you can assume that the SDOD is normal too.0939

We have conditions where we can assume it is normal and they are not crazy. 0942

They are things we have learned.0949

What about the mean?0951

It is always shape, center, spread.0953

What about the mean for the SDOD?0956

That is going to be characterized by mu sub x bar - y bar.0959

That is the idea.0972

Let us consider the null hypothesis and in the null hypothesis usually the idea is they are not different like nothing stands out.0975

Y does not stand out from x and x does not stand out from y.0987

That means we are saying very similar.0991

If that is the case, what we are saying is that when we take x bar – y bar over and over again, on average, the difference should be 0.0994

Sometimes the difference will be positive. 1009

Sometimes the difference will be negative.1012

But if x and y are roughly the same then we should actually get a difference of 0 on average.1014

For the null hypothesis that is 0.1022

So what would be the alternative hypothesis?1027

Something like the mean of the SDOD is not 0. 1031

This is the case where x and y are assumed to be the same.1037

That is always the case with the null hypothesis.1051

They are assumed to be the same.1055

They are not significantly different from each other.1056

That is the mean of the SDOD.1058

What about standard error?1062

In order to calculate standard error, you have to know whether these are independent samples or not.1064

Remember, going back to sampling: independent samples are where you know that these two 1073

come from different populations, and picking one does not change the probabilities of picking the other.1079

As long as these are independent samples, then you can use these ideas of the standard error. 1089

As we said before, it is easier when I think about the variance of the SDOD first because that rule is quite easy.1096

The variance of the SDOD is going to be just the variance of one SDOM + the variance of the SDOM for the other guy.1105

And notice that these are the x bars and the y bars.1121

These are for the SDOM they are not for the populations nor the samples.1131

From here what you can do is just derive the standard error formula.1137

We can just square root both sides.1149

If you wanted to just get standard error, then it would just be the square root of adding each of these variances together.1153

Let us say you double-click on this guy, what is inside of him?1168

He is like a stand-in for just the more detailed idea of s sub x² / n sub x.1175

Remember, when we talk about standard error we are talking about standard error = s / √n.1193

The variance of the SDOM = s² / n.1205

If you imagine squaring this you would get s² / n, which is the variance we need.1210

We need to add the variances together before you square root them.1220

Here we have the variance of y / n sub y.1224

You could write it either like this or like this.1235

They mean the same thing. 1240

They are perfectly equivalent.1242

You do have to remember that when you have this all under the square root sign, 1244

the square root sign acts like parentheses, so you have to do all of this before you square root.1253
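
If you wanted a sketch of this in Excel, again assuming the x sample is in A2:A31 and the y sample is in B2:B31 (hypothetical ranges), the standard error of the difference would be:

=SQRT(VAR(A2:A31)/COUNT(A2:A31) + VAR(B2:B31)/COUNT(B2:B31))

VAR returns the sample variance (it divides by n - 1), so this is exactly the variance of x ÷ n sub x plus the variance of y ÷ n sub y, all added up before the square root is taken.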

That is standard error.1261

I know it looks a little complicated, but these are just all the principles we learned before; 1265

now we have to remember whether it comes from the x or the y distribution.1273

That is one of the few things you have to ask yourself whenever we deal with two samples.1279

Now that we know the revised CLT for this sampling distribution of the differences, 1287

now we need to ask when can we construct a confidence interval for the difference between two means?1298

Actually these conditions are very similar to the conditions that must be met when we construct an SDOM.1306

There are a couple of differences because we are dealing with two samples.1314

The three conditions have to be met.1318

All three of these have to be checked.1321

One is independence, the notion of independence. 1323

The first is this: the two samples were randomly and independently selected from two different populations.1329

That is the first thing you have to meet before you can construct this confidence interval.1340

The second thing is this, this is the assumption for normality.1348

How do we know that the SDOD is normal?1355

It needs to be reasonable to assume that both populations that the samples come from are normal, or your sample sizes are sufficiently large.1358

These are the same ones that apply to the CLT.1372

This is the case where we can assume normality for the SDOM but also the SDOD.1376

In number 3, in the case of sample surveys the population size should be at least 10 times larger than the sample size for each sample.1384

The only reason for this is we talked before about replacement, a sampling with replacement versus sampling not with replacement.1397

Well, whenever you are doing a sample survey you are technically not sampling with replacement, 1409

but if your population is large enough then this condition actually makes it so that you can assume it works pretty much like sampling with replacement.1413

If you have many people then it does not matter.1427

That is the replacement rule.1430

Finally, we could get to actually finding the confidence interval.1433

Here is the deal, with confidence interval let us just review how we used to do it for one mean.1444

One mean confidence interval.1450

Back in the day when we did one mean, life was nice: what we would do is take the SDOM, 1455

assume that x bar, the sample mean, is at the center of it, and then construct something like a 95% confidence interval.1466

These are .025 because if this is 95% and symmetrical, there is 5% left over, but it needs to be divided between both sides.1484

What we did was we found these boundary values by using this idea, this middle + or – how many standard errors you are away.1496

We used either t or z.1525

I am just going to use t from now on, because usually we are not given the standard deviation of the population; so it is t × the standard error.1529

That was the basic idea from before and that would give us this value, as well as this value.1530

We could say we have 95% confidence that the population mean falls in between these boundaries.1537

That is for one mean.1545

What about two means?1548

In this case, we are not going to be calculating using the SDOM anymore.1549

We are going to use the SDOD.1560

If for one mean this was x bar, the sample mean, then you can probably assume that 1562

here it might be something as simple as the difference between the two sample means.1575

That is what we assume to be the center of the SDOD.1580

Just like before, whatever level of confidence you need.1583

If it is 99% you have 1% left over on the side.1593

You have to divide that 1% in half, so .5% for this side and .5% for that side.1598

In this case, let us just keep the 95%.1603

What we need to do is find these borders.1611

What we can do is just use the exact same idea again.1618

We could use that exact same idea because we can find the standard error of this distribution.1624

We know what that is.1629

Let me write this out.1631

We will write s sub x bar.1640

We can actually just translate these ideas into something like this. 1645

That would be taking this, adding or subtracting how many jumps away you are, like the distance you are away.1652

That would be something like x bar - y bar: instead of just having x bar in the middle, we have this thing in the middle.1661

Plus or minus the t; the t distribution remains the same, but we have to talk about how to find degrees of freedom for this guy.1670

The new SE, but now this is the SE of the difference.1680

How do we write that?1691

X bar - y bar + or - the t × s sub x bar - y bar.1694

If we wanted to we could expand all that out into the square root of the variance of the SDOM for x + the variance of the SDOM for y.1707

We could unpack all of this if we need to but this is the basic idea of the confidence interval of two means.1719
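
To sketch that in Excel, suppose the difference of means x bar - y bar sits in cell E1, the standard error of the difference in E2, and the critical t in E3 (hypothetical cells):

=E1 - E3*E2   (lower boundary)
=E1 + E3*E2   (upper boundary)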

In order to do this I want you to notice something.1727

Here we need to find t, and because we need to find t we need to find degrees of freedom, 1732

but not just any old degrees of freedom, because right now we have two degrees of freedom: 1740

Degrees of freedom for x and degrees of freedom for y.1744

We need a degrees of freedom for the difference.1747

That is what we need.1751

Let us figure out how to do that.1753

We need to find degrees of freedom.1756

We know how to find degrees of freedom for x, that is straightforward. 1760

That is n sub x -1 and degrees of freedom for y is just going to be n sub y -1.1764

Life is good.1771

Life is easy.1772

How do we find the degrees of freedom for the difference between x and y?1773

That is actually going to just be the degrees of freedom for x + degrees of freedom for y.1778

We just add them together.1790

If we want to unpack this, if you think about double-clicking on this, you get that.1792

(N sub x - 1) + (n sub y - 1).1797

I am just putting in the parentheses so you could see the natural groupings, but obviously you could 1804

do them in any order, straight across, because it is all adding and subtracting.1810

They all have the same order of operation.1816

That is degrees of freedom and once you have that then you can easily find the t.1820

Look it up in the back of your book or you can do it in Excel.1830
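
In Excel, the function that looks this up is TINV, which takes the two-tailed probability and the degrees of freedom. For a 95% confidence interval, with the x and y samples again in the hypothetical ranges A2:A31 and B2:B31:

=TINV(0.05, (COUNT(A2:A31)-1) + (COUNT(B2:B31)-1))

The second argument is just (n sub x - 1) + (n sub y - 1), the degrees of freedom for the difference.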

Let us interpret confidence interval. 1833

We have the confidence interval let us think about how to say what we have found.1837

I am just going to briefly draw that picture again because this picture anchors my thinking.1844

Here is our difference of means.1852

When you look at this t, think of this as the difference of two means.1858

I guess I could write DOTM but that would just be DOM.1863

Here what we found, if we find something like a 95% confidence interval that means we have found these boundaries.1869

We say something like this. 1887

The actual difference of the two means of the real populations, of populations x and y, 1891

the real populations that they come from, should be within this interval 95% of the time; or something like: 1919

we have 95% confidence that the actual difference between the means of the population of x and the population of y should be within this interval.1939

That comes from the notion that this is created from the SDOMs.1950

Remember, for the SDOM the CLT says that its mean is the mean of the population.1955

We are getting the population means dropped down to the SDOMs, and from the SDOMs we get this.1962

Because of that we could actually make a conclusion that goes back to the population.1970

Let us think about what it means if 0 is not in between here.1980

Remember the null hypothesis when we think about two means is going to be something like this.1987

That the mu sub x bar – y bar is going to be equal to 0. 1993

This is going to mean that on average when you subtract these two things the average is going to be 0.1998

There is going to be no difference on average.2004

The alternative hypothesis should then be the mean of these differences should not be 0.2006

They are different.2015

If 0 is not within this confidence interval then we have very little reason to suspect that this would be true.2016

There is very little reason to think that this null hypothesis is true.2026

We could also say that if we do not find 0 in our confidence interval, then in hypothesis testing we might also be able to reject the null hypothesis.2030

But we will get to that later.2040

I just wanted to show you this because the confidence interval here is very tightly linked to the hypothesis testing part.2042

They are like two sides of the same coin.2050

That is all fairly straightforward, but I feel like I need to cover one other thing, because sometimes this is emphasized in some books.2052

Some teachers emphasize this more than other teachers do, so I am going to talk to you about s pool, because this will come up.2065

One of the things I hope you noticed was that in order to find our estimate, 2076

in order to find the SDOD standard error, what we did was we took the variance of one SDOM 2085

and added that to the variance of the other SDOM and square rooted the whole thing.2106

Let me just write that here. 2110

The s sub x bar - y bar is the square root of one SDOM's variance + the variance of the other SDOM.2111

Here what we did was treat them separately and then combine them together.2129

That is what we did.2137

Although this is an okay way of doing it, in doing this we are assuming that they might have different standard deviations.2138

The two different populations might have two different standard deviations.2154

Normally, that is a reasonable assumption to make.2159

Very few populations have the exact same standard deviation.2162

The vast majority of the time we just assume that if you come from two different populations, you probably have two different standard deviations.2166

This is pretty reasonable to do like 98% of the time.2177

The vast majority of time.2182

But it actually is not as good an estimate of this value as if you had used a pooled version of the standard deviation.2184

Here is what I mean.2198

Right now we are saying: x is what we use to create the standard deviation of x, 2198

and y is what we use to create the standard deviation of y.2206

Let us make that explicit.2210

I am going to write this out so that you could actually see the variance of x and the variance of y.2213

We use x to create this guy and we use y to create that guy and they remain separate. 2228

This is going to take a little reasoning.2235

Think back: if you have more data, then your estimate of the population standard deviation is better; more data, more accurate.2239

Would it not be nice if we took all the guys from the x sample and all the guys from the y sample and put them together?2253

Together let us estimate the standard deviation.2262

Would not that be nice?2267

Then we will have more data and more data should give us a more accurate estimate of the population.2268

You can do that but only in the case that you have reason to think that the population of x has a similar standard deviation to the population of y.2278

If you have a reason to think they are both normally distributed.2293

Let us say something like this.2299

If you have reason to believe that the populations x and y have similar standard deviations, 2303

then you can pool the samples together to estimate the standard deviation.2324

You can pool them together, and that estimate is going to be called s pool.2347

There are very few populations that you can do this for.2351

One is something like height of males and females; height tends to be normally distributed, and we know that.2357

Height of Asians and Latinos, or something like that; but there are not a lot of examples that come to mind where you could do this.2365

That is why some teachers do not emphasize it, but I know that some others do.2374

That is why I want to definitely go over it. 2378

How do you get s pool and where does it come in?2380

Here is the thing: in order to find s pool, what we would do is substitute s pool in for s sub x and s sub y.2384

Instead of two separate estimates of standard deviations, we use s pool.2396

We will be using s pool².2408

How do we find s pool²?2411

In order to find s pool², what you would do is add up all of the sums of squares.2415

The sum of squares of x and the sum of squares of y: add them together and then divide by the sum of all the degrees of freedom.2432

If I double-click on this, this would mean (the sum of squares of x + the sum of squares of y) ÷ (degrees of freedom of x + degrees of freedom of y).2442

This is all you need to do in order to find s pool², and then what you would do is substitute it in for s sub x² and s sub y².2457

That is the deal.2469
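
One way to sketch this in Excel uses DEVSQ, which returns exactly the sum of squared deviations for a range. With the x sample in A2:A31 and the y sample in B2:B31 (hypothetical ranges), s pool² would be:

=(DEVSQ(A2:A31) + DEVSQ(B2:B31)) / ((COUNT(A2:A31)-1) + (COUNT(B2:B31)-1))

That is the sum of squares of x plus the sum of squares of y, divided by the two degrees of freedom added together.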

In the examples that are going to follow, I am not going to use s pool, because usually there is very little reason to assume that we can use s pool.2471

But a lot of times you might hear this phrase: the assumption of homogeneity of variance.2483

If you could assume that these guys have a similar variance, if you can assume 2490

they have similar, homogeneous variance, then you can use s pool.2502

For the most part, for the vast majority of the time, you cannot assume homogeneous variance.2508

Because of that we will often use this one. 2514

However, I should say that some teachers do want you to be able to calculate both.2517

That is the only thing.2525

Finally I should just say one thing. 2528

Usually this works just as well as s pool.2531

It is just that sometimes we get more of a benefit from using the pooled one.2536

If worse comes to worst, and after this statistics class you only remember this one, 2543

you are pretty good to go.2548

Let us go on to some examples.2551

A random sample of American college students was collected to examine quantitative literacy.2556

How good they are in reasoning about quantitative ideas.2562

The survey sampled 1,000 students from four-year institutions, this was the mean and standard deviation.2565

800 from two-year institutions, here is the mean and standard deviations.2571

Are the conditions for confidence intervals met?2576

Also construct a 95% confidence interval and interpret it.2581

Let us think about the confidence interval requirements.2586

First is independent random samples.2593

It does say random sample, and these are independent populations.2596

One is four-year institutions, one is two-year institutions.2603

There are very few people going to both of them at the same time.2606

First one, check.2609

Second one, can we assume normality either because of the large n or because we know that both these populations are originally normally distributed?2612

Well, they have pretty large n, so I am going to say number 2 check.2622

Number 3, is this sample roughly sampling with replacement?2627

And although 1,000 students seems like a lot, there are a lot of college students.2635

I am pretty sure that this meets that qualification as well.2640

Go ahead and construct the 95% confidence interval.2643

Well, it helps to start off with a drawing of the SDOD just to anchor my thinking.2648

And for this mu sub x bar - y bar, we could assume that it is estimated by x bar - y bar.2656

That is what we do with confidence intervals. 2667

We use what we have from the samples to figure out what the population might be.2670

We want to construct a 95% confidence interval.2678

That is going to be .025 and then maybe it will help us to figure out the degrees of freedom so that we will know the t value to use.2685

Let us figure out degrees of freedom.2703

It is going to be the degrees of freedom for x and I will call x the four-year university guys and the degrees of freedom for y the two-year university guys.2706

That is going to be 999 + 799 and so it is going to be 1800 - 2 = 1798.2718

We have quite large degrees of freedom and let us find the t for this place.2747

We need to find this and this.2755

Let us find the t first. 2760

This is the raw score, this is the t, and let me delete some of the stuff.2765

I will just put x bar - y bar in there and we can find that later.2772

The t is going to be the boundaries for this guy and the boundaries for this guy.2782

What is our t value?2788

You can look it up in the back of your book or you could do it in Excel.2790

Here we want to put the t inverse in, because we have the probability; remember, this one 2799

wants the two-tailed probability, .05, and the degrees of freedom, which is 1798, and that gives 1.961.2806

We will put 1.961 just to distinguish it.2819

Let us write down our confidence interval formula and see what we can do.2831

Confidence interval is going to be x bar - y bar.2838

The middle of this guy + or - t × standard error of this guy.2844

That is going to be s sub x bar - y bar.2854

It would probably be helpful to find this thing.2858

X bar - y bar.2862

X bar - y bar that is going to be 330 – 310.2868

Let us also try to figure out the standard error of SDOD which is s sub x bar - y bar.2883

What I'm trying to do is find this guy.2911

In order to find that guy let us think about the formula. 2918

I'm just writing this for myself. 2921

The square root of the variance of x bar + the variance of y bar .2925

We do not have the variance of x bar and y bar.2937

Let us think about how to find the variance of x bar.2943

The variance of x bar is going to be s sub x² ÷ n sub x.2947

The variance of y bar is going to be s sub y² ÷ n sub y.2959

I wanted to write all these things out just because I need to get to a place where finally I can put in s.2977

Finally, I can do that.2986

This is s sub x and this is s sub y.2988

I can put in 111² ÷ n sub x, which is 1000, and I can put in the standard deviation of y squared ÷ 800.2990

I have these two things and what I need to do is go back up here and add these and square root them.3017

Square root this + this.3028

I know that this equal that.3034

We have our standard error, which is 4.49 and this is 20 + or - 1.961. 3038

Now I could do this.3064

I am going to do that in my calculator as well.3066

The confidence interval for the high boundary is going to be 20 + 1.961 × 4.49 3069

and the confidence interval for the low boundary is going to be that same thing.3085

I am just going to change that into subtraction.3097

11.20.3101

Let me move this over.3105

It is going to be 28.8.3110

Let me get the low end first.3117

The confidence interval is from about 11.2 through 28.8.3121
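
If you want to double-check those numbers in Excel, using the values from this example:

=20 - TINV(0.05, 1798)*4.49   (gives about 11.2)
=20 + TINV(0.05, 1798)*4.49   (gives about 28.8)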

We have to interpret it.3127

This is the hardest part for a lot of people.3130

We have to say something like this.3133

The true difference between the population means 95% of the time is going to fall in between these two numbers.3136

Or: we have 95% confidence that the true difference between the two population means falls in between these two numbers.3146

Let us go to example 2.3154

This will be our last example.3157

If the sample sizes of both samples are the same, what would be the simplified formula for the standard error of the difference?3159

If, in addition, the standard deviations of both samples are the same, what would be the simplified formula for the standard error of the difference?3167

This is just asking: depending on how similar the two samples are, can we simplify the formula for standard error?3175

We can.3183

Let us write the actual formula out: s sub x bar – y bar = the square root of the variance of x bar + the variance of y bar.3184

If we double-click on these guys that would give the variance of x / n sub x + the variance of y / n sub y.3207

It is asking, what if the sample size for both samples are the same?3223

What would be the simplified formula?3230

That is saying: if n sub x = n sub y, then what would this be?3231

We can get the square root of (the variance of x + the variance of y) / n.3240

Because the n for each of them should be the same.3251

This would make it a lot simpler.3254

If in addition the standard deviations of both samples are the same, then this would mean that 3260

because the standard deviations are the same, the variances are the same.3272

That would be that case.3276

If in addition that was the case, then you would just get the square root of 2 × s², whatever that equal variance is, ÷ n.3279

That would make it a simple formula.3294

That would make life a lot easier but that is not always the case.3298

If it is, you know that it will be simple for you.3303

That is it for the confidence intervals for the difference between two means.3307

Thank you for using www.educator.com.3312

Hi and welcome to www.educator.com.0000

We are going to be talking about hypothesis testing for the difference between two independent means.0001

We are going to go over the goal of hypothesis testing in general.0005

We have only looked at it for one mean so far, but we are going to look at 0012

how it changes just very subtly when we talk about two means.0015

We are going to re-talk about the sampling distribution of the difference between two means.0019

If you have just watched the confidence intervals for two means lesson, then you do not need to watch this part again.0025

You do not need to watch that section.0032

We are going to talk about the same conditions for doing hypothesis testing as for the confidence interval.0034

You need to meet three conditions before you can do either of these two.0043

Then we talk about the modified steps of hypothesis testing for two means and the formulas that go with those steps.0047

Let us talk about the goal of hypothesis testing.0055

In one sample what we wanted to do was reject the null if 0060

we got a sample that was significantly different from the hypothesized mu.0065

For instance, significantly lower or significantly higher.0073

Significant does not mean important like it does in our modern use of the word.0076

It actually means does it standout?0083

Is it weird enough?0086

Does it stand out from the hypothesized mu?0088

In those cases we reject the null.0091

Our goal is to reject the null. 0095

We can only say whether something is sufficiently weird; we cannot say whether it is sufficiently similar.0097

An experiment is actually considered a success if it rejects the null.0106

If it does not reject the null, it is considered a null experiment, or what we think of as uninformative, which is not actually true.0110

That is how it traditionally is.0118

This is the case where we only have one sample and we have a hypothesized population. 0123

Here we have two samples and in order to reject the null we need to get samples that are significantly different from each other.0130

They stand out from each other so x is different from y, y is different from x.0144

That is what we are really looking for.0151

Once again, just like the one sample, we cannot say whether they are sufficiently similar, 0154

but we can say whether they are sufficiently different.0159

It is okay if x is significantly lower than y or significantly higher.0163

We do not really care.0170

We just care about significantly different.0171

If you do not care about which direction, these are called two-tailed hypotheses.0173

Let us think: if x and y are different from each other, then x - y should not be 0.0179

But if x and y are exactly the same, x = y then x – y =0.0189

You can think about this as x – x, because y equals x.0196

If you want to think about it algebraically, if you add y to each side you would get exactly x = y.0201

If x and y were the same, we should expect their difference to be 0.0211

Let us just review very briefly the sampling distribution of the difference between two means.0218

This is the case where we do not know what the population is like, 0228

but because of the CLT we actually end up knowing quite a bit about the SDOM.0233

This is x the population of x and population of y.0242

This is the SDOM of x bar, so the whole bunch of x bars and this is the SDOM for y which is a whole bunch of y bars.0247

We know some things about these guys and we also know we can figure out the standard error from the sample.0258

What is nice about this is that we do not need to know anything about the population.0280

All we have to do is know the standard deviation of each sample, which we could easily calculate, 0284

in order to estimate the standard errors of these two SDOMs.0288

Once we have that now we can start talking about the SDOD (the sampling distribution of the difference between means).0294

What we want to do is instead of finding mu sub x or mu sub y, we want to know mu sub x bar – y bar.0306

Here you have to think of pulling out one sample from here and one sample from here getting the difference and plotting it.0322

If these guys are normal, we can assume this one to be normal.0332

Not only that, but we can figure out the standard error of this guy as well, just 0336

from knowing these, because the standard error is going to be the square root of s sub x², 0342

the variance of x, ÷ n sub x + the variance of y ÷ n sub y, all under the square root.0357

These are all things that we have.0366

We do not need anything special.0368

We do not need sigma or anything like that.0370

We just need samples in order to calculate this.0372

If these two distributions, the population distributions, 0374

if we have a reason to suspect that these have homogeneous variance, 0384

if their variances are the same, then instead of s sub x² and s sub y², 0389

we can actually use s pool²; we will not be doing that in this lesson, but you can.0395

Remember, the rules of the SDOD are very similar to the CLT: if the SDOM for x is normal 0405

and the SDOM for y is normal, then the SDOD is normal too.0415

There are two ways that this could be true.0419

The first way is if populations are normal.0421

If population of x and y are normal then we could assume SDOM for x and y are normal.0428

Your other possibility is if n is large enough.0435

We want to talk about the mean for the null hypothesis.0443

The null hypothesis is saying that for the population of x and the population of y, 0450

the difference between them is going to be 0, because they are similar.0457

The null hypothesis is saying both are similar, which means that the means of 0461

the sampling distributions of the means, the SDOM means, are going to be similar.0467

Which means that subtracting them will give us 0.0474

The null hypothesis says the mean of these differences of means is going to be 0.0478

That is the null hypothesis, and it is really saying that the SDOM for x and the SDOM for y are very similar.0486

Let us talk about standard error for independent samples.0497

Remember, we are still talking just about independent samples.0502

When variance is homogeneous, that is the only time we use the s pool idea.0506

That means that s sub x bar - y bar is going to be equal to, and pretend you are 0511

writing just the regular idea, where you are dividing by n sub x and n sub y, 0521

instead of using the variance from x and the variance from y, we are going to use that pooled variance idea.0529

That is going to be s pool.0536

Some people think: why do we not just put that on top and put n sub x + n sub y at the bottom?0547

That would be algebraically wrong because, remember, these are the denominators; we would have 0554

to have common denominators in order for us to put these together, and we do not have common denominators yet.0559

What about the case where variance is not homogeneous? This is the vast majority of the time, and when in doubt, 0565

when you do not know anything about the variance of the populations, go with this one.0576

It is just a safer option. 0582

This is going to mean that this standard error is represented by the variance of x ÷ n sub x + the variance of y ÷ n sub y.0584

Add these together and square root the whole thing.0602

Just to recap, the same conditions must be met in order to do hypothesis testing 0605

for two means as the conditions for doing a confidence interval for two means.0616

It is that the two samples were randomly and independently selected from two different populations, 0622

and it is reasonable to assume that both populations that the samples come from are 0632

normally distributed, or the sample sizes are sufficiently large.0636

This was to ensure the normality of the SDOM.0641

Also in the case of the sample surveys, the population size should be at least 10 times larger than the sample size for each sample.0643

That is just assumed so that we can assume replacement, because the probabilities change when you do not have replacement.0651

Let us go in the steps of the hypothesis testing.0663

These are the same steps as you took when you had one mean, except now we are subtly changing a few things.0669

I'm going to highlight those changes as we go through this.0677

First we need to state our hypotheses and remember now instead of having just the hypotheses that 0679

the mean of the population equals this, what we are saying is that the mean of x,0686

population of x and the mean of the population of y those are the same. 0696

Mu sub x bar - y bar will be 0.0701

You can also write it as mu sub x = mu sub y.0707

The alternative is that they are different from each other in some way.0712

Then we pick a significance level. 0718

How different do these two populations have to be for us to say they are different?0721

We set a decision stage, but instead of drawing the SDOM now we draw the SDOD.0726

Because now we are looking at the differences between these two means.0734

We identify critical limits and rejection regions. 0739

We also find the critical test statistic, the boundaries.0743

In order to do this we have to find the degrees of freedom for the difference.0747

We cannot just use the degrees of freedom for one or the degrees of freedom for the other; we actually add them together.0753

And then we use the samples and the SDOD to compute the mean difference.0759

We are not just computing a mean; we are computing the mean difference test statistic, as well as the p value.0764

And then we compare the sample to the hypothesized population.0773

We either reject the null or not.0779

We reject the null if our test statistic and p value lie in those zones of rejection.0781

It is like these are the weirdo zones.0792

That is how we know that our sample is really different from this population.0794

Let us talk about the different formulas that go along with these steps.0799

Remember the first step is going to be, what is the hypothesis, the null hypotheses, as well as the alternative.0806

This is not really a formula, but it is helpful to remember that this is what we really mean: mu sub x bar – y bar = 0 versus mu sub x bar – y bar does not equal 0.0817

This is often what is going to be the case and you can rewrite this as mu sub x bar – mu sub y bar sometimes, 0836

but there are some mathematical ideas that you have to learn before you can write that.0846

I will leave that aside for now. 0857

Second thing is significance level.0859

Here there are no formulas but you should know that when we say alpha= .05 we are talking about that false alarm rate.0862

This is the rate of rejecting the null when the null is actually true.0873

This is a very low rate of false alarms.0877

When we say alpha = .05 it is not that we calculated it but it is just that 0881

by convention science tends to say this is the reasonable level of significance.0887

Sometimes people are more conservative and use .01 or .001.0895

Number 3, we need to set that decision stage.0900

It is helpful to draw the SDOD and it is helpful to have our hypothesized population here. 0905

Mu sub x bar – y bar = 0.0924

We assume that this point is 0.0930

One thing you probably also want to know about the SDOD is the formula for standard error. 0932

The formula for the standard error of the SDOD, we have written this a lot of times: 0941

it is the square root of the variance of x ÷ n sub x + the variance of y ÷ n sub y.0951

Another thing you probably want to know is that we need to find these critical t values.0959

We need to find the t values here and in order to find that you will need to know 0965

the degrees of freedom for the difference and it is pretty easy. 0973

It is the degrees of freedom for x + the degrees of freedom for y.0979

To find this, it is n sub x -1.0983

To find that it is n sub y -1.0988

We could write this as n sub x -1 + n sub y -1.0990

You could write it like that and then I think that is all you need to know for the decision stage.1002

Step 4: you have to compute the sample mean difference, and you need to calculate its test statistic as well as its p value.1011

Remember we are going to be using t from here on out because obviously we are using s instead of sigma.1039

Let us talk about how to come to the sample t.1046

Let me write this as sample t.1050

The sample t is really the distance between where our sample difference is versus the hypothesized difference.1058

We do not want it just in terms of that raw distance; we want it in terms of the standard error.1069

It is going to be whatever our x bar - y bar is, the actual sample difference, - 0.1075

That 0 is our hypothesized population difference; then divide by the standard error, s sub x bar – y bar.1085

That will give you how many standard errors away our actual mean difference is from 0.1097

Once you have this t value and you have the degrees of freedom, 1104

then you can find the p value, and then you could reject or accept the null hypothesis.1113

Reject or do not reject, that is really the technical idea there.1121
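
As a sketch of steps 4 and 5 in Excel, suppose the mean difference x bar - y bar is in cell E1, the standard error of the difference in E2, and the degrees of freedom in E3 (hypothetical cells):

=(E1 - 0)/E2                      (the sample t)
=TDIST(ABS((E1 - 0)/E2), E3, 2)   (the two-tailed p value)

TDIST only accepts a positive t, which is why the ABS is there, and the last argument, 2, asks for the two-tailed probability.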

Let us go onto some examples.1126

The Cheesy Cheesy cookies company wanted to know whether they should have a coarse or fine texture in their cheesy cookies.1131

They assembled a series of taste-testing panels that tasted either the coarse 1140

or the fine textured cookies and gave them a palatability score.1143

The higher score the better.1153

Is there a statistical difference in the mean palatability score between the two texture levels?1154

If you download the examples below and you look under the example 1, you should see a data set that looks like this.1162

This is the palatability score and this is the texture.1174

I believe that 0 = coarse and 1= fine, just so that we can make some sort of recommendation at the end.1177

Here we go, we have these different sets of scores, so this is the score that 1200

one panel came up with and that panel tasted coarse textured cheesy cookies. 1209

This panel also tasted coarse and that is the score it gave it.1214

Let us go up to fine.1221

They tasted fine texture and they give it that score. 1223

They also tasted fine and they give it that score.1227

You could go and see what the different scores are and what texture they had.1231

First, let us think about: what are our x and y?1240

What are our two independent samples?1245

The two independent samples here seem to come from the two different textures.1247

For one group of scores, the panels all tasted coarse textured cheesy cookies.1251

The other group of scores tasted fine textured cheesy cookies.1260

It might be helpful to us to sort this data by texture.1264

I am going to select this data.1270

It would work better if I move score over.1281

What I am going to do is just hit sort.1291

Here are all our coarse cheesy cookie palatability scores, and here are my fine cheesy cookie palatability scores.1296

Let us think about how we want to approach this problem.1311

The first thing we want to do is create some sort of hypothesized population.1315

Our hypothesized population is really going to say that between the coarse and 1322

fine textured cheesy cookies there is really no difference.1327

They are the same.1330

The mu sub x bar - y bar should equal 0.1332

The alternative is that they are different from each other in some way. 1337

We do not know which one taste better.1346

Let us just be neutral and say we do not know whether the coarse cheesy cookies 1352

are better than the fine, or the fine cheesy cookies are better than the coarse.1358

We want to know whether these palatability scores are different or the same.1364

Let us set a significance level for how different they have to be.1370

Our significance level could be alpha= .05.1377

Finally let us set a decision stage.1386

Here I am going to draw the SDOD; can we assume normality?1390

Well, let us look here.1398

We have 8 scores and 8 scores, the n is low.1405

Technically, we might not be able to do hypothesis testing.1416

Let us say for some reason that your teacher wants you to do it anyway.1424

But one of the things that should come up when you see low n like this is that you should question 1430

whether hypothesis testing is the right way to go because it may not reflect the conditions 1436

that we need to have set before we can assume all the stuff.1446

Just for the problem solving and practice here, let us go with that.1449

But if you wanted to, you could tell your instructor that the conditions are not met for hypothesis testing.1454

Here we set our little rejection regions, and why do we not just go ahead and put in our mu here.1466

It is going to be 0, and it will be helpful to find the t values out here.1478

Let us go ahead and do that. 1483

What are our critical t values?1486

The critical t values are the boundaries.1491

In order to find the critical t, we are going to have to find the degrees of freedom, DF of differences. 1494

For n sub x, we will call x the coarse ones.1503

X will be coarse cheesy cookies and y will be fine.1512

You can use c and f if you want to.1521

This is going to be 8 and this is also 8.1524

The degrees of freedom for each of these is 7 so this is going to be 14.1528

That is a pretty low degrees of freedom.1534

That is, if we can assume normality here.1537

Let us find the critical t.1540

In order to find that we would use t inverse because we have the two tailed probability .05 and we have the degrees of freedom.1545

This gives us a positive version.1562

The negative version would just be the negative of that number because they are perfectly symmetrical. 1565

2.14; the critical t is + or - 2.14.1573
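
In Excel that critical t comes from:

=TINV(0.05, 14)

which returns about 2.145, so the boundaries are roughly -2.14 and +2.14.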

Now that we have that, then we could go ahead and look at the actual samples themselves. 1581

Step 4 is we need to find the sample mean difference.1589

We need to find x bar – y bar, but we also need to find this mean difference's t.1598

The t sub x bar - y bar.1606

We need to find that as well as the p value. 1610

Let us go ahead and do that.1613

We just finished step 3, and step 4 is really the mean difference: that is just the average of these guys - the average of those guys.1618

That is their average difference. 1656

This is saying that the coarse scores tend to be, on average, lower than 1662

the fine scores, because we do coarse score – fine score.1668

We get a negative number.1671

The coarse score average must have been smaller.1672

Actually before we go on, it might be helpful to find the standard error of this situation.1677

In order to find the standard error of the difference we need to find 1690

the square root of the variance of x ÷ n sub x + the variance of y ÷ n sub y.1699

This is going to be our standard error that we need.1717

In order to find that it would be helpful to find each of these pieces by themselves.1724

I guess we could find the whole thing, the variance of x ÷ n sub x and the variance of y ÷ n sub y.1731

I will put each of these on different lines, but we could also do all of it together.1750

We could just add them all up here.1754

Let us find that.1757

The variance, thankfully Excel has all these functions.1763

Let us check and make sure that this variance function divides by n - 1.1771

The variance of x ÷ 8 and the variance of all my fine cheesy cookie values ÷ 8.1778

We have these two variances and when we divide by n sub x we are getting the variance of the SDOM.1799

If we add those together then get the square root, then we get the standard error of the difference.1811

The square root of these two guys added together and that is 11.16.1820

Here I will just add this information so the standard error of the difference =11.16.1830
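
Putting the Excel steps together, if the coarse scores were in A2:A9 and the fine scores in B2:B9 (hypothetical ranges for this data set), the standard error of the difference would be:

=SQRT(VAR(A2:A9)/8 + VAR(B2:B9)/8)

which is the same square root of the two added SDOM variances described above, and here comes out to 11.16.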

In order to find this t, we need to have this difference between the means -0 / the standard error of the difference. 1851

We can easily do that now. 1866

Here in order to find the sample t we could put the mean difference -0.1871

If you do not want to be that technical, you do not need that - 0; then divide by the standard error of the difference.1891

Our sample t says the difference is not at 0; it is actually way down here.1901

Is it significantly different?1914

Well, one thing we could do is just eyeball it and compare this number to this number.1917

This boundary here is -2.14.1923

-4.73 is like out here so we definitely know it is way significant.1928

It is way standing out from the expected mean but we can also find the p value. 1935

Now remember, in Excel one of the things is that TDIST needs a positive t value.1944

If you have a negative t value you have to turn it into a positive one, but it is okay because it is perfectly symmetrical. 1951

The degrees of freedom that we are talking about are going to be this 1959

new combined degrees of freedom, because we are talking about the SDOD now.1963

This is the degrees of freedom for this SDOD and that is 14 and it is a two-tailed hypothesis.1969

Our p value is .0003.1976
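
In Excel that p value is:

=TDIST(ABS(-4.73), 14, 2)

which comes out to roughly .0003; the ABS is there because TDIST only takes positive t values.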

I will not write the last step up here, but we can just talk about it.1981

The last step would be we reject or do not reject the null. 1991

Well, we reject the null here because our p value is much lower than our significance level.1997

Our t value, our sample t is more extreme than our critical t.2003

Here what we would say is that there is a statistical difference between the two texture levels.2010

One that is very unlikely to be attributable to chance, because that is what this p value says.2018

If it was by chance it would have .03% probability.2026

It is pretty low.2033

Example 2, scientists have found certain tree resins that are deadly to termites.2035

To test the protective power of resin protecting the tree, a lab prepared 16 dishes with 25 termites in each.2042

Each dish was randomly assigned to be treated with 5 mg or 10 mg of resin.2050

At the end of 15 days, the number of surviving termites was counted.2055

Assume that termites survival tends to be normally distributed with both dosage levels.2060

Is there a statistical significant difference in the mean number of survival for those two doses?2066

Now here I think it is worth just discussing what will be our x and y.2072

Our x might be the 5 mg population and our y might be the 10 mg population.2077

For n sub x, some people might think there are 25 termites, but actually there are 25 termites in each of the 16 Petri dishes.2087

There are 8 Petri dishes that have been randomly treated with 5 mg and 8 that have been treated with 10 mg.2099

This is 8 and 8.2109

When I say 8, we mean the dishes of each treatment; the termites are not the subjects, the dishes are the cases that we are interested in.2113

The termite counts are the measurements.2124

You can get 25 termites surviving or you could get 0 surviving.2128

How many termites survived?2134

That is our dependent variable.2135

Okay, let us see. 2137

Well one thing we could do is start off with our hypotheses.2142

Our null hypothesis is that these two dosage levels are roughly the same.2146

We might say something like: the mu sub x bar - y bar is equal to 0; they are the same.2153

The alternative is that they are not the same. 2161

Maybe one is more powerful than the other.2166

We do not know which one.2169

We could easily set our significance level to be .05.2173

Let us talk about the actual set up, the decision stage.2179

In the decision stage, let us see what we have here.2184

We have set up this .05 level of rejection, and we could just go ahead; this is the x bar - y bar, but what would be the t?2195

The nice thing about this being 0 is that the t distribution as well as the x bar – y bar start off the same.2213

They are not going to have the same numbers out here. 2226

Okay, so that is why we do have to put them on different lines.2229

They are still talking about different things.2233

Let us talk about the t values.2235

Before we do, it might be helpful to figure out the new degrees of freedom.2240

The degrees of freedom of differences will be 7 + 7 =14.2247

Here we can jump right into hypothesis testing because we are given 2255

that termite survival tends to be normally distributed within these two dosage levels.2261

If you go to example 2, you will actually see the data here.2267

Here we see dosage and here is the 5 mg, as well as the 10 mg.2284

Here are the survival counts.2293

How many termites survived?2294

Notice that there is no survival count over 25.2296

25 is the maximum you can have, but even the highest gives me 16.2299

Also, the survival count cannot go below 0, because we cannot have negative termites surviving.2304

Here we have the survival count.2311

Let us see what we have here.2317

Can we figure out what the critical t is?2323

I think we can.2335

Let us see.2336

You can use the book but I am going to use Excel to find the critical t.2338

I am going to write for myself step 4.2344

I know the two-tailed probability that I need, .05, and I know my degrees of freedom is 14.2347

I see that the critical t is the same as before, because we used 2362

the same two-tailed probability and the same degrees of freedom of differences.2367

Here we know that it is -2.14, as well as positive 2.14.2372

What we can do is now from here go on to looking at our actual sample.2384

This is actually step 3, it is a part of our decision stage. 2394

Step 4, is now actually talking about the sample. 2406

It will help to find the sample mean difference, so that is going to be the average of the x values minus the average of the y values.2410

We want to know: is this difference going to be significantly different from 0?2431

We cannot just look at the raw scores because we need to figure out how many standard errors away we are.2436

How shall we find the standard error for the difference?2443

That is equal to the square root of the variance of x/ n sub x + variance of y/ n sub y.2448

Let us find the variance of x / n sub x and the variance of y / n sub y.2458

Let us find the variance of x/8 and the variance of y /8.2468

We see that the variance for y is a lot different than the variance for x.2486

That is helpful for us to just look at briefly right now, because this gives us an idea: 2493

the variances of the samples are so different that we probably do not have a good reason to pool these two together.2500

We do not have a good reason to assume that the populations are similar.2507

When in doubt, go with non-homogeneous variances. 2511

Just assume that they are different. 2518

Once we have that, we can add these two variance terms together, take the square root, and we get 2.5.2520

Once we have all of that, we can find the sample mean difference t.2535

That would be the sample mean difference - 0, divided by the standard error of the SDOD.2548

What would that be?2572

That would be this guy (I am going to leave off the subtract-0 part) divided by the standard error, and we get 2.15.2575

We are close but it is still more extreme than 2.14.2586

It could be extreme in either direction: extreme in the negative end or extreme in the positive end.2595

This is extreme in the positive end.2603

It is just right outside our borders.2607

Let us find the p value. 2609

In order to find that p value we use the t distribution, because we have the t value, 2611

we have the degrees of freedom, and we want it to be a two-tailed p value.2620

It is going to add up this little chunk and this little chunk together, and that comes to about .049.2625

We will just skip writing step 5 up here; our p value = .049, which is just a hair underneath our alpha of .05.2635

We would probably reject the null.2653
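
To make the arithmetic of this example concrete, here is a minimal Python sketch (scipy assumed; the termite counts themselves live in the download, so the summary numbers are passed in) of the non-homogeneous-variance standard error and sample t described above.

    import math
    from scipy import stats

    def independent_t(mean_x, mean_y, var_x, var_y, n_x, n_y, df):
        # Unpooled standard error: sqrt(var_x/n_x + var_y/n_y)
        se = math.sqrt(var_x / n_x + var_y / n_y)
        t = (mean_x - mean_y - 0) / se      # the null difference is 0
        p = 2 * stats.t.sf(abs(t), df)      # two-tailed p value
        return t, p

For the termite data (8 dishes per dose, degrees of freedom 7 + 7 = 14), the lesson reports a standard error of about 2.5, a sample t of about 2.15, and a p value just under .05.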

Example 3: two months before a smoking ban in bars, a random sample of bar employees was assessed on respiratory health.2657

Two months after the ban, another random sample of employees was assessed.2672

Researchers saw a statistically significant increase in the mean scores of health.2678

p = .049, two-tailed; we had an example of that.2684

Which of the following is the best interpretation for this result?2689

The probability is only .049 that the mean score for all of our employees increased from before to after the ban.2693

Is that what this means?2706

For me it helps to draw that SDOD; the null hypothesis would be 2708

the same, like before and after are the same.2715

What they actually found is that there is some extreme value.2720

There is an increase in mean scores.2727

There is a positive difference from after - before.2735

There is the increase.2742

It is somewhere up here, that increase tells us that.2745

p = .049.2749

We can actually draw this carefully, it is just right above that cut off.2753

There is only a .049 probability that the mean score for all bar employees increased.2760

That is not what this means.2775

It is not saying that there is only a small chance that it increased.2778

It is actually saying there is a pretty good chance that it is not the same.2783

There is a pretty small chance that it is the same.2787

This one we can just rule out.2792

Another possibility is that the mean score for all bar employees increased by more than 4.9%.2796

Does this p value actually talk about the raw score on respiratory health?2805

It does not talk about that score at all, it is the probability of finding such a difference.2814

It does not have anything to do with actual scores. 2821

What about this one?2825

An observed difference in the sample means as large or larger than the one observed is unlikely to occur 2828

if the mean score for all bar employees before and after the ban were the same.2835

This actually has something we can use.2839

This is about considering that the mean scores for before and after are the same. 2842

That is important because that is what the SDOD actually represents.2851

That is what this p value is actually talking about: the idea that when we get the sample, 2854

we consider that they were just the same.2865

This is saying an observed difference in sample means as large or larger than ours is very unlikely to occur.2867

It is likely to occur with probability .049 if the mean score for all bar employees, the true score, is actually the same.2876

This is a pretty good contender because the SDOD is talking about how .049 means very unlikely.2889

This I would leave as a definite contender. 2900

Maybe there is a better answer.2902

There is a 4.9% chance that the mean score of all bar employees after the ban is actually lower than before the ban.2905

This describes a small chance of the opposite-hypothesis picture; that is probably not the case.2915

It depends on what the null hypothesis was.2925

The null hypothesis in a two-mean hypothesis test is usually that they are the same, not that one is less than the other.2934

We do not usually do that.2953

Maybe there is a way that could be true.2954

It is probably not true if we did hypothesis testing at all.2958

Only 4.9% of the bar employees had their score drop but the other 95% had their scores increase.2961

This would be a correct interpretation if we are not talking about the SDOD.2971

If this was not a reflection of the population then maybe that would be true.2977

This is not talking about population, it is talking about the SDOD.2982

This is a wrong interpretation.2987

The correct answer is c.2990

That is our last example for hypothesis testing with two independent means.2992

Thank you for joining us on www.educator.com.2998

Hi and welcome to www.educator.com.0000

We are going to talk about confidence interval and hypothesis testing for the difference of two paired means.0002

We have been talking about independent samples so far: one sample, then two independent samples.0008

We are going to talk about paired samples.0017

We are going to look at the difference between independent samples and paired samples.0020

We are also going to try and clarify the difference between independent samples 0025

and independent variables, because paired samples still use independent variables.0029

We are going to talk about two types of t-tests, also called hypothesis tests.0035

One is what we have covered so far, with independent samples.0039

The new one will be covered with paired samples.0046

We are going to introduce some notation for paired samples, go through the steps of hypothesis testing 0050

for paired samples and adjust or add on to the rules of SDOD that we already have looked at.0058

Finally we are going to go over the formulas that go with the steps of hypothesis testing for independent as well as paired samples.0069

We are going to briefly cover confidence interval for paired samples.0081

Here is the goal of hypothesis testing. 0085

Remember, with one sample our goal was to reject the null when we get a sample 0091

that is significantly different from the hypothesized population.0098

When we talk about two-tailed hypotheses we are really saying the 0102

sample might be significantly higher or significantly lower than the hypothesized population. 0107

Either way, we do not care.0114

The sample is too low or too high, it is too extreme in some way. 0116

If that is the case, we reject the null.0123

In two samples, what we do is we reject the null when we get samples that 0125

are significantly different from each other in some way.0132

Either one is significantly lower than the other or the other is significantly lower than the one.0135

It does not matter.0141

Our null hypothesis becomes this idea that x - y either = 0 because they are the same 0142

and the alternative is that it does not equal 0 because they are different from each other.0152

If they are the same, that is the null hypothesis, and when they are different, that is the alternative hypothesis.0159

Remember another way you could write this is by adding y to each side and then you get x=y.0168

X = y they are the same.0174

In that way you know that you are covering the entire space of all the differences, and at the end of the day 0176

we can figure out whether we think they are the same or we do not think that they are the same.0186

Let us talk about independent samples versus paired samples because from here on out,0195

we are totally going to be dealing with paired samples. 0203

It would help to know what those are.0205

With independent samples, the scores are derived separately from each other. 0208

For instance they came from separate people, separate schools, separate dishes.0212

The samples are independent from each other. 0219

My getting of the sample had nothing to do with my getting of this other sample.0222

In dependent samples, another word for paired samples, the scores are linked in some way.0227

For instance, they are linked by the same person: my score on the math test and my score on the English test are linked because they both come from me.0236

Maybe we have a married couple: we ask one spouse how many children they would like to have 0248

and we ask the other spouse how many children they would like to have.0258

In that way, although they come from different people these scores are linked because they come from the same married couple.0262

Another thing might be a pre and post tests of the class.0269

Maybe a statistics class might do a pre and post test.0276

Maybe 10 different statistics classes from all over the United States picked to do a pre and post test.0279

Those tests are linked because the same class did the first test and the second test.0287

10 different classes did the pairs.0295

It is not just a hodgepodge of pretest scores and a hodgepodge of posttest scores; it is more like a neat line 0298

where the pretest score for this class is lined up with the posttest score for that same class.0309

They are all lined up next to each other. 0317

We know these definitions, let us see if we can pick them out. 0319

Which of these is which?0327

The test scores from Professor X's class versus test scores from Professor Y's class.0329

These would be independent samples because they just come from different classes.0336

Each score is not linked to any other score in any particular way.0341

River samples from 8 feet deep versus 16 feet deep.0346

This also does not really seem like paired samples unless they went through 0350

some procedure to make sure it is the same spot in the river.0355

That is probably an independent sample.0360

Male heights versus female heights: there is just a jumble of heights over here and a jumble of heights over here.0364

They are not matched to each other.0370

They are independent samples.0372

Left hand span versus right hand span: in this case these two spans basically came from the same person.0375

It is not a hodgepodge; it is left hand and right hand from person 1, left hand and right hand from person 2, person 3, and so on.0384

I would say this is a paired sample.0392

Productive vocabulary of two-year-old infants raised by bilingual parents versus monolingual parents.0395

It is a bunch of scores here and a bunch of scores here. 0402

They are not lined up in any way.0406

I would say independent.0408

Productive vocabulary of identical twins, twin 1, twin 2.0410

Here we see paired samples.0417

Scores on an eye gaze task by autistic individuals and age-matched controls.0420

Autistic individuals often have trouble with eye gaze and in order to know that you 0427

would have to match them with people who are the same age who are not autistic.0432

Here we have each autistic individual lined up with somebody who is the same age but is not autistic.0438

They are these nice even pairs and each pair has eye gaze scores.0445

I would say these are paired samples.0452

Hopefully that gives you a better idea of some examples of paired samples. 0457

What about independent samples versus independent variables?0462

What you will also see is the abbreviation IV.0469

In multi-sample statistics, like 2, 3, or 4 samples, we are often trying to find some 0471

predictive relationship between the IV and the DV.0477

The independent variable and the dependent variable. 0481

Usually this is often called the test or the score.0484

The independent variable is seen as the predictor and the dependent variable 0488

is the thing that is being predicted, the outcome.0495

We might be interested in the IV of parent language and you might have two levels of bilingual and monolingual.0498

You might be interested in how that impacts the DV of children’s vocabulary.0519

Here we have these two groups, bilingual and monolingual.0534

We have these scores from children, and these are independent samples because 0542

although we have two groups, these scores are not linked to each other in any particular way. 0550

They are just a hodgepodge of scores here and a hodgepodge of scores here. 0556

On the other hand, if our IV is something like age of twin.0560

We have the slightly older twin, older by a couple of minutes or seconds, and the younger twin.0572

We want to know whether that has an impact on vocabulary.0582

We will have a bunch of scores for older twins versus younger twins, but these scores are not just in a jumble. 0593

They are linked to each other because these are twins.0611

They are identical.0615

This is the picture you could draw: the IV tells you how you determine these groups, and the paired part tells you whether the scores in one group are linked to some scores 0617

in the other group for some reason or another.0640

Here they are linked but here they are not linked.0642

All t tests are what we have been calling hypothesis testing.0646

We are going to have other hypothesis tests but so far we are using t test.0657

T tests always have some sort of categorical IV so that you can create different groups 0662

and in t-tests it is always technically two groups, two means, paired means.0668

The DV is always continuous.0674

The reason that the dependent variable, the scores, is always continuous is because you need to calculate means in order to do a t test.0678

We are comparing means and looking at standard error, and you cannot compute a mean 0687

and standard error for categorical variables.0694

If you have a categorical variable, such as yes or no, you cannot quite compute a mean for it.0697

Or if you have a categorical variable like red or yellow, you cannot compute a standard error for that.0707

If you did have a categorical DV and a categorical IV, you would use what is called a logistic test.0713

We are actually not going to cover that.0721

That does not usually get covered in descriptive and inferential statistics.0723

Usually you have to wait for graduate-level work or higher-level statistics courses.0727

There are two types of t test given all of this.0735

Remember all t tests have this.0740

These are all t tests.0742

Both of these t tests are going to use categorical IV and continuous DV.0743

The first kind of t test is what we have been talking about so far, independent samples t tests. 0750

The second type is what we are going to cover today, called paired or dependent samples t tests. 0762

Both of these have categorical IV and continuous DV.0769

Let us have some notations for paired samples.0778

Just like before with the two-sample independent samples t test, for one sample, 0784

you might call it x so that its individual members are x sub 1, x sub 2, x sub 3. 0792

Remember each sample is a set of numbers.0797

It is not just one number but a set of numbers.0800

Second sample, you might call y.0803

I did not have to pick x and y though.0807

I could pick other letters.0809

Y could just mean another sample.0810

You could have picked w or p or n.0816

We usually try to reserve n, t, f, d, k for other things in statistics, but it is mostly by culture more than we have to do it by rules.0820

Here is the third thing you need to know for paired samples.0837

With paired samples remember x sub 1 and y sub 1 are somehow linked to each other.0842

They either come from the same person or the same married couple or 0848

they are a set of twins or it is an autistic person and age matched control.0853

There are all these reasons why they might be linked to each other in some way.0859

And because of that you can actually subtract these scores from each other and get a set of different scores.0865

That is what we call d.0872

D is x sub 1 – y sub 1.0874

What is the difference between these two scores?0877

What is the difference between these two scores and what is the difference between these two scores?0882

These are paired differences.0888

Let us think about this.0891

If the mean of x is denoted as x bar and the mean of y is denoted as y bar, what do you think the mean of d might be?0894

I guess d bar and that is what it is.0902

If you got the mean of this entire set that would be d bar.0907

Once you have d bar, you could imagine having a sampling distribution made of d bars.0912

It is not x bars anymore, sampling distribution of the mean is the sampling distribution of the mean of a whole bunch of differences.0924

That is a new idea here.0942

Imagine getting a sample of d, calculating the mean d bar and placing it somewhere here.0945

You will get a sampling distribution of d bars.0959

That is what we are going to learn about next. 0964

These are means of a bunch of linked differences.0966
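
In code form, this new notation is just elementwise subtraction; a minimal Python sketch with made-up numbers (not the lesson's data):

    import numpy as np

    x = np.array([12, 15, 11])   # hypothetical paired scores
    y = np.array([10, 11, 10])
    d = x - y                    # paired differences: x1-y1, x2-y2, x3-y3
    d_bar = d.mean()             # the mean of those differences, d bar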

When we go through the steps of hypothesis testing for paired samples it is going 0971

to be very similar to hypothesis testing for independent samples with just a few tweaks. 0979

First you need to state the hypotheses, and often our null hypothesis is that the two groups of scores, the two samples x and y, are the same. 0985

Usually that is the null hypothesis. 0997

You set the significance level: how weird does our sample have to be for us to reject that null hypothesis?1004

We set the decision stage and we draw here the SDOD of d bars.1013

We identify the critical limits and rejection regions and we find the critical test statistic.1020

From here on out I am going to assume that you are almost never going to 1027

be given the actual standard deviation of the population. 1033

From here on out I am usually going to be using t instead of z.1038

Then we use the actual sample differences and SDOD in order to compute the mean differences.1041

We are not dealing with just the means, we are dealing with mean differences, test statistics, and p value.1053

We compare the sample to the population and we decide whether to reject the null or not.1061

Things are very similar so far.1069

It is going to make us figure out what SDOD is all about.1073

The rules of the SDOD: we are now adding on to the rules for the sampling distribution of 1083

the differences between means that we talked about before.1093

We are going to add onto that.1100

If the SDOMs for x and y are normal, then the SDOD is normal too.1103

That is the same here.1109

The mean for the null hypothesis now looks like this. 1111

Remember, for this SDOD with the bar, the mean is no longer called mu sub x bar - y bar, because we are no longer taking x bar - y bar.1116

We take a whole bunch of differences and then find the mean of them.1132

That is called d bar.1136

That is the new notation for the differences of paired samples. 1137

Here the mu sub d bar for the null hypothesis equals 0. 1147

Remember, for independent samples it was mu sub x bar - y bar that equaled 0. 1153

It is very similar.1162

For standard error, for independent samples when variance is not homogeneous, which is largely the case, 1164

what we would use is s sub x bar - y bar.1174

Instead here for paired samples, we would use s sub d bar.1182

Here what we would do is take the square root of the variance of 1188

x bar plus the variance of y bar, added together.1194

If you wanted to write that out more fully, that would be the square root of (the variance of x / n sub x + the variance of y / n sub y).1207

That is what you would do if life was easy and you have independent samples.1228

That is what we know so far.1238

What about for paired samples?1240

For paired samples you have to think about the world differently. 1242

You have to think first we are getting a whole bunch of differences then we are finding the standard error of those differences.1245

Here is what we are going to do.1253

Here we would find standard error of those differences by looking at 1256

the standard deviation of the differences ÷ how many differences we have.1263

This sounds a little crazy, but when I show it to you, it will be much easier to understand.1272

I think a lot of people have trouble understanding what the difference is between this and this.1281

It is hard to keep track of all these differences.1287

We have to draw SDOD.1291

You have to remember it is made up of a whole bunch of d bars.1302

It is made up of a whole bunch of these.1312

You have to imagine pulling out samples, finding the differences, 1314

averaging those differences together, then plotting it here.1324

Each single sample has a standard deviation, and it is made up of differences.1328

Once you plot a whole bunch of these d bars on here, this is going to have a standard deviation and that is called standard error.1337

Here we have mu sub d bar and this standard error is standard error sub d bar.1347

That is the standard deviation of the d bars, whereas this one is just for one sample.1359

This guy is for entire sampling distribution.1367

Let us talk about the different formulas that go with the steps of hypothesis testing.1378

Hopefully we can drive home the difference between SDOD from before and SDOD now, we will call it SDOD bar.1385

For independent samples, first we had to write down a null hypothesis and alternative hypothesis.1398

Often a null hypothesis was that the mu sub x bar - y bar = 0 or mu sub x bar - y bar does not equal 0 as the alternative.1408

In paired samples our hypothesis looks very similar except now we are not dealing with x bar - y bars but we are dealing with difference bars. 1421

The average of differences. 1438

The mean differences. 1440

This is the differences of means.1442

This is mean of differences.1448

We will get into the other one.1453

Mu sub d bar does not =0.1457

So far, this seems okay. 1463

Here we have a difference of means, while d bar is the mean of a whole bunch of differences.1467

We get a whole bunch of differences first, then we find the mean of it. 1484

Here we find the means first and we find the difference between the means.1489

This part is actually the same.1495

It is alpha =.05 usually two tailed.1500

Step 2, we got that.1510

Significant level, we get it.1515

Step 3 is where we draw the SDOD here.1517

Here we draw the SDOD bar.1521

Thankfully you could draw it in similar ways, but conceptually they are talking about different things. 1530

Here is how we got it: we pulled a bunch of x. 1538

We got the mean, then we pulled a bunch of y, got that mean, subtracted those means, and plotted that here.1543

We did that millions and millions of times to get a whole bunch of those.1550

We got the entire sampling distribution of differences of means.1554

Here what we did was we pulled a sample of x and y.1560

We got a bunch of differences, and then we averaged those differences and plotted that back here.1568

Here this is the sampling distribution of the mean of differences.1579

Where the mean goes in the order of operations is really important.1591

Here we get mu sub x bar - y bar, but here we get mu sub d bar.1599

In order to find the degrees of freedom for the differences here what we did was 1607

we found the degrees of freedom for x and add it to it the degrees of freedom for y.1615

We are going to do something else: in order to find the degrees of freedom for 1620

the differences, we are going to count how many differences we averaged together and subtract 1.1626

That is n sub d - 1.1637

Finally we need to know the standard error of the sucker.1644

The standard error of differences here, we called it s sub x bar - y bar, and that 1650

was the square root of the variance of x bar + the variance of y bar.1659

The variance of these two things added together then take the square root.1670

This refers to the spread of this distribution. 1676

This difference here is actually going to be called s sub d bar and that is 1688

going to be standard deviation of your sample of differences ÷ √n of those differences.1696

Last thing: I am leaving off step 5 because step 5 is self-explanatory.1707

Step 4, now we have to find the sample t.1719

Our sample is really two independent samples.1723

We have a sample of x and a sample of y.1732

Because of that we need to find the difference between those two means. 1734

We find the mean of this group first, the mean of this group and we subtract.1741

We find the means first, then we subtract, and then subtract the mu sub x bar - y bar.1747

I want you to contrast this with this new sample t.1756

Here we get a bunch of x and y, we have two samples.1761

We find the differences first then we average.1766

Here we find the average first and then find the difference.1773

Here we find the differences then we find the average.1776

That is going to be d bar.1782

D bar – mu sub d bar.1784

This is getting a little bit cramped.1790

We divide all of that by the standard error of the difference and you could substitute that in.1796

Divide all that by the standard error of the differences.1803

You see how here it really matters when you take the differences.1811

Here you find the differences first and then you just deal with the differences.1820

Here, you have to keep finding means first then you find the differences between those means.1824
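
Here is a minimal side-by-side sketch of the two step-4 formulas (Python with numpy is an assumption; x and y are hypothetical arrays), to show where the subtraction happens in each:

    import numpy as np

    def independent_sample_t(x, y):
        # Difference of means: average each group first, then subtract.
        se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
        return (x.mean() - y.mean() - 0) / se

    def paired_sample_t(x, y):
        # Mean of differences: subtract pairwise first, then average.
        d = x - y
        se = d.std(ddof=1) / np.sqrt(len(d))   # s sub d bar
        return (d.mean() - 0) / se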

Let us talk about the confidence interval for these paired samples.1830

The confidence intervals are going to be very similar to the confidence intervals that you saw before with independent samples.1841

I am just covering it very briefly.1849

Let us think about independent samples.1851

In this case, the confidence interval was just going to be the difference of means + or - t × the standard error.1854

You need to put in the appropriate standard error and use the appropriate degrees of freedom as well. 1877

In confidence intervals for paired samples it is going to look very similar except instead of having the differences of means 1884

you are going to put in the mean difference d bar + or - t × the standard error. 1897

Remember standard error here is going to mean s sub x bar - y bar.1906

The standard error here is going to be s sub d bar.1914

For independent samples, in order to find degrees of freedom you take the degrees of freedom for x and add it to the degrees of freedom for y.1918

For paired samples, in order to find degrees of freedom you find the degrees of freedom for d, 1928

your sample of differences, and that equals how many differences you have - 1.1935
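
A minimal sketch of the paired-samples confidence interval just described, with the critical t coming from the inverse of the t distribution (the same job Excel's t inverse does); scipy is an assumption here.

    import numpy as np
    from scipy import stats

    def paired_ci(d, confidence=0.95):
        # d is the array of paired differences
        d_bar = d.mean()
        se = d.std(ddof=1) / np.sqrt(len(d))        # s sub d bar
        df = len(d) - 1                             # n sub d - 1
        t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df)
        return d_bar - t_crit * se, d_bar + t_crit * se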

Let us talk about examples.1945

There is a download available for you, and it says: this data set includes the highway 1953

and city gas mileage for a random sample of 8 cars.1958

Assume gas mileage is normally distributed.1962

It says that because, as you can see, our sample is quite small, so otherwise we would not have 1965

a reason to assume a normal distribution for the SDOM.1970

Construct and interpret the confidence interval and also conduct an appropriate t test to check your confidence interval interpretation. 1974

Here I have my example; I am going to example 1.1984

Here we have 8 models of cars, their highway miles per gallon, as well as their city miles per gallon.1989

You can see that there is a reason to consider these things as linked. 2004

They are linked because they come from the same model car.2010

Let us construct the confidence interval.2013

Remember in confidence interval what we are going to do is use our sample in order to predict something about our population.2018

Here we will use our sample differences to say something about the real difference between these two populations.2028

Here is the big step of difference when you work with paired samples.2036

You first have to find the paired differences, the set of d.2042

That is going to be, for each of these, highway minus city.2048

That is x sub 1 - y sub 1, x sub 2 - y sub 2, x sub 3 - y sub 3.2054

Here are all our differences, and we can now find the average of the differences.2062

We can find the standard deviation of these differences and all the stuff.2067

Let us find the confidence interval; it helps me to say that what I need is my d bar + or - t × the standard error.2071

In order to find my t, I need to find my degrees of freedom.2090

My degrees of freedom is just going to be the degrees of freedom of the d.2098

How many differences I have -1.2107

Let us count how many differences there are; there should be the same number of differences as cars, so that is 8 - 1 = 7. 2110

Once I have that, I could find my t.2121

I also need to find d bar.2126

Let us find t.2130

I need to find t using t inverse, and I am probably going to assume a 95% confidence interval.2134

My two tailed probability is .05 and my degrees of freedom is down here and so that will be 2.36.2146

Those are my outer boundaries and let us also find d bar, the average.2157

I almost have everything I need. 2165

I just need standard error.2172

Standard error here is going to be s sub d ÷ the square root of how many differences I have.2174

That is going to be the standard deviation of my differences ÷ the square root of 8 because I have 8 differences.2187

Once I have that, then I can find the confidence interval.2206

The upper boundary will be d bar + t × standard error, and the lower boundary is the same thing, except with - t × standard error.2209

My upper boundary is at 10.6. 2244

My lower boundary is at 7.6.2249

To interpret my confidence interval, I would say: for the real difference between highway miles per gallon 2253

and city miles per gallon, I have 95% confidence that the real difference in the population is between 7.6 and 10.6.2264

Notice that 0 is not included in here in this confidence interval.2274

It would include 0 if highway and city miles per gallon could be equal to each other by chance.2280

There is less than 5% chance of them being equal to each other. 2288

Because of that, I would guess that we would also reject the null because it does not include 0.2295

Let us do hypothesis testing to see if we really do reject the null. 2304

Because the interval does not include 0, I would predict that we would reject the null.2312

Let us go straight into hypothesis testing here.2314

First things first. 2317

Step 1, the null hypothesis: this should be about the mu sub d bar. 2320

Here let us do hypothesis testing. 2332

The first step is mu sub d bar is equal to 0.2344

Highway and city gas mileage are the same but the alternative is that one of them is different from the other.2356

That they are different from each other in some way.2366

It significantly stands out.2369

This difference stands out.2371

That would be that mu sub d bar does not equal 0.2373

Step 2, my significance level, the false alarm rate, is set low at .05, two-tailed.2378

Let us set our decision stage.2392

I need to draw an SDOD bar and here I put my mu as 0 because the mu sub d bar will be 0.2397

Let us also find the standard error here. 2418

The standard error here is going to be s sub d bar and that is really the standard deviation of the d / √n sub d.2421

That I could compute here.2434

Actually, we can compute that right away because it is the standard deviation of the d's / the square root of how many d's I have.2439

That is .64.2449

What is my degrees of freedom?2455

That is 7 because that is how many differences I have -1.2458

Based on that I can find my t and my t is going to be + or - 2.36. 2466

Let us deal with our sample. 2476

When we talk about the sample t, what we really need is the mean of our sample differences, and that would be d bar.2483

I would just put x bar sub d because it is a simpler way of doing it.2502

Minus the mu, which is 0, divided by the standard error, which is .64.2505

I could just put this here so I can skip directly to step 4 and I will compute my sample t.2512

I should say this is my critical t so that I do not get confused.2527

My sample t is going to be d bar - mu / standard error.2533

That is d bar - mu which is 0 ÷ standard error = 14.3.2546

I can also find the p value, and I am guessing my p value will probably be tiny.2564

Here, a t of 14.3 is really extreme, so the p value will be really small. 2573

My p value is going to be t dist because I want my probability. 2577

I put in my t, my degrees of freedom which is 7, and I have a two-tailed hypotheses.2586

That is going to be 2 × 10^-6.2593

Imagine: .000002. Given this tiny p value, much smaller than .05, we should say at step 5: reject the null.2610

We had predicted that we would reject the null because the CI, the confidence interval, did not include 0.2630

Good job confidence interval and hypothesis testing working together.2636
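
Putting the whole paired test for this example into one minimal sketch (scipy assumed; the array names hw and city are placeholders standing in for the downloaded columns):

    import numpy as np
    from scipy import stats

    def paired_t_test(x, y):
        d = x - y                              # one difference per car model
        se = d.std(ddof=1) / np.sqrt(len(d))   # the lesson reports about .64
        t = (d.mean() - 0) / se                # the lesson reports about 14.3
        df = len(d) - 1                        # 8 cars - 1 = 7
        p = 2 * stats.t.sf(abs(t), df)         # about 2 x 10^-6
        return t, p

    # t, p = paired_t_test(hw, city)   # hw and city come from the download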

Example 2, see the download again, this data set shows the average salary earned by first-year college graduates.2641

They graduated at the bottom or top 15% of their class, for a random sample of 10 colleges ranked in the top 100 public colleges in the US.2650

Is there a significant difference in earnings that is unlikely to have occurred by chance alone?2661

We want to know is there a difference between these top 15% folks and the bottom 15% folks.2667

They are linked by having graduated from the same college. 2674

We would not necessarily want to compare people from the top 15% of one college that might be really good 2678

to the bottom percentage of people from a college that might not be as great.2687

We really want both groups from the same college; it does not matter if you are in the top 15% or bottom 15%. 2693

If you go to example 2, you will see these randomly selected colleges and the earnings in dollars per year, salary per year for the bottom 15%, as well as the top 15%.2699

Because it is a paired sample what we want to do is start off with d or set up d.2718

What is the difference between bottom and top?2724

We are going to get probably a whole bunch of negative numbers assuming that top earners earn more than bottom.2729

Indeed we do, we have a bunch of negative numbers.2738

If you wanted to turn these negatives into positives, you just have to remember 2740

which one you decided as x and which one you decided to be y.2745

I will call this one x and I will call this one y.2750

It will help me remember which one I subtracted from which.2759

I am going to reverse all of these and it is just going to give me the positive versions of this.2764

Here is my d.2771

Let us go ahead and start with hypothesis testing.2773

This part I will do by hand.2777

Step 1, the null hypothesis says something that the top 15% folks and the bottom 15% folks are the same.2783

Their difference is going to be 0. 2796

The mu sub d bar should be 0 but the alternative is that they are different.2800

We are neutral as to how they are different.2807

We do not know whether one earns more than the other.2811

Whether the top earns more than the bottom or the bottom earns more than the top.2813

We can use our common sense to predict that the top ranking folks might earn more, but right now we are neutral.2818

Step 2 is our alpha level, or significance level, of .05. 2827

Let us say two-tailed.2834

Step 3, drawing the SDOD, the mean differences and here we will put 0.2837

And let us figure out the standard error.2850

The standard error here would be s sub d bar and that would be the standard deviation of d / √(n ) sub d.2857

We also want to figure out the degrees of freedom, so that is going to be n sub d - 1, and we also want to find the t.2871

These are all things you can do in Excel.2881

Step 3, standard error is going to be s sub d bar and that will be s sub d ÷ √n sub d.2884

That will be the standard deviation of our sample of d ÷ the square root of how many there are, and there are 10.2903

Here is our standard error. 2923

What is our degrees of freedom?2926

That is going to be 10-1 =9. 2930

What is our critical t?2935

We know it is a critical t because we are still in step 3 the decision stage.2939

We are just setting up our boundaries.2944

That is going to be t inverse because we already know the probability .05 two-tailed,2947

degrees of freedom being 9 and we get 2.26.2953

It is + or -2.26 those are our boundaries of t.2959

Step 4, this will say what is our sample t?2966

And that is going to be our d bar - mu / standard error.2973

I will write step 4 here and so I need to find t which is d bar – mu/ standard error.2981

I need to find the d bar for sure, and the standard error.2994

My d bar is the average of all my differences and that is about $12,000 - $13,000 a year.3000

That is just right after college. 3016

I need to find the d bar - 0 ÷ the standard error to give me my sample t.3018

Note the difference between the sample t and the critical t.3033

8.05 is not the raw average of the differences; it is our sample t.3041

The top 15% are on average earning $13,000 more than the bottom 15%.3056

The sample t gives us how far that difference is from 0 in terms of standard errors.3065

We know that is way more extreme than 2.26.3075

Let us find the p value.3080

We put it in t dist because we want to know the probability.3083

Put in our t, degrees of freedom, and we have a two-tailed hypotheses.3087

That would be 2 × 10^-5.3094

Our p value = 2 × 10^-5, which is a very tiny number, much smaller than the alpha.3100

We would reject the null hypothesis.3113

Is there a significant difference in earnings that is unlikely to have occurred by chance alone?3118

There is always going to be a difference in earnings between these two groups of people, the top 15 and the bottom 15%.3125

Is this difference greater than would be expected by chance?3131

Yes it is because we are rejecting the model that they are equal to each other. 3135
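
As a side note, once the two salary columns are in arrays, a library routine can run this whole paired test in one call; a minimal sketch, where the short arrays are placeholders just so it runs (the real columns come from the download):

    import numpy as np
    from scipy import stats

    bottom = np.array([30000.0, 28000.0, 31000.0])   # placeholder salaries
    top = np.array([43000.0, 41000.0, 44000.0])

    # Paired (related-samples) t test: equivalent to forming d = top - bottom
    # and computing t = d_bar / (s_d / sqrt(n)) with df = n - 1.
    t, p = stats.ttest_rel(top, bottom)

For the real data, the by-hand values above were a t of about 8.05 and a p of about 2 × 10^-5.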

Example 3, in fitting hearing aids to individuals, researchers wanted to examine whether 3141

there was a difference between hearing words in silence or in the presence of background noise.3151

Two equally difficult word lists are randomly presented to each person. 3156

One list is presented in silence and the other with white noise, in a random order for each person.3160

This means that some people get silence then noise, and other people get noise then silence.3166

Are the hearing aids equally effective in silence and with background noise?3171

First conduct the t test assuming that these are independent samples then conduct the t test assuming that these are paired samples.3178

Which is more powerful?3185

The independent sample t-test or paired samples t test?3188

We need to figure out what is meant by more powerful.3192

I need some scratch paper here because the problem was so long I am just going to divide the space in half.3196

This top part I am going to use for assuming independent samples.3205

They are not actually independent samples, but I want you to see the difference between doing this hypothesis testing as independent samples and doing it as paired samples.3211

Step 1, the hypotheses: the null hypothesis is that if I get these samples 3224

and they are independent, this difference of means on average is going to be 0. 3231

The mu sub x bar - y bar is going to = 0. 3240

The alternative hypothesis is that the mu sub x bar - y bar does not equal to 0.3244

Here I am going to put alpha =.05, two-tailed.3252

I am going to draw myself an SDOD.3261

Just to let you know it is the differences of means.3268

Here we know that this is going to be 0 and we probably should find out the standard error.3273

The standard error of this difference of means is going to be the square roots of the variance of x bar + the variance of y bar.3283

I am going to write this out as s sub x squared / n sub x + s sub y squared / n sub y.3303

That is, the variance of x bar is the variance of x / n, and likewise the variance of y / n.3315

We will probably need to find the degrees of freedom and that is going to be n sub x – 1 + n sub y -1.3321

Finally we will probably need to know the critical t but I will put that up here.3340

Let us look at this data, go to example 3. 3345

Click on example 3 and take a look at this data.3353

Let us assume independent samples.3356

Here we are going to assume that this silence is just one group of scores and 3359

this background noise is another group of scores and they are not paired.3368

They are actually paired.3372

This belongs to subject 1, these two belong to subject 3, this belongs to subject 5.3374

Here is the list order, it is A, B.3380

We get list A first then list B, and here is the noise order.3384

They get it silent first then noisy.3388

This guy gets noisy first then silent.3390

All these orders are randomly assigned and the noise orders are randomly assigned as well.3393

For this exercise, we are going to assume we do not have any of this stuff.3406

We are going to assume this is gone and that this just a bunch of scores from one group of subjects 3412

that listen to a list of words in silence and another group of subjects that listen to list of words in background noise. 3418

We do the independent samples t test and we start with step 3. 3427

We know we need to find the standard error, which is going to be the square root of the variance of x ÷ n sub x + the variance of y ÷ n sub y.3433

All of that added together, under the square root.3473

We need to find the variance of x.3476

We need to find n sub x.3479

We also need to find the variance of y and n sub y before we can find standard error. 3481

Variance is pretty easy.3488

We will just call silence x and the count of this is 24.3491

The count for y is going to be the same, but what is the variance of y?3504

The variance of y is slightly different.3513

In order to find this guy, the standard error, we are going to put in square root of the variance of x ÷ 24 + the variance of y ÷ 24.3521

We get a standard error of 2.26, and the standard error is just in terms of the number of words accurately heard. 3547

We also need to find the degrees of freedom.3564

In order to find degrees of freedom, we need the degrees of freedom for x + degrees of freedom for y.3567

The degrees of freedom for x is just going to be 24 - 1 and the degrees of freedom for y is also going to be 24 – 1.3574

The new degrees of freedom is 23 + 23 = 46.3586

Once we have that we can find our critical t.3593

For our critical t, we know that alpha is .05, so we are going to use t inverse and 3606

put in our two-tailed probability and the degrees of freedom, 46.3615

We get a critical t of + or -2.01.3620

Our critical t is + or -2.01.3625

I will just leave that stuff on the Excel file.3631

Given all this now let us deal with the sample.3636

When we find the sample t what we are doing is finding the difference in means and then find the difference 3641

between that difference and our expected difference 0 and divide all of that by standard error to find how many standard errors away we are.3653

Here I will put step 4, sample t.3666

In order to find sample t we need to find x bar - y bar - mu and all of that ÷ standard error.3672

Thankfully, we have a bunch of those things available to us quite easily.3697

We have x bar, we can get y bar, we can get standard error.3701

Let us find x bar, the average number of words heard accurately in silence and that is about 33 words.3708

The average number of words heard correctly with background noise, and that is 29 words.3723

Is the difference of about 4 words big enough to be statistically different?3732

We would take this minus this, and we know mu = 0 so I am going to ignore that, ÷ the standard error found up here.3741

That would give us 1.75.3754

1.75 is not more extreme than + or -2.01.3758

With 1.75 we will actually say do not reject.3765

We should find the p value too.3769

This p value should be greater than .05.3773

We will put in t dist then our sample t, degrees of freedom which is 46 and we want a two tailed and we get .09.3777

.09 is greater than .05.3790

Step 5, fail to reject.3797

Now that we have all that, we want to know: is the paired version more sensitive?3809

Can we detect the statistical difference better if we use paired samples?3821

Let us start.3829

Here we would say p = .09, and step 5 is fail to reject.3831

It is not outside of our rejection zone, it is inside our fail to reject zone.3846

Let us talk about the null hypothesis here.3855

What we are going to do is find the differences first then the mean of those differences.3860

We are saying that if they are indeed not that different from each other, that mean difference should be 0. 3866

The alternative is that the mean difference is not equal to 0.3871

Once again alpha = .05, two-tailed, and now we will draw our SDOD bar, which means 3877

it is a sampling distribution of means made of differences.3892

Here we want to put 0.3909

We probably also want to figure out standard error somewhere along the line, 3919

which is going to be s sub d bar which is s sub d ÷ √n sub d.3924

We probably also want to find the degrees of freedom, which is going to be n sub d -1.3933

We probably also want to find the critical t.3941

Let us find out that.3945

Here I will start my paired samples section.3949

I will also start with step 3. 3955

Let me move all of these over here.3957

Let us start here with step 3.3965

Let us find standard error and that is going to be s sub d not d bar ÷ √n sub d.3970

We can find s sub d very easily and we could also find n sub d.3986

First we need to create a column of d.3994

I will find the standard deviation of the d's, but I realize that I do not have any d's yet.4002

The d's look something like this: silence - background noise.4008

This is how many more words they are able to hear accurately in silence than in background noise.4020

Here we see that some people hear a lot of words better in silence.4026

Some people hear words better with a little bit of background noise. 4032

Some people are exactly the same.4035

We could find a standard deviation of all these differences.4037

We could also find the mean of them.4045

The n of them will be 24 because there are 24 people they came from.4053

There are 24 differences.4062

We could find out standard error. 4065

Standard deviation of d ÷ √24.4070

That is standard error, notice that is quite different from finding a standard error of independent samples.4076

Let us find degrees of freedom for d and that is going to be n sub d -1 and that is 24 -1.4086

Our critical t should be t inverse with .05 two-tailed and degrees of freedom 23, and we get 2.07.4101

So far it seems that our standard for how extreme it has to be is farther out.4127

That makes sense because the degrees of freedom is smaller than 46.4134

+ or -2.07.4139

Let us talk about our sample.4152

In order to find our sample t, we want to find the average of the differences, subtract the hypothesized mu, 4154

and divide all of that by standard error to find out how many standard errors away our sample mean difference is.4169

We also want to find p value.4179

Here is step 4, our sample t would be d bar - mu ÷ standard error. 4181

What is d bar and how would we find it?4196

Just use the average function on our d's; that is d bar.4205

We can do d bar - 0 / standard error = 2.97.4211

That is more extreme than 2.07.4226

Let us figure out why.4233

We might look at standard error, the standard error is much smaller and the steps are smaller.4235

How many steps do we need to take to get all the way out to this d bar?4251

There are more of them than of these bigger steps. 4257

These are almost twice as big.4261

With these bigger steps, you need fewer of them.4263

That is what the sample t gets at: how many of these standard errors, 4267

how many of these steps, does it take to get all the way out to d bar or x bar - y bar?4273

We need almost 3 steps out.4279

What is our p value?4282

Our p value should be less than .05 that is going to be t dist.4287

Here is our t value; I will put in our degrees of freedom and two-tailed, and it is .007. 4293

That is certainly less than .05.4303

Step 5, here we reject whereas here we fail to reject.4306

Since there is this difference and we detected it with this one but not with this one, 4313

we would say that this is the more sensitive test given that there is something to detect out there.4321

This one detects the difference, if it does exist. 4328

This one is a little coarser; there are a couple of reasons for that.4331

One of the reasons is that the independent-samples standard errors are usually larger than the standard error of differences. 4336

Another issue is that if we look at x bar - y bar, this difference is roughly the same in both versions.4343

This difference is the same as this difference.4362

It is not that the difference is bad; it is that you are dividing by a smaller standard error here than you are here.4366

Here, the standard error is quite large. 4373

The steps are quite large. 4375

Here, the standard errors are small.4376

The steps are quite small.4379

It is because you are taking out some of the variation caused by some people 4380

just being able to hear a lot of words accurately all the time, noise or no noise.4385

Some people are very good at hearing anyway.4392

Others might have an overall low number of scores, but with d bar you do not care about those individual differences.4397

You end up accounting for those by subtracting them out.4405

Here this is a more sensitive test.4409

Here we get p = .007 and we reject.4412

Which test is more sensitive?4425

Which test is able to detect the difference, if there is a difference?4431

Paired samples.4435

In principle it is a little more complicated to collect that data, but it is worth it because it is a more sensitive test.4436
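
To see that sensitivity difference in one place, here is a minimal sketch that runs both tests on the same paired data; the arrays are simulated placeholders built so that each person has their own baseline hearing ability, which is the situation described above.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    ability = rng.normal(30, 8, size=24)           # person-to-person skill
    silence = ability + rng.normal(2, 3, size=24)  # a bit better in silence
    noise = ability + rng.normal(0, 3, size=24)

    # Independent-samples test ignores the pairing (coarser):
    print(stats.ttest_ind(silence, noise, equal_var=False))
    # Paired test subtracts out the shared person-level variation:
    print(stats.ttest_rel(silence, noise))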

Thanks for using www.educator.com.4448

Hi and welcome to www.educator.com.0000

Today we are going to talk more in-depth about type 1 and type 2 errors.0001

If you want to know more about power and effect size it is good to go through this lesson 0006

because it is going to help you understand some of the pictures that we are going to draw in the future. 0013

Here is the roadmap for today.0017

We need to know about these type 1 and type 2 errors, but we also need to know when we make those errors in relationship to hypothesis testing.0021

So far we have only used the t test as our hypothesis test.0033

We have shown these errors and their relationship to hypothesis testing before as a box, but frequently in hypothesis testing we draw distributions.0037

The SDOM to be more specific.0048

What I want to show you is how the errors fit on this distribution picture.0051

We are going to show you how the box and the distributions fit together, because these two things actually relate to each other. 0058

They refer to the same concept. 0065

They are just 2 different ways of showing you that same concept.0067

We go through hypothesis testing, but in the real world there is some reality: either the null hypothesis is true or the null hypothesis is false.0071

Although we do not know this reality, all we know is the result of our hypothesis testing.0086

There are two kinds of ways we can make errors.0092

We can make an incorrect decision by false alarming.0095

We reject the null, but we should not have rejected the null.0099

That is called the false alarm or a type 1 error. 0106

I used to get confused about which one is type 1 and which is type 2; these labels are arbitrary. 0110

I like to think of this as the more serious error: when you reject the null hypothesis, that is a more extreme thing to do.0116

This is actually more dangerous than this miss.0127

A miss is not as much of an error as actually false alarming.0131

That is how I remember it: the number 1 error you should look out for.0136

The type 1 error rate is often also called the likelihood of false alarming.0142

It is the probability of false alarming, and that is referred to as alpha.0151

If the reality that we do not know is that this null hypothesis is true we have a probability of false alarming with the rate of alpha.0157

We also have the probability of failing to reject when we should not have rejected, a correct failure; that probability is 1 - alpha.0171

These two things add up to 1.0186

The probability of false alarming + the probability of making a correct failure =1.0189

On the flip side, let us say that the null hypothesis is false; it is not a true picture or model of the world. 0199

Then we really should reject it.0210

It is not true, we should reject it, that would be a correct decision and that is called the hit where we are rejecting the null when we should have rejected it.0213

That gives us the probability of hits.0226

We could be incorrect and fail to reject when we should have rejected that is also another incorrect decision.0230

That is the type 2 error.0242

It is a miss and the probability of miss is given as beta.0244

Beta + (1 - beta) = 1. 0248

The probability of misses + probability of hits =1.0254
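
Putting those four outcomes into one small table (rows are the unknown reality, columns are our decision):

                     reject null           fail to reject
    null is true     false alarm (alpha)   correct failure (1 - alpha)
    null is false    hit (1 - beta)        miss (beta)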

In which of these boxes is the sample statistic statistically significant?0262

In which of these boxes is our p value less than .05 or whatever our alpha level is.0274

Let us think about that.0280

When we reject the null hypothesis that means our test statistic in this case t is extreme.0282

Our p value is significant and remember we mean significant as it stands out. 0294

It is very weird. 0304

In this case, these two quadrants up here are what we should worry about.0307

This is the decision we need to worry about when we reject the null hypothesis.0315

The other possibility is that when we reject the null hypothesis and our p is significant, we made a correct decision.0322

These are our two choices if we know that p is less than alpha or if our test statistic is extreme.0334

Here p is not significant. 0343

It is not too weird and because of that we will fail to reject and we can be correct in failing to reject 0349

or when we fail to reject we could be wrong by making a type 2 error.0358

Here is what I want you to know. 0363

Let us say we carry out hypothesis testing and I get a really low p value.0365

I am going to reject my null hypothesis.0372

Which error am I likely to make, a false alarm or a miss?0375

Since I rejected my null, the only error I can possibly make is this one, where I reject the null and am wrong.0381

Let us say I go through my hypothesis testing and I get p=.4.0397

Let us say I do not reject my null.0405

What mistake or what error could I have possibly made?0408

The only error I can make is a miss.0411

Here I fail to reject and I could be wrong in doing it.0414

Let us talk about distributions and how errors fit in here.0418

For a one sample t test we set up some null population.0426

This is our null hypothesis population, and our hypothesized mu might be 230.0431

We do not know whether our sample is part of this or it is part of some other population, not the null population.0443

We can hypothesize maybe it comes from some other population like this one.0454

When we set our alpha levels and create the critical t and zones of rejection and all of that stuff, what we are doing is creating this line.0459

If our sample t is outside here then we are going to reject the null.0476

So far we have only colored in this part, but we really mean this part as well as all of this part.0492

That is our reject the null zone, this entire area. 0508

In order to find out whether we should reject the null or not we also need to look past the raw score.0514

We need to look past the raw score and we need to look at it in terms of the critical t.0528

The critical t might be something like negative 2 point something.0536

We need to find out this t value, and so I am just going to make one up.0544

Let us say this t value is 5.5, and if our t value is sufficiently extreme then we reject our null hypothesis.0549

This would be our critical t and this is our sample x bar, but this is our sample t.0560

And that is how it looks out here.0574

Our possibility of making an error is this little gray spot that I have colored in red.0577

Just in case my sample really does come from these areas, I should not have rejected the null.0587

If it happens by chance, like getting 50 heads in a row, it is very unlikely, but it is still possible.0596

It is still possible that I got this x bar even though this is the true population distribution.0613

This is my possibility of making a type 1 error.0621

We actually have to add this side and this side together to get the type 1 error.0628

We know that this is alpha = .05.0640

This part is 1 - alpha, which is .95, and that is our probability of not rejecting given that the null hypothesis is true.0646
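If you want to reproduce this kind of decision rule numerically, here is a minimal sketch in Python (the course itself works in Excel; the data values, scipy, and numpy here are my own assumptions, not part of the lesson):

```python
# One-sample t decision rule, lower-tailed, with a hypothesized mu of 230
# as in the example. The data values are made up for illustration.
import numpy as np
from scipy import stats

sample = np.array([212, 198, 225, 190, 205, 218, 201, 195])  # hypothetical data
mu0 = 230       # hypothesized population mean under the null
alpha = 0.05    # tolerated false-alarm (type 1 error) rate

n = len(sample)
se = sample.std(ddof=1) / np.sqrt(n)      # estimated standard error
t_sample = (sample.mean() - mu0) / se     # sample t
t_crit = stats.t.ppf(alpha, df=n - 1)     # one-tailed (lower) critical t

# Reject the null only if the sample t is more extreme than the critical t;
# alpha is then the probability of a false alarm when the null is true.
print(t_sample, t_crit, t_sample < t_crit)
```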

That is the example of one sample hypothesis testing.0660

This is the same picture as before.0666

I have just written it more neatly for you by typing it out, and you can think of this test statistic as just t.0669

I have just written the generic words test statistic; think of this as the critical t and this as the sample t.0676

Here is the important thing to realize.0682

This gray distribution here represents an SDOM and that is why this is mu sub x bar and there is also an x bar here as a sample. 0685

This SDOM actually represents the probability when the null hypothesis is true, and that probability equals 1.0696

Remember, we talked about that before when we said the area underneath the normal distribution equals 1.0706

This represents the possibility that this may not be true and that there exists some other population that our sample really came from.0713

We just do not know what that population is.0727

That is the probability that the null hypothesis is false.0731

That normal distribution also has an area = 1.0736

What we can additionally find out is when we create the zones of rejection and we say anything outside of this critical t reject it.0744

We color in this area here.0759

What we are saying is this is the probability of rejecting given that the null hypothesis is true. 0761

This is the area where we fail to reject.0777

This probability right here represents the conditional probability of failing to reject given that H-naught, the null hypothesis, is true.0784

And that equals 1 - alpha, because this one equals alpha.0807

Those are the important things to remember. 0815

These are all conditional probabilities as we learned about previously in probability lessons.0819

Let us talk about a two sample t test.0826

The idea behind the two sample t test is almost exactly the same; there are just a couple of differences now.0830

Instead of a raw score we have difference of scores and we still have a test statistic.0838

Here our mean hypothesized difference between our non college sample and our college sample is going to be 0 because that means they are the same.0846

Remember, these are SDOD (Sampling distributions of differences of means).0862

This is 0 and this might be our actual sample difference x bar – y bar, the actual difference between the samples.0875

Same thing down here, we have this as our critical test statistic and this is our sample t.0887

We want to know whether our sample t is way far out, more extreme than our critical t.0902

Here this represents the probability that the null hypothesis, that there is no difference, is true, and that area equals 1.0910

Same thing here: the probability that the null hypothesis is false and there is actually some other distribution; we just do not know what it is.0923

We will draw it like a ghost with blue.0933

It is important to know that this mu is mu sub x bar - y bar because we are talking about SDOD.0936

That is why it is a difference of means. 0946

Once we know this, now what we need to do is figure out what these probabilities mean. 0950

Here, let me draw the cut off again, here we have our rejection zone and our fail to reject zone.0958

Once again we can find those conditional probabilities. 0977

What is the probability of rejecting given this thing is true, inside of this space where the null hypothesis is true?0981

What is the probability of failing to reject given that the null hypothesis is true?0992

Those are the conditions that we are working under.0999

It is still the same. 1005

Here we see alpha and here we see 1 – alpha.1008

Ideally when we have these differences between distributions what we really would like is that 1018

there was very little overlap between these two distributions. 1027

The null distribution and the real one that we do not know anything about.1031

It will be nice if there was very little overlap.1036

But in real life, there is usually a lot of overlap.1038

The real world is noisy and the real population might be very, very different. 1043

Or the real population might be very similar to the null population.1055

If that is the case, there is some overlap between their distributions.1071

There are some chances that we might get a score over here and it could be part of the real population or part of the null population.1077

If this is the case, we need to understand these conditional probabilities anyway.1086

Get ready here is the deal. 1098

Instead of writing real population, I am going to say not null population because we do not know what it is.1100

It is just not the null population.1112

I am going to take this picture, this gray curve, and I will draw it up here in two ways.1115

I am going to split it up into two parts. 1121

One part is going to be this blue part, this fail to reject region and that is that whole part.1123

Here I am also going to draw the red part.1144

I just draw them separated from each other so that you can see.1147

Here we have this little part and that is red and it is red because we have rejected it.1158

This is the case where we are actually wrong.1167

This is the case where we are actually right.1171

Here we are wrong because we rejected the null hypothesis when we should not have rejected it.1174

Here we are correct, because we fail to reject and truly we should not have rejected it.1180

Now that is the case if the null hypothesis population is true.1185

What happens in a case where it is not true?1193

The null hypothesis is false.1200

What happens here?1203

Here I am going to draw a different looking picture, because I am going to draw this same curve, but split up.1206

Here I am going to split this curve up like this. 1218

On this side of the line I am going to draw this little section and draw just this little section.1227

That is the part where I have failed to reject.1253

That is wrong, so I am going to color it in red, because we should have rejected it but we failed to reject it.1257

On the other side, I am going to draw the other part of this curve.1274

It is this part, and here I am going to color that in blue, because we rejected it and we should have rejected it.1279

Here we rejected the null hypothesis and we are right; we should have rejected it because we are in this new unknown population.1292

You should have rejected it.1308

Let us look at the places where we are correct.1310

We are correct here and this is called a correct failure.1314

Here we are also correct and this is called a hit.1319

Here we are incorrect and that is called a false alarm.1331

Here we are also incorrect and this is called a miss. 1344

It is a miss because we have failed to reject it.1352

We failed to hit the target when we should have hit the target.1357

Given that, let us see how the distributions and the box go together.1361

The false alarm is really that place.1369

Remember, when the null hypothesis is true I am going to draw it in black.1373

The correct decision is going to be this whole section where we fail to reject, but that is okay we are in this fail to reject zone.1378

You are good to go. 1393

Here is the other part of this part.1395

Here this is an error because we have rejected when we should not have rejected because it is actually true.1401

This is our false alarm. 1416

Now, in the case of a correct decision where you actually hit it, this means you rejected it and it is good 1418

that you rejected it because actually a different population is true, not this null population.1430

That is going to be the area where you reject; all rejections are on the right side of this line.1438

You should have rejected it because you are in a different population.1454

You are not in the null population.1461

This is a good thing for you.1462

You should have rejected it.1466

The other part of that, the other piece of that is down here.1469

It is this little piece down here.1474

Here it is incorrect, because although you are part of a different population, not the null population, you did not reject it.1477

You fail to reject.1491

I want you to notice something here.1493

All the fail-to-reject regions are always on this side of the line, because these are values that are less extreme than the mean.1500

And the rejection regions are all on this side of the line. 1508

I could also have drawn it two-tailed, showing you the other side, but I am showing you one-tailed.1511

It is all outside of the line, on the outer boundaries of this line, more extreme than the hypothesized mean.1517

This is less extreme than the hypothesized mean.1524

My hypothesized mean is somewhere here, less extreme than that.1527

It is relative to the hypothesized mean.1533

That is how these four pictures fit together.1539

When you see those two distributions drawn, do not get confused; you already know this. 1543

You just have to break it apart in slightly different ways.1548

Let us go on to some examples.1553

On the basis of results from a large sample of students from a university, a professor reports: the mean height from my sample is not significantly below 60.1556

That means he did not reject.1573

This is fail to reject.1576

If he had said it was significantly below, that would be rejecting the null.1581

Which type of error will this professor worry about?1586

He failed to reject, that is important to know.1590

What is the only error you can make if you fail to reject?1593

Well, if you fail to reject but you should have rejected it, because the null hypothesis is false, what kind of error is that?1596

That is a miss, a type 2 error.1617

The error rates are given by alpha and beta, and this one is actually beta, so these answers are wrong. 1624

These two are correct-decision rates rather than error rates, and this one is nonsense: having a non-significant result is not automatically a statistical error.1631

It is never the case.1642

You are damned if you do and damned if you do not.1643

There is always a way you can make an error either type 1 or type 2.1645

Example 2, a researcher worries about drawing an incorrect conclusion.1649

The researcher plan to select a sample of size 20 and to use the .01 level of significance.1655

Here alpha is .01.1662

In a two-tailed test of the null hypothesis the critical t should be + or - something, because it is a two-tailed test.1664

It is + or -2.86. 1676

If he obtains a t of 2.8, which type of error would he be worried about, and why?1681

Well, you definitely know that he is not going to reject.1695

He fails to reject, because this is less extreme than this.1704

This is less extreme, so he fails to reject.1717

The only error you can have when you fail to reject is if you fail to reject given the null hypothesis is false.1722

What kind of error is that?1729

That is a miss, or type 2.1733

What if he obtains a t of 2.869 which type of error would he be worried about?1744

That is more extreme than this.1752

In this case he would reject the null.1754

When is he wrong when he rejects?1757

When he should have not rejected it because the null hypothesis is actually true.1760

What kind of error is that?1765

That is a false alarm or type 1 error.1768
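As a quick check of this example, here is a sketch of the same reasoning in Python (scipy is my assumption; the lesson reads the critical t from a table):

```python
# Two-tailed critical t for Example 2: alpha = .01, n = 20, so df = 19.
from scipy import stats

alpha, n = 0.01, 20
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # about 2.861
print(round(t_crit, 3))

for t_obtained in (2.8, 2.869):
    # Rejecting exposes you only to a false alarm (type 1);
    # failing to reject exposes you only to a miss (type 2).
    decision = "reject" if abs(t_obtained) > t_crit else "fail to reject"
    print(t_obtained, decision)
```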

Example 3, what is the danger of the type 1 error?1776

This is a more conceptual question. 1782

The danger is mistakenly concluding that there is no significant difference between the obtained mean and the hypothetical population mean. 1785

When you make a type 1 error you have rejected the null, but the null hypothesis is true.1794

Mistakenly concluding that there is no significant difference, that is not right, 1808

because you concluded that there is a significant difference; that is why you rejected the null.1814

Mistakenly concluding that there is a significant difference between the obtained mean and the hypothetical population mean.1818

That is true.1826

You mistakenly rejected the null and said there is a significant difference but you should not have done that.1829

Mistakenly being alarmed about a hypothesis when you should be calm.1838

That is nonsense.1843

Mistakenly calculating the wrong test score.1844

These errors are not errors that you can actually avoid.1847

These are not errors because we were sloppy. 1851

These are errors that are made because we do not know the real nature of the world. 1854

This is actually not what we are talking about when we are talking about type 1 or 2 errors.1860

Mistakenly choosing the wrong population standard deviation to calculate standard error, that is not it either.1865

These two are just regular old mistakes or errors in calculation.1872

They are not type 1 and 2 errors of hypothesis testing.1878

That is it for type 1 and 2 errors.1881

Thank you for using www.educator.com.1885

Hi, welcome to educator.com. 0000

We are going to talk about effect size and power. 0002

So effect size and power, 2 things you need to think about whenever you do hypothesis testing. 0005

So first effect size. 0011

We are going to talk about what effect size is by contrasting it with the t statistic. 0013

They actually have a lot in common but there is just one subtle difference that makes a huge difference. 0019

Then we are going to talk about the roles of effect size and why we need effect size. 0026

Then we are going to talk about power. 0032

What is it, why do we need it, and how do all these different things affect power: for instance sample size, 0035

effect size, variability, and alpha, the significance level. 0042

So first things first, just a review of what the sample T really means. 0048

So a lot of times people just memorize the T formula, it is you know the X bar minus mu over standard error but think about what this actually means. 0056

So T equals X bar minus mu over the standard error. 0070

And I will write that as s sub x bar. 0076

What this will end up giving you is this distance so the distance between your sample and your hypothesized mu. 0079

And when you divide by standard error you get how many standard errors you need in order to get from x bar to your mu. 0087

So you get distance in terms of standard error. 0097

So distance in terms of standard error. 0102

And you want to think of it as: instead of using feet or inches or number of friends, we get distance in units of standard error. 0111

So whatever your standard error is for instance here that looks about right, because this is the normal 0123

distribution that should be about 68% so that is the standard error. 0132

Your t is how many of these you need in order to get to your x bar. 0139

So this might be like a T of 3 1/2, 3 1/2 standard errors away gets you from mu to your sample difference and so this is the case of the two sample t-test. 0146

So independent samples or paired samples, where we know the mu is zero. 0164

So this is sort of the concept behind the T statistic. 0169

Now here is the problem with this T statistic. 0175

It is actually pretty sensitive to N. 0181

So let us say you have a difference that is going to stay the same, a difference between, let us say, 10 and 0. 0185

So we hold that difference fixed. 0199

If you have a very, very large N then your SDOM becomes a lot skinnier. 0202

And because of that your standard error is also going to shrink; the standard error shrinks as N grows. 0212

And because of that, even though we have not changed anything about this mean, about the X bar or mu, 0226

by shrinking our standard error we made our T quite large. 0237

So all of a sudden we are, say, 6 standard errors away, but we really have not changed the picture. 0244

So that is actually a problem that T is so highly affected by N. 0252

The problem with that is that you could artificially make a difference between means, look statistically significant by having a very large N. 0258

So we need something that tells us this distance that is less affected by N, and that is where effect size comes in. 0268

So in effect size what we are doing is we want to know the distance in terms of something that is not so affected by N. 0278

And in fact we are going to use the population standard deviation because let us think about T. 0288

So that is X bar minus mu over standard error. 0295

So contrast that with looking at the distance in terms of the standard deviation of the population; what would that look like? 0303

Well, we could actually derive the formula ourselves. 0307

We want that distance in terms of the raw score, number of inches or number of problems correct or whatever it is, 0321

but instead of the standard error we would just use s, or if you had it you would use sigma. 0328

So you could think of this as the estimated Sigma and this is like the real deal Sigma. 0341

And that is what effect size is, and effect size is often symbolized by the letters d and g. 0349

d is reserved for when you have sigma; g is used for when you use s. 0360
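A small sketch may make the contrast concrete (hypothetical data; Python and numpy are my assumptions, since the course works in Excel):

```python
# Effect size g uses s; the t statistic uses the standard error s / sqrt(n).
import numpy as np

sample = np.array([12.1, 9.8, 11.4, 10.9, 13.0, 10.2])  # hypothetical scores
mu0 = 10.0                                              # hypothesized mean

distance = sample.mean() - mu0
s = sample.std(ddof=1)                     # estimated sigma

g = distance / s                           # Hedges' g (distance in s units)
t = distance / (s / np.sqrt(len(sample)))  # t (distance in standard-error units)
# If sigma itself were known, Cohen's d would be distance / sigma.
print(g, t)
```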

Now let us talk about the roles of effect size. 0367

The nice thing about effect size is that the N does not matter as much whether you have a small sample or large sample the effect size stays similar. 0373

In test statistics such as t or z, the N matters quite a bit; let us think again about why. 0384

So far I have been writing the t statistic as distance over standard error, but let us think about what standard error is. 0396

Standard error is S divided by the square root of N, now as N gets bigger and bigger so let us think about N getting bigger. 0406

This whole thing in the denominator becomes smaller and smaller. 0417

And when you divide some distance, positive or negative, by a small number, then 0430

you end up getting a more extreme value. 0441

By more extreme I mean way more positive or way more negative. 0448

So the t statistic is very, very sensitive to N, and so is the z, because the only difference with z is that instead of s we use sigma. 0463

And so the same logic applies, but for the effect sizes d and g we do not divide by the square root of N, so N does not really have as much influence. 0474
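To see that sensitivity in action, here is a simulation sketch (the population parameters here are made up):

```python
# As n grows, t grows roughly like sqrt(n), while the effect size g stays put.
import numpy as np

rng = np.random.default_rng(0)
mu0 = 10.0                       # null-hypothesized mean
for n in (10, 100, 1000):
    sample = rng.normal(loc=10.5, scale=2.0, size=n)  # true mean is 10.5
    s = sample.std(ddof=1)
    distance = sample.mean() - mu0
    g = distance / s                      # barely changes with n
    t = distance / (s / np.sqrt(n))       # becomes more and more extreme
    print(n, round(g, 2), round(t, 2))
```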

Okay, so one thing to remember is: if you know sigma, use Cohen's d; if you need to estimate the standard deviation from the sample, s, you want to use Hedges' g. 0488

Okay so now you know what effect size is and it is nice that it is not as affected by N but why do we need it? 0500

Well, effect size is the statistic we use to interpret practical significance; for instance, we might have some very small difference between group 1 and group 2, say males and females 0510

on some game or task, a very tiny difference, let us just say males are ahead by .0001 points. 0527

And practically it sort of does not matter, but if you have a large 0540

enough N you could get a small enough standard error that you can make that tiny difference seem like a 0549

big deal, and you can imagine that would be a sort of odd situation. 0556

We take a difference that sort of does not matter but then make a big deal out of it because of some fancy statistics we did. 0565

Well, the effect size is not going to be affected by N, and so it is going to give you a more straightforward 0573

measure of whether this difference is big enough to care about. 0580

It is not going to tell you whether it was significant or not based on hypothesis testing, but it can give you0584

the idea of practical significance, and here we are using the everyday sense of significant, as in important. 0592

It will tell you about practical importance, not statistical outlier-ness; it is talking0601

about just regular old practical importance, and the way you can think about this is: is this difference worth noticing? 0614

Is that worth even doing statistics on? 0623

The thing about hypothesis testing is that it can be deceiving; a very large sample size can lead to a 0625

statistically significant difference, one of these outlier differences, that we really do not care about, that just has no practical significance. 0632

So here although we have been trying to talk about this again and again trying to sort of clarify that 0641

statistically significant does not mean important it just means it lies outside of our expectation. 0648

It is important to realize once again that statistical significance does not equal practical significance. 0656

This is sort of talking about how important something is and this is just sort of saying, does it stand out? 0663

Does our X bar our sample actually stand out? 0672

Okay now let us move on to power. 0679

What is power? 0684

Well, to see why, we really need to go back to our understanding of the two types of errors. 0685

Remember in hypothesis testing we can make an error in two different ways. 0691

One is the false alarm error and we set that false alarm error rate by Alpha and the other kind of error is 0695

this incorrect decision that we can make called the miss. 0704

A miss is when we fail to reject the null hypothesis but we really should reject it. 0708

And that is signified by beta, by the term beta. 0717

Now when the null hypothesis is true, if we have already set our probability of making an 0725

incorrect decision, then just by subtraction we can figure out our probability of making a correct decision; so if our probability of making an incorrect decision is .05, the probability of making a correct decision is 95%, 1 - .05. 0739

In the same way when the null hypothesis is actually false we could figure out our probability of actually 0756

making a correct decision by just subtracting our probability of making incorrect decision from one. 0764

So this would be one minus beta. 0772

In that way these two decisions add up to a probability of one, and these two decisions add up to a probability of one. 0775

But in reality only one of these worlds is true that is why they both have a probability of 1. 0787

We just have no idea whether this one is true or this one is true, and no one can ever really say; but that is a philosophical question. 0794

So given this picture, power resides here; this quadrant is what we think of as power. 0802

Now power is just this idea: given that the null hypothesis is actually false, pretend we 0811

ignore this part, just ignore this entire world; given that the null hypothesis is false, what0824

is our probability of actually rejecting the null hypothesis? That is what we call power. 0835

So think of this as the probability of rejecting the null when the null is false. 0843

So why do we need power, why do we need 1 – beta? 0855

Well, here it is going to come back, those concepts come right back. 0864

Remember the idea that you know sometimes we wanted to detect some sort of disease right and we 0873

might give a test like for instance we want to know whether someone has HIV and so we give them a blood test to figure out, do they have HIV. 0879

Now these tests are not perfect, and so there is some chance that they will detect the disease and some chance that they will make a mistake. 0888

There are two ways of thinking about this prediction. 0897

One is what we call positive predictive value: what is the probability that someone has the disease, for instance HIV, given that they test positive? 0903

Well this will help us know what is the chance that they actually have the disease once we know their test score. 0916

In this world we know their test scores and we want to know what is the probability that they have the disease. 0926

On the other hand we have what is called sensitivity. 0932

Sensitivity thinks about the world in a slightly flipped way. 0936

Given that this person has the disease, whatever disease such as HIV, what is the probability that they will actually test positive? 0940

And these two actually give us very different worlds. 0950

In one world the given is that they have a positive test, and what is the probability that they have the disease versus no disease. 0954

In this scenario the given is very different. 0967

The given is that they actually have the disease. 0973

Given that what is the probability that they will test positive versus negative? 0976

And so they are looking at this or they are looking at this. 0983

Now power is basically the probability of getting a hit, the probability of rejecting the null hypothesis given that the null hypothesis is actually false, so it is actually wrong. 0988

Is this more like PPV, positive predictive value? 1004

Or is it more like sensitivity? 1010

Well let us think about this. 1012

In this world, the given reality is that the null is false. 1014

We need to reject it. 1022

What is the probability that it will actually be rejected? 1027

So reject or fail to reject. 1032

Well, one way of making the comparison is to consider: what is the thing that we 1040

do not know in these two scenarios? 1052

We do not really know if they actually have HIV. 1055

We know their test we know that their test is either positive or negative and the test is uncertain but 1059

whether they actually have HIV or not, that does not have uncertainty, it is just that we do not know what it is. 1065

This is sort of like HIV in that way. 1074

This is the reality so HIV is the reality and this, this is the test results. 1078

This is also the reality and these are the results of hypothesis testing. 1088

And so in that way this picture is much more like sensitivity. 1101

And really when we apply the word sensitivity we see a whole new way of looking at power. 1107

Power is the idea how sensitive is your hypothesis test when there really is something to detect, can it detect it? 1116

When there really is HIV, can your test detect it? 1125

When the null hypothesis really is false, can your test detect it? 1129

That is the question that power is asking. 1136

Okay, if you calculate power, is there a nice little formula for it? 1139

Well power is more like the tables in the back of your book. 1145

You cannot calculate it with one simple, straightforward formula. 1148

There is actually a more complex formula that involves calculus, but we can simulate power for a whole 1153

bunch of different scenarios, and those scenarios all depend on alpha, effect size, and also variability and 1161

sample size; because of that, power is often found through simulation. 1169

So I am not going to focus on calculating power; instead I am going to try to give you a conceptual understanding of power. 1174

Now there is often a desired level of power, and sometimes you may be working with computer programs that calculate power for you. 1187

A typical level of power to shoot for is .8 or above, but I want you to know how power interacts with all these things. 1195

All of these things actually go into the calculation of power, but I want you to know it at the conceptual level. 1206

So how does alpha, the significance level, affect power; how does effect size, d or g, affect power; how 1212

does variability, s or s squared, affect power; and how does sample size affect power? 1224

Okay so first thing is how does Alpha affect power? 1234

Well, here in this picture I have shown you two distributions. 1241

You could think of this one is the null distribution and this one as the alternative distribution. 1247

And notice that both of these distributions up here are exactly the same as the ones down here; I just copied and pasted. 1254

The only thing that is different is not their means or the actual distribution. 1263

The only thing that is different is the cut off. 1276

Here the cutoff score is right here and this is the alpha, and here the cutoff score has been moved closer towards the population mean. 1279

And now we have a huge Alpha. 1297

So let us just assign some numbers here. 1301

I am just guessing that maybe this looks like alpha = .05, something we are more used to seeing, but this looks like maybe alpha = .15. 1304

What happens when we increase our Alpha? 1317

Our alpha has gotten bigger; what happens to power? 1323

Well, it might be helpful to think about what power might be. 1326

In this picture, remember, power is the probability of rejecting the null hypothesis when the null hypothesis1331

is actually false and here we often reject when it is more extreme than the cutoff value when your X bar is 1344

more extreme so these are the rejections of everything on this side. 1358

All of this stuff is reject, reject the null. 1362

And we want to look at the distribution where the null hypothesis is false a.k.a. the alternative hypothesis. 1369

So really we are looking at this big section right here; this big section, that is power, 1 - beta,1380

and given that, you could also figure out what beta is. 1396

And beta is our error rate for misses. 1400

When we fail to reject, fail to reject but the alternative hypothesis is true or the other way we could say it is the null hypothesis is false. 1406

So what happens to power when Alpha becomes bigger? 1421

Well, let us color in power right here, and it seems like there is more of this distribution that has been colored in than this one. 1430

So this part has been sort of added on; it used to be just this that equals power, but now we have also added on this section. 1440

So as Alpha increases, power also increases. 1453

And hopefully you can see that from this picture. 1464

Now imagine moving Alpha out this way so decreasing Alpha. 1467

If we decrease alpha, then this power portion of the distribution will become smaller, so 1472

the opposite, the counterpoint to this, is also true: as alpha decreases, the power also decreases. 1483
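Here is a sketch of that relationship with numbers attached, for a one-tailed z test (the shift of 1 standard error between the null and alternative means is an arbitrary assumption of mine):

```python
# Power = area of the alternative distribution beyond the cutoff set under the null.
from scipy import stats

shift = 1.0   # distance between null and alternative means, in standard-error units
for alpha in (0.05, 0.15):
    z_crit = stats.norm.ppf(1 - alpha)           # cutoff under the null
    power = 1 - stats.norm.cdf(z_crit - shift)   # alternative area past the cutoff
    print(alpha, round(power, 3))                # larger alpha -> larger power
```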

But you might be asking yourself: then why can we not just increase alpha so we can increase power, right? 1496

Well, remember what Alpha is, alpha is your false alarm rate. 1511

So when you increase Alpha, you also increase your false alarm rate. 1516

So at the same time, if you increase your false alarm rate you are increasing power. 1521

And so this often is not a good way to increase power. 1526

But you should still know what the relationship is. 1533

How about effect size, how does effect size affect power? 1538

Well remember, effect size is really sort of a, you can think of it roughly as this distance between the X bar and the mu. 1544

We are really looking at that distance in terms of standard deviation of the population. 1555

How does effect size affect power? 1561

Here I have drawn the same pictures, same cutoff, except I have moved this alternative 1564

distribution a little bit out to be more extreme, so that we now have a larger distance. 1574

And so this is a bigger effect size; so what happens when we increase the effect size and1587

we keep everything else constant: the cutoff, the null hypothesis, everything? 1600

Well, let us colour in this, and colour in this. 1605

Which of these two blue areas is larger? 1612

Obviously this one. 1616

This power is bigger than this power, and it is because we have a larger effect size; so another thing we have1618

learned is that a larger effect size leads to larger power, so as you increase effect size you increase power, but here is the kicker. 1627

Can you increase effect size? 1646

Can you do anything about the effect size? 1649

Is there anything you could do? 1651

Not really. 1655

Effect size is something that is sort of out there in the data; you cannot actually do anything to make it 1657

bigger, but you should know that if you happen to have a larger effect size then you have more power than if your study has a small effect size. 1663

Okay so how does variability and sample size affect power? 1674

Now, the reason I put these two things together is that, remember, these distributions are SDOMs, right? 1686

And so the variability in an SDOM is actually standard error, and standard error is s divided by the square root of N. 1696

So both variability and sample size will matter in power. 1711

And so here I want to show you how. 1720

Okay, so here I have drawn the same means of the populations of the SDOMs, and remember here we have 1723

the null hypothesis and the alternative hypothesis distribution. 1733

I have drawn the same pictures down here and I kept the same Alpha about .05. 1739

So I had to move the cut off a little just so that I could color in .05 but something has changed and that is this. 1749

This one is a lot skinnier than this one; that is less variability, so the SDOM has decreased in variability. 1760

So here standard error has decreased, and we have sharper SDOMs. 1772

Still normally distributed, just sharper. 1786

And so when we look at these skinnier distributions, let us look at the consequences for power. 1790

Here let us color in power, and color in power right here; it also helps to see what beta is. 1798

So here we have quite a large beta, and here we have a tiny beta. 1810

And so that makes you realize that the 1 - beta up here, the power up here, 1815

is smaller than the 1 - beta down here, because remember we are talking about proportions. 1832

This whole thing adds up to 1; now this one might look smaller to you, but the whole thing adds up to one. 1849

If this is a really small proportion, let us put a number on it, say less than .05. 1855

Let us say .02. 1861

This one looks bigger than .05, so let us say .08. 1863

Then 1 - beta here would be 92% and 1 - beta here would be 98%, so this is a larger power than this. 1869

So one thing we have seen is that as standard error decreases, power increases; this is what we call a negative relationship. 1879

As one goes down the other goes up, and vice versa: as standard error increases, as these distributions 1899

become fatter and fatter, power will decrease, the opposite way overall. 1908

Now because we already know this about standard error, we can actually say something about sample 1913

size, because sample size also has a negative relationship with standard error: 1921

as sample size gets bigger and bigger, standard error gets smaller and smaller, 1929

and so sample size actually has a positive relationship with power; as sample size increases and 1936

therefore standard error decreases, power increases. 1947

And so we can figure that out just by reasoning through what standard error really means. 1954
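That reasoning can be sketched numerically too; here the true difference and sigma are made-up values, with a one-tailed alpha of .05:

```python
# Larger n -> smaller standard error -> more power.
import numpy as np
from scipy import stats

diff, sigma, alpha = 0.5, 2.0, 0.05   # hypothetical true difference and sigma
for n in (20, 80, 320):
    se = sigma / np.sqrt(n)                         # standard error shrinks with n
    z_crit = stats.norm.ppf(1 - alpha)              # cutoff under the null
    power = 1 - stats.norm.cdf(z_crit - diff / se)  # alternative area past cutoff
    print(n, round(power, 3))                       # power climbs toward 1
```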

Okay, so how do we increase power? Because oftentimes power, or sensitivity, is really a good thing. 1965

We want to be able to have experiments and studies that have a lot of power that would be a good hypothesis testing adventure to embark on. 1976

How do we actually increase it? 1987

Well can we just do this by changing Alpha? 1989

Well, the problem with this is that you get some consequences, namely that false alarms increase. 1994

So if you increase power with this strategy you are also going to increase false alarms; that is very dangerous, so it is not something we want to do. 2010

That is type 1 error so that is something we do not want to do. 2020

So you do not want to change power by changing Alpha although that is something under our control. 2023

Now we could try to change effect size, but effect size is something that is already sort of true in2029

the world; we would have to mess with the standard deviation of the 2039

population, and we cannot mess with that, so this is actually something that is impossible to do. 2045

So that is one thing that we wish we could do but cannot do anything about. 2052

Can we change the variability in our sample, can we change the variability? 2067

Indirectly, we can. 2072

There is really one way to be able to change standard error. 2075

Can we do this by changing the standard deviation of the population? 2081

No, we cannot do that, that is out of our control. 2085

But we can change N. 2090

We can collect more data instead of having 40 subjects or cases in our study, we can have 80. 2093

And in that way we can increase our power; so really the one tool that is 2102

available to us as researchers in order to affect power is affecting sample size. 2110

None of these other things are really that appealing to us. 2116

We cannot change population variability, we cannot change effect size and if we change Alpha then that is a dangerous option. 2120

And so what we have left here is affecting sample size. 2133

Now let us go on to some examples. 2139

A statistical test is designed with a significance level of .05 and a sample size of 100. 2144

A similar test of the same null hypothesis is designed with a significance level of .1 and a sample size of 100. 2149

If the null hypothesis is false which test has greater power? 2160

Okay so let us think about this. 2165

Here we have test one, where alpha = .05. 2168

Test 2 the other test, alpha = .10 so here Alpha is larger. 2178

Remember alpha is moving that critical test statistic so we have taken this and let us have this Alpha right2190

here and what we do is we moved it over, moved it over here, well not that far but just so you can get the idea. 2205

And now our alpha is much bigger, but what we see is that our 1 - beta has also gotten a lot bigger. 2217

So here we see that power increases, but we should also note that now we have a higher tolerance for false 2231

alarms, so we will also have more false alarms; we will have more times when we reject the null, period, so we 2244

will reject the null lots of times; sometimes we will be right, sometimes we will be wrong, and both of these things increase. 2251

Example 2. 2258

Suppose the medical researcher was to test the claim of the pharmaceutical company that the mean number of side effects per patient for new drug is 6. 2261

The researcher is pretty sure the true number of side effects is between 8 and 10, so it is like the 2270

pharmaceutical company is not telling the whole truth. 2277

He chooses a random sample of patients reporting side effects and chooses the 5% level of significance, so alpha = .05. 2281

Is the power of the test larger if the true number of side effects is 8 or 10? 2288

So let us sort of think about okay what is the question really asking and then explain. 2295

So whether the true number of side effects is 8 or 10 is really talking about your mu. 2302

And actually, here we are talking about the alternative mu because the null mu is probably going to be six. 2309

So here is the null hypothesis. 2325

The null hypothesis is that the pharmaceutical company is telling the truth. 2330

So the null hypothesis mu is six. 2334

Now, if the alternative mu is 8, it will be, maybe about here but if the real alternative population is actually2337

a 10, so the other alternative, a 10, it is way out here. 2350

In which of these scenarios is the power larger? 2359

Well, even if we set a very conservative critical test statistic, here is our power if 8 is the true number of2365

side effects, but here is the power, almost 100%, if 10 is the true number of side effects; and remember, 2382

I am just drawing these with some standard error, I do not care what it is, it just has to be the same across all of them. 2395

And so here we see that wow, it is way out farther, more of this is going to be covered when we reject the null. 2402

And so we see that the power is larger if the true number of side effects is 10. 2413

And the reason for that is because this is really a question about effect size. 2420

That is, the true distance between our null hypothesis distribution and our alternative hypothesis distribution. 2428

We know that as effect size goes up, power also goes up, it is easier to detect; but we cannot do anything about it, we cannot actually make effect size bigger. 2440

Example 3. 2455

Why are both the z and t statistics affected by N while Cohen's d and Hedges' g are not? Then, what do the z, 2458

t, d and g all have in common? And finally, what commonality do z and d share? 2469

What commonality do t and g share? 2478

Well, I am going to draw this as sort of a Venn diagram. 2481

So let me draw z here, and here I will draw t, and then here I will draw d, and, it is going to get crazy, here I will draw g. 2484

Now, if it helps, you might want to think about what these guys mean: for z, it is the distance divided by the 2511

standard error derived from the population standard deviation. 2523

And here, for t, we have the standard error derived from the estimated population standard 2538

deviation, whereas in d we have the same distance just divided by sigma, and here, for g, the same distance divided by s. 2547

Okay, so why are both the z and t statistics affected by N while Cohen's d and Hedges' g are not? 2570

Well, the thing that these two have in common is that these are about standard error and standard error is 2579

either Sigma divided by square root of N or S divided by square root of N and it is this dividing by square 2587

root of N that makes these two so affected by N. 2602

And so it is really because they are distances in terms of standard error. 2607

So what do the z, t, d and g all have in common? That is the little region right here; what do they all have in common? 2614

Well they all have this thing in common. 2627

So they are all about the distance between sample and population. 2629

So it is all about that distance. 2641

Some of them are in terms of standard error and some of them are in terms of population standard deviation. 2644

So what commonality do z and d share? 2651

Well, that is going to be right in here. 2656

What do they have in common, they both rely on actually having Sigma. 2658

T and G both rely only on the sample estimate of the population standard deviation. 2663

It looks a little messy, but hopefully this makes a little more sense. 2671

Thanks for using educator.com for effect size and power.2676

Hi, welcome to educator.com.0000

We are going to talk about F distributions today.0002

So first we are going to review the other distributions we have covered besides F, namely the z and the t.0004

Then we are going to introduce the F statistic also called the variance ratio.0011

Then we are going to talk about the distribution of all these Fs, the distribution of all these ratios, and finally what alpha and the p value mean in an F distribution.0017

Because eventually we are going to do hypothesis testing with the F statistic.0029

Okay, first, these other distributions: we know how to calculate the z statistic, and we also know how to 0031

find the probability of such a z value in a normal distribution.0044

But what is a z distribution?0050

Well, imagine this.0053

Take a data set, let us just call it a population.0056

We take a data set, I will just draw a circle, and we take some sort of sample from it, of size n.0059

And we actually calculate the z statistic for this sample, so we calculate the whole thing: take the mean of this little0068

sample, minus the mu, divided by the standard error.0087

So you do that and then you plot the Z.0093

So imagine you replace all of that sample again, with replacement, and you draw another sample and 0098

you do this again, and then you plot that guy and you dump it back in, you draw another sample, you calculate z.0115

You do that over and over again many times, and what you end up getting is a normal distribution over time.0123

So many times if you plot Z you get a normal distribution and because of that we also call this a Z 0136

distribution, because the distribution is made up of a whole bunch of zs, and it has the shape of a normal 0150

distribution so that is what we call a Z distribution.0159

Now, if you take that same idea and you do it again: you get a sample, and instead of calculating z for that sample0162

you calculate t; if you do this, and then you plot that, and you do that over and over and over again, you get a t distribution.0175

And this resulting t distribution follows the rules of the t distribution, where its shape depends on the degrees of0195

freedom: the lower your degrees of freedom, the more variable it is, but the bigger 0209

your degrees of freedom, the less variable and the more normal it looks.0217

And so that is what we call the t-distribution.0222

So that is how Z statistic and the Z distribution sort of go together.0225

And this is how the T statistic and the t-distribution sort of go together.0232

And you just have to imagine taking a whole bunch of these samples, calculating whatever statistic, and 0237

plotting that statistic, and then looking at the shape of those statistics.0245

So really what this is, is a sampling distribution of z.0250

And this is a sampling distribution of t; instead of using means or z scores to plot, you instead use the t statistic.0260

And you could do that for anything: you could do it for the standard deviation, you could do it for the inter-0281

quartile range; you can make the sampling distribution of anything you want.0286

That is important to keep in mind as we go into F distribution.0290
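That sampling idea is easy to simulate; here is a minimal sketch (the population and sample size are arbitrary assumptions of mine):

```python
# Draw many samples, compute t for each, and the pile of t values
# approximates a t distribution with n - 1 degrees of freedom.
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 50.0, 5, 10_000
ts = []
for _ in range(reps):
    sample = rng.normal(loc=mu, scale=8.0, size=n)   # hypothetical population
    se = sample.std(ddof=1) / np.sqrt(n)
    ts.append((sample.mean() - mu) / se)
# With df = 4 the simulated t values are more variable than a normal
# distribution; a histogram of ts would show the fatter tails.
print(np.mean(ts), np.std(ts))
```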

Okay so first thing is what is the F statistic?0293

We know how to calculate the t statistic and the z statistic; what is the F statistic?0301

Well, later on in these lessons we are going to come across what we call the ANOVA, the analysis of variance.0307

Analyze means to break down, and variance, well, you know what variance is, the spread, usually 0316

around the mean, of your data set; so when we analyze variance, we are going to be breaking down 0325

variance into its multiple components, and the F ratio happens to be a ratio of those component variances.0332

And so I just want you to get sort of the big idea behind the F ratio, not exactly how to calculate it; we will get0343

into the details of that later on, but the general concept.0352

So the F statistic usually involves this idea: we have, let us say, two samples, x1, x2, x3 and y1, y2, y3.0356

Now there is always some variation within a sample; within the xs there is some variation.0370

And within the ys there is some variation.0379

So there is definitely some variation but there is another variation here that we are really interested in.0385

We are really interested in the difference between these two things.0392

Between these two samples, so the F statistic really is taking those ideas and turning it into a ratio and here is what a ratio looks like.0397

It is really the between sample variance all over the within sample variance.0408

And remember, variance is always squared, the average squared distance away from the mean, and so because 0425

of that this is a squared number and this is a squared number; they are both positive, so this number is always going to be greater than zero.0433

There is no way that this number could be less than zero so the statistic is always going to be greater than zero.0442

Now another way to think about between sample variance and within sample variance is this.0449

Whenever we do these kinds of tests, we are really interested in the differences between the samples; that is really important to us.0454

But part of that difference is also going to be just inherent variation.0464

So sometimes there might be a difference between, let us say, men and women, or people who got a 0478

tutorial versus people who did not, right?0486

People who study for the test versus people who did not, people went to private school versus people with public school.0488

There might be some difference between them?0495

But that difference is also going to have variation.0497

So this between-sample variance often has inherent variation, just variance you cannot do anything about, 0500

inherent variation, plus the real difference, the effect size, between samples.0508

And notice that we keep using this word between, and that is to indicate that part; so between, that is the part that we are really interested in.0520

Over within-sample variance; and here there is inherent variation within the xs and within the ys, and that 0534

is not something we are interested in, but it is good to know how variable our little samples are.0557

Is everyone very similar to each other, or very different? We need to compare the difference between the samples to the difference within the samples.0565

So the within-sample variation is just inherent variation.0574

So these are all different ways of seeing the same thing, and the reason why I also like this last way is0583

because later on we are not just going to be talking about between-sample and within-sample differences; we are going to add onto those ideas.0593

The final way I want you to think about the F statistic is basically this.0601

Ultimately in hypothesis testing, we are going to want to know about differences between samples; that is the thing that we are really interested in.0608

So it is going to be the variation that we want to explain because that is the reason that we did our research in the first place.0616

All versus the variation we cannot explain, not with this design at least.0631

So in our experimental design we will have these two groups and hopefully these groups will be similar to 0646

each other but different, similar within the group but different between the groups.0653

And that is why in an F statistic we want this variation that we want to explain to be quite large, and this 0660

variation that we cannot explain or do anything about, that comes along for the ride, we want to be relatively small.0667

Okay, so let us do a little thinking about the F ratio.0676

Now if we had a very big difference between the groups what kind of F ratio would we have?0679

Would it be greater than one, or less than one?0688

Well, if our variation between the groups is bigger than the variation within the groups, then we should have 0690

a very large F, an F that is greater than one, at least greater than one but maybe a lot greater than 1; it could be 2 over 1 or 2 over .5.0697

Any of those values show that the between-sample variance is a lot larger than the within-sample variance.0708

And so if there is a lot of within-sample variance, then that competes with the between-sample variance; so 0715

let us say there is a big between-sample difference, but there is also a lot of difference within the 0728

samples themselves; it sort of evens out, and you might see an F that is smaller, or even less than one if this one is bigger than this one.0734

So that is how you can sort of think about the F statistic.0745

Now imagine getting that F statistic over and over and over again from the population and plotting a sampling distribution of F statistics.0748

What would you get?0761

Well, remember that F cannot go below zero because both numbers are going to be positive so the F really stops at zero.0763

But this is what the F distribution ends up looking like.0774

This is a skewed distribution and it has a positive tail.0778

That means it goes for a really long time on the positive side.0786

It is one-sided, so it is not symmetrical; it is actually asymmetrical, there is only a positive side, and that is0792

because it is a proportion of variances, and variances are positive.0803

And like t, it is a family of distributions, and you are going to be able to find the particular F distribution you are0810

working with by looking at the degrees of freedom in the numerator, the one about between-sample 0819

differences, and by looking at the degrees of freedom in the denominator, the sort of leftover or within-sample variation.0829

So you are going to need both of those numbers in order to find out which F distribution you are working with, 0847

and in Excel, it will actually ask you for the degrees of freedom for the numerator and denominator.0854

Now let us talk a little bit about what Alpha means here.0861

For alpha here, we will still need a cutoff point, a critical F instead of a critical t or z.0866

You will still need a critical F and the Alpha will still be our probability of making false alarm given that the null distribution is true.0877

This is the null F distribution just saying.0890

And the alpha would be the same thing, the probability of a false alarm.0894

So now that you have a picture of that alpha, let us talk about what that alpha actually means.0899

If you go back to the original idea of alpha, the original idea is that cutoff level.0910

So it is our level of tolerance for false alarms.0924

It is the false alarm probability that we will tolerate.0930

We want Alpha to be very low.0945

Now our alpha will be low, a smaller alpha than this one, if our critical F is very big.0948

And what does it mean for F to be large?0962

This means our between-sample variability is greater than our within-sample variability.0964

And that is what it means; as long as this is much larger than this, we have a large F, and that is going0984

to mean a smaller chance of a false alarm.0992

Now the Alpha is the cutoff level that we are going to set as the significance, the level that we will tolerate.0998

So what is the P value?1007

So the p value will be: given our sample's F, the probability that we would get this F or higher by chance.1009

Actually, it will be easier to say it this way: the p value is the false alarm probability for F1030

values, F statistics, equal to or more extreme than our sample's F, the F from our sample.1058

So it is the probability that we would get an F at least as large as the one that we got, the F from the sample.1080

So this one comes from the F value once we have our sample statistic, and this one is the false alarm probability that we are willing to tolerate.1087

So it is the same idea as with t statistics, the alpha and the p value; we are just now applying it to a slightly different looking distribution.1101
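For reference, here is a sketch of finding a critical F and a p value in Python rather than in Excel or a table (the degrees of freedom and the sample F below are hypothetical values of mine):

```python
from scipy import stats

df_between, df_within = 1, 6   # numerator and denominator df (hypothetical)
alpha = 0.05

f_crit = stats.f.ppf(1 - alpha, df_between, df_within)  # cutoff for rejection
f_sample = 12.3                                         # hypothetical sample F
p_value = stats.f.sf(f_sample, df_between, df_within)   # P(F >= f_sample | null)
print(f_crit, p_value)
```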

Now examples.1112

Why does the F distribution stop at zero but go on in the positive direction until infinity?1117

Well, we know why it stops at zero.1122

The F statistic is a ratio of two positive numbers, and we know that they are positive because variance is squared, thus making it always positive.1125

But it goes on until infinity because there is no rule that says the numerator can only be so much bigger 1148

than the denominator; the numerator can be infinitely bigger than the denominator, so it can go on forever and ever.1159

Example 2: in an F test, also called the one-way ANOVA, which we are going to talk about in a little bit, the P1168

value comes up; you did an F test and the P value is .034. What is the best interpretation of this result?1177

It is plausible that all the samples are roughly equal.1186

So here we are thinking about, let us say, two samples, and we compare this versus this.1191

So the F value is between variation over within variation, and if we have a big 1203

enough sample F, then we can have a small P value like .034.1229

So is it possible that all the samples are roughly equal?1239

No because we seem to have a large enough between sample variance so I would say no to that one.1247

It is possible that all the sample variances are roughly equal.1256

Well, that also is not necessarily what this means; it could be that these within variations are very similar to1261

each other, but that is not what this P value is talking about.1269

The within sample variation is much larger than the between sample variation.1272

Well, if that were true we would have a small F instead, so it is not this one.1278

The between sample variation is much larger than within.1283

So D is our answer.1286

Example 3, consider the height of the following pairs of samples.1288

Which will have the largest F.1295

Which will have the smallest F.1297

Okay let us think about this.1299

So players from NBA team Lakers versus adults in LA.1301

Well, if we draw those two populations, Lakers versus LA.1306

This one probably has a lot of variance, a lot of variance here, that is a lot of people; this one probably has a very1313

small variance, but there is probably a pretty sizable difference between those two groups of people, like the1321

average adult versus the Lakers, who are probably all amazingly tall.1330

Well so that is the picture here.1335

Will this one have a larger F, or will it have a smaller one?1338

Well, what about adults in San Francisco versus adults in LA.1341

Well, these two probably both have a lot of within sample variation; there are lots of adults in San Francisco, lots of1348

adults in LA, they are all different from each other, but their averages should probably be similar; it is1355

not like San Francisco is known for tall people or LA is known for tall people, so the difference 1362

between the groups will probably be very small but the within group variability will be very large, so I would1368

guess this would actually have a pretty small F. And what about this one?1375

This one is players from one NBA team, the Lakers, versus players from another team, and so here we might think 1381

Lakers versus Clippers, and there is probably a pretty small variation here; probably everybody is about 6 feet1393

tall, so they are probably all super tall, so there is not a lot of variation, but they are also probably similar across the teams too.1401

So because probably the average height on the Lakers is probably similar to the average height on the 1416

Clippers just that they are both tall groups of people so which one of these will probably have the largest F?1423

I think the biggest difference between the groups might actually be this one.1430

So I would guess I would go with this one, given that I am not really sure about the variances here.1436

The variance is smaller, but I am not sure how to compare these so far.1447

So this is the largest F, and I am just going by its having the largest numerator for sure.1452

Well, which will have the smallest F?1460

And for the smallest F, I would probably go with this one, because not only does it have a small numerator but it 1464

has an extremely large denominator, so I would say this one would definitely have the smallest F.1472

So that is the end of F distribution.1478

See you next time for ANOVAs on educator.com.1483

Hi, welcome to educator.com. 0000

We are going to talk about ANOVA with independent samples today. 0002

So first we need to talk a little bit about why we need to introduce the ANOVA. 0005

We had been doing so well at t-test so far. 0011

Well, there are some limitations of the t-test and that is why we are going to need an ANOVA here. 0013

An ANOVA is also called the analysis of variance, and the analysis of variance could really also be thought of as the omnibus hypothesis test. 0020

So still, hypothesis test just like the t-test but it is the omnibus hypothesis test, we are going to talk what that means. 0032

We are going to need to go over a little bit of notation in order to break down with the ANOVA details. 0041

And then we are really going to get to the nitty-gritty of partitioning or analyzing variance, 0047

getting down to breaking apart variance into its component parts. 0055

Then we are going to build up the F statistic made up of those bits and pieces of variances and 0059

then finally talk about how that relates to the F distribution and hypothesis testing. 0066

Okay so first thing, the limitations of the t-test. 0071

Well here is a common problem like I want to know this question. 0077

Who uploads more pictures to facebook? 0083

The Latino users, white users, Asian users or black Facebook users? 0086

Which of these racial or ethnic groups uploads more pictures to facebook? 0091

Well, let us see what would happen if we use independent samples t-test? 0098

What would we have to do? 0101

Well, we have to compare Latinos to whites, Latinos to Asians, Latinos to blacks, and whites and Asians, and whites and blacks, and Asians and blacks. 0104

All of a sudden we have to do 6 different independent samples t-tests. 0111

That is a lot of tiny, tiny little t-tests, and really the more t-tests you do, the more you increase your likelihood of type 1 error. 0118

Previously, to calculate type 1 error we looked at one minus the probability that you would be 0127

correct, so one minus the probability of being right, and that came to something like .05, let us say, right? 0135

But now that we want to calculate the probability of type 1 error for six t-tests we have to think 0144

back to our probability principles, but really it is just going to look something like this. 0152

One minus your correct rate raised to the sixth power, and that has got to be a much higher, 0157

much higher type 1 error rate than you really want. 0167
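
To put a hypothetical number on that, here is a rough illustration that assumes the six tests are independent and that each has alpha = .05:

```python
# Familywise type 1 error for six tests, assuming independence and alpha = .05.
alpha = 0.05
familywise = 1 - (1 - alpha) ** 6
print(round(familywise, 3))   # about 0.265, far above the .05 we wanted
```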

So the problem is that the more t-tests you have, the bigger the chance of your type 1 0174

error and even non-mathematically you could think about this. 0181

Any time you do a t-test you could reject the null, every time you reject the null you have the 0186

possibility of making a type 1 error and so if you reject the null six times then you have increased 0193

your type 1 error rate because you are just rejecting more null hypotheses. 0201

So you should know there are two major limitations of having many, many tiny little t-tests. 0206

So you have six separate t-tests; one limitation is the increased likelihood of type 1 error, and that is bad. 0213

We do not want a false alarm but there is a second problem, you are not using the full set of data in order to estimate S. 0220

Remember how before we talked about how to estimate the population standard deviation? 0231

Well, it would be nice if we had a good estimate of the population standard deviation and you 0237

know when you have a better estimate of the population standard deviation? 0242

When you have got more data, right? When you do a t-test, for instance, with Latinos and white people, 0246

then you are ignoring your luscious and totally usable data from your Asian and black American 0253

samples, so that is a problem: you are ignoring some of your data in order to estimate s, and 0260

you are estimating s a bunch of different little times instead of having one giant estimate of s, 0267

which would be a better way to go, so both of these are major limitations of using many, many little t-tests. 0274

So back in the day statisticians knew that there was this problem, and Ronald Fisher came up with a 0282

solution, and his solution is called an F test, F for Fisher. 0291

If you think of a new statistic you could name it after yourself. 0296

So he thought of something called an F test but this F test also includes a new way of thinking 0302

about hypotheses and so the F test could also be thought of as an omnibus test and the way you 0308

could think about them is like the Lord of the Rings ring idea. 0315

It is one test to rule them all: instead of doing many, many tiny little tests, you do one test to 0319

decide once and for all if there is a difference. 0326

And because you have this one test you need one null hypothesis and here is what that null hypothesis is. 0329

You need to test whether all the samples belong to the same population or whether at least 0337

one belongs to a different population, because remember, the null hypothesis and the alternative 0346

hypothesis have to be like two sides of the same coin, so your null hypothesis is that they are all equal. 0351

The mu’s are all equal. 0359

They all came from exactly the same population. 0360

The other hypothesis the alternative hypothesis is that they are not all equal but let us think about what that means. 0363

That means at least two of them are different from each other, not that all of them are 0372

different from each other; it means at least one guy is different from one of these guys. 0377

That is it, that is all it means, that is all you can find out. 0382

So let us consider this situation let us say you have these three samples. 0386

Your null hypothesis would be that they all came from the same population. 0392

A1, A2 and A3, all the same population A; but if we reject that null hypothesis, what have we found out? 0399

We have found out that at least two of them differ; all three of them could differ from each other, or it could just be 2. 0413

It could be that A1 and A2 are the same, and A3 is different. 0421

It could be that A2 and A3 are the same but A1 is different. 0425

It could mean that A1 is totally different from A2, which is totally different from A3. 0428

Any of those is a possibility, so here is the good thing. 0433

The good thing about the omnibus hypothesis is that you could test all of these things at once. 0436

That they all come from the same population: you could test that big hypothesis at once. 0442

The bad thing about it is that if you reject the null it still did not tell you which populations differ. 0446

It only tells you that at least one of the populations is different. 0454

So when you reject the null, it is not quite as informative but still it is a very useful test. 0459

So we need to know some notation before we go on. 0466

An ANOVA is an analysis of variance, that is why it is called the ANOVA, so sometimes 0471

you might start with a little ANOVA notation; you want to analyze the variance, so when 0479

we want to analyze the variance we have to really think hard about what variance means. 0486

And variance is sort of the average spread around some mean, so how much spread you have. 0492

Are you really tightly clustered around the mean, or really dispersed around the mean? 0500

Okay so first things first, consider all the data that we get from all the different groups. 0505

We lump together all the data from all the different groups and look at the variance 0511

around the grand mean, and the grand mean is a new idea. 0518

The grand mean is not just the mean of your sample; the grand mean is the mean of everybody lumped together. 0521

If there are three groups, pretend there is just one giant group that all three data sets have been sort of poured into. 0528

What is the mean of that giant group? 0536

That is called the grand mean and so for instance, here is our sample. 0538

Our sample from A1, our sample from A2, our sample from A3; when you have sample means, here is what the notation looks like. 0544

It should be pretty familiar, X bar sub A1, X bar sub A2, X bar sub A3. 0552

Now when we have a grand mean, we do not have three of them, we just have one, because remember, they are all lumped together, right? 0564

How do we distinguish the grand mean? If we just say X bar we might confuse it for being a 0571

sample mean instead of the grand mean, and so to signify the grand mean, this mean of all 0579

the means, the mean of all the samples, we call it X double bar, and that is how we know that it 0585

is the grand mean; so that is definitely one of the things you need to know. 0592
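
A tiny sketch of the distinction, with made-up samples of my own (not the lesson's data): each sample has its own X bar, while X double bar pools every data point.

```python
import numpy as np

# Hypothetical samples A1, A2, A3 (not the lesson's data).
a1, a2, a3 = [4, 5, 6], [7, 8, 9], [1, 2, 3]

sample_means = [np.mean(a) for a in (a1, a2, a3)]   # X bar for each sample
grand_mean = np.mean(a1 + a2 + a3)                  # X double bar: everybody lumped together

print(sample_means, grand_mean)   # [5.0, 8.0, 2.0] and 5.0
```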

So now let us talk about partitioning or analyzing the variance. 0596

When we are analyzing variance, what we want to start with is the total amount of variance. 0606

First, we have to have the big thing before we break it apart. 0614

So what is the big thing? The big variance in the room is total variance, and this is the variance 0617

from every single data point in our giant pool around the grand mean. 0625

And we can actually just sort of think about how to write this as a formula just by knowing the grand 0629

mean as well as the variance formula, right? Variance is always squared distance away from 0639

the mean, divided by however many data points you have, to get the average squared distance from the mean. 0645

Now we want the distance away from the grand mean, so I am going to go ahead and put that 0653

there: instead of X bar I have X double bar, and I put in my data points, so that would be X sub i. 0659

And we want to get the sum of all of those and then divide by however many data points we have. 0668

Usually N means the number of data points in a sample. 0676

How do we denote the N of everybody, of all your data points added together? 0682

Here is how: you call it N sub total. 0688

And this says it is not just the n of one of our little samples, because we have three little 0691

samples; I mean the N of everybody, the total number in your data set. 0698

And so even with this X sub i, I do not really mean just the X's in sample 1, I mean every single data 0704

point, so I would say i goes from 1 all the way up to N total. 0713

Sorry this is a little small, N sub total up here, and so this will cycle through every single X, every 0719

single data point in your entire sample lumped together. 0729

Get their distance away from the grand mean, square it, add those squared distances together, 0732

divide by N; so this is just the general idea of variance. 0741
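
Written out (my rendering of the formula just described), the total variance is the sum of squared distances of every data point from the grand mean, divided by N total:

$$\text{total variance} = \frac{SS_{\text{total}}}{N_{\text{total}}} = \frac{1}{N_{\text{total}}}\sum_{i=1}^{N_{\text{total}}}\left(X_i - \bar{\bar{X}}\right)^2$$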

Average squared distance from the mean. 0747

In this case, it is around the grand mean, and so how do we say total variance? 0750

Well, it would be nice if we could say, oh, this is something sub total, right? 0757

Before we go on to variance though I just want to stop here before we go into average variance, 0765

I just want to talk about this thing, what is this thing? 0773

And so let us talk about sums of squares. Variance is always going to be the sum of 0777

squared distances, the sum of squares, divided by N; or if you are talking about s, s squared is the sum 0784

of squared distances over N minus 1, and another way of saying that is SS over degrees of freedom. 0795

So we are just going to stop here for a second and just talk about this sum of squares and we are going to call that sum of squares total. 0805

So that is sum of squares total, and that is going to be important to us because later we are going to 0817

use these sums of squares, these different sums of squares, to then talk about variance. 0824

These sums of squares are very much related to the idea of variance. 0830

Now we have this total variance because this is really the idea of how much you are varying. 0834

We have this total variance and we are going to partition it into two types of variance. 0840

One is within group variation and the other is between group variation. 0845

So we have 3 groups, the between group variance is going to look at how different they are from each other. 0850

The within group variance is just going to look at how different they are from their own group, 0860

how different the data are from their own group, and that is going to be important because this 0865

sum of squares total actually is made of sum of squares within plus sum of squares between. 0871

So because of this idea we can really now see, we are taking the total variance and partitioning it 0882

into within group variance and between group variance, or between sample variance. 0892
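
A quick numerical check of that partition, using the same made-up samples as before (my own illustration):

```python
import numpy as np

# SS total = SS within + SS between, verified on hypothetical data.
groups = [np.array([4., 5, 6]), np.array([7., 8, 9]), np.array([1., 2, 3])]
all_data = np.concatenate(groups)
grand = all_data.mean()

ss_total = ((all_data - grand) ** 2).sum()
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)

print(ss_total, ss_within + ss_between)   # both are 60.0
```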

So first things first, within group variance. 0899

How do we get an idea of how different each sample is from itself? 0902

Well the very idea is just like what we have been talking about before. 0912

This is each samples variance around their own mean and we already know the notation for this mean. 0917

So that would be something like: how much does everybody in sample A1 differ from the mean of 0938

A1? Take those differences and square them, and that gets you the sum of squares. 0947

And do the same thing, the sum of squares, for everybody in A2, and the same thing for everybody in A3. 0954

So this is sort of the regular use of variance that we used before, the regular use of sum of squares that we have used before. 0971

Just looking at each sample variance from its own sample mean. 0977

Now how do we get between group variance? 0982

Between group variance is going to be each sample's mean: how much does it vary from the 0986

grand mean, the difference, the squared difference from the grand mean? So there is some grand mean, and 0999

how much does each sample mean differ from that grand mean? 1011

And so that is going to be between group variation. 1016

How much do the groups differ from that grand mean? 1020

So first of all let us just review variance and sum of squares. 1024

So sum of squares is the idea that we are going to use over and over again, and it is just this idea that 1033

you are summing, with the sigma sign, the squared distances from X bar. 1043

So basically take the squared distances away from the mean and add them up. 1052

That is sum of squares. 1061

Now what we are doing is we are sort of swapping out this idea of the mean for things like the grand 1063

mean or the sample mean, and we are also swapping out what our data points are. 1071

Is it from N total, all of the data points; is it just the n from one sample; is it the group means? 1082

So we are swapping out these two ideas in order to get our sum of squares total, sum of squares 1098

between, or sum of squares within, but it is always the same idea. 1106

Take distances from the mean, square them, add them up. 1110

Okay, now what is variance in relationship to sum of squares? 1113

Well, variance is the average squared distance, and so in doing this we always take the sum of 1116

squares and we divide by some number: how many data points we have. 1130

But often we are using estimates, s, instead of actually having the population standard deviation. 1139

So we are going to be using degrees of freedom instead of just N, and we have different kinds of 1146

degrees of freedom for between and within group variation, so watch out for them. 1153

Okay now let us go back to the idea of the F statistic. 1162

Now that we have broken it down a little bit in terms of what kind of different variances there 1167

are, hopefully the F statistic makes a little more sense.1171

The idea is that you want to take the ratio of the between group or sample variance over the 1175

within group variance, and the reason we want this particular ratio is that we are actually very 1187

interested in the between group difference; that is what our hypothesis test is all about, whether the groups are different or the same. 1197

The within group variation, we cannot account for. 1206

It is variation that is just inherent in the system, and so we need to compare the between group 1210

variation, which we care about, with the within group variation we cannot explain, that we do not have 1218

any explanation for, at least not in this hypothesis test; we have to do other tests to figure that out. 1223

Okay, so now what we need to do is replace these conceptual ideas with some of the things that we have been learning about. 1230

In particular, the variance between and the variance within; for variance we are going to use s squared, so s squared between over s squared within. 1242

So variance between over variance within but now we know a little bit like we have refreshed, 1260

what is variance about, how can we break it down in terms of sum of squares? 1266

Well, that is what we are going to do. 1272

We are going to double-click on this guy and here is what we see inside. 1276

We see the sum of squares between divided by the degrees of freedom between, all over the 1280

sum of squares within divided by the degrees of freedom within, and this is how we are going to actually calculate our F statistic. 1291

Now, we will write out the formulas for each of these, but it is good to know where the F 1301

statistic comes from, its conceptual root; you always want to be able to go back there. 1309

Because ultimately when we have a large F, we want to be able to say, this means there is a 1314

larger between group variation relative to within group variation. 1321

A larger difference in the thing that we are interested in, over the variance that we have no explanation for. 1327

Okay so now let us figure out how to break down this idea and remember this idea really is the breakdown of the variance between. 1332

So we are breaking down the broken-down thing. 1343

So conceptually what is this? 1347

Well, conceptually this is the difference of each sample mean from the grand mean, so imagine our 1350

little groups, and there is some grand mean that all of these guys contributed to, but they all have a little sample mean of their own. 1357

What I want to do is know the difference between these, squared, then add them up. 1376

That is the idea behind this. 1384

So first of all how many means do we have how many data sets do we have, how many data points do we have? 1386

Well we have a data point for every sample that we have so how many means do we have? 1395

Or how many samples do we have. 1402

We actually have a term for that. 1404

The letter that we reserve for how many samples is K, the number of samples. 1406

And so that you could think about okay if that is the number of samples then what might be the degrees of freedom here? 1415

Well, it is just going to be K - 1; here is why. 1427

In order to get the grand mean we could do a weighted average of these means, and since there 1434

are three of them, if we knew what two of them were in advance the third one would not be free 1442

to vary; we are locked down with that third one. 1449

So the degree of freedom is K – 1. 1451

Okay, so what is the actual sum of squares between? Now you need to take into 1454

consideration how many actual data points are in each group. 1463

For instance, group one might have a lot of data points while group two might only have a few data points, and the bigger group's mean should matter more. 1468

Well that can be taken into account. 1476

So first things first, how do we get the difference between this mean and this mean? 1479

That is going to be this. 1486

X bar minus X double bar: the difference between the mean and the grand mean. 1489

Now we have several means here, so I am going to put an i for index, and in my sum of squares my i is 1497

going to go from one up through K, so for each group that I have, I want you to get this distance and square it. 1507

I am not going to stop there; I also want you to make it count a lot if it has a lot of data 1515

points, so if this guy has a lot of data points he should get more votes; his difference from the 1526

grand mean should count more than this guy's difference, and that is what we get by multiplying 1531

by N: if N is very large, this distance is going to count a lot; if N is very small, this distance is not going to count as much. 1538

And this is the sum of squares between so that is the idea. 1546
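
In symbols (my rendering of what was just described):

$$SS_{\text{between}} = \sum_{i=1}^{K} n_i\left(\bar{X}_i - \bar{\bar{X}}\right)^2,\qquad df_{\text{between}} = K - 1$$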

Okay, so now we actually know this and this, so we could actually create this guy by putting these two together. 1554

Now let us talk about sum of squares within now that we know sum of squares between pretty well. 1563

Well, first thing we need to know is that this idea sum of squares within divided by degrees of 1582

freedom within is actually going to give us the variance within. 1587

Let us talk about what this means conceptually. 1593

This means the spread of all the data points from their own sample mean. 1596

So this is the picture I want you to think of. 1604

So everybody has their own little sample mean, X bars, their own little sample mean, and here are my 1610

little data points, and I want to get the distance of each set away from their own set's mean. 1620

This is going to give me the within group variation. 1629

Well, we need to think about first, how many data points do we have? 1635

Well we have a total of N total, because you need to count all of these data points you need to add them all up. 1643

The total number of data points. 1652

So what is the degrees of freedom? 1656

Well, it is not just N total -1. 1659

How many means did we find? 1661

We found three means; each time we calculate a mean, we lose a degree of freedom, so it is 1663

really N total minus the number of means that we calculated, and here, it is 3, because we have three groups. 1674

Remember, we have a letter for how many groups we have, and that is K, so it is really going to 1684

be N total minus K, the number of groups, and that is going to give us the degrees of freedom within. 1689

So what is the sum of squares within? 1698

The sum of squares within is really going to be the sum of squares here, plus the sum of squares here, plus lastly the sum of squares here. 1701

So for each group, just get the sum of squares. 1713

That is a pretty easy idea; the sum of squares within is just adding up all the sums of squares. 1718

Now what does this i mean? 1728

It means the sum of squares for each group, and that is i going from one to K, so for however many 1730

groups you have, get that group's sum of squares, added to the next group's sum of squares, added to 1740

the next group's sum of squares, and these are general formulas that work for two groups, three 1746

groups, four groups. So that is sum of squares within, and now that we know this and this, we could calculate this. 1751
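
In symbols (again my rendering), with each group contributing its own ordinary sum of squares:

$$SS_{\text{within}} = \sum_{i=1}^{K} SS_i = \sum_{i=1}^{K}\sum_{j=1}^{n_i}\left(X_{ij} - \bar{X}_i\right)^2,\qquad df_{\text{within}} = N_{\text{total}} - K$$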

So now let us put it all together all at once. 1764

My apologies because this may look a little bit tiny on your screen, but hopefully you could sort of 1770

reconstruct it from what you have seen before, because I am writing the same formulas just in a 1781

different format, just to show you how they all relate to each other. 1786

So first conceptually this is always important because you can forget the formula but do not 1789

forget the concept because from the concept you could reconstruct the formula. 1796

It does take a little bit of mental work, but you can do it. 1800

So first things first, the whole idea of the F is the between group variation over the within group variation. 1803

So that is the whole idea right there and in order to get that we are going to get the variation between over the variability within. 1817

Actually, I wrote this in the wrong place, should have written it down in the formula section. 1831

So F equals the variability between divided by the variability within. 1839

So that is the F. 1852

Now for the F you cannot just calculate one sum of squares, because really the F is made up of a 1856

bunch of sums of squares, and for F you actually need 2 degrees of freedom, and that is going to be 1861

determined by the between group degrees of freedom and the within group degrees of freedom. 1865

So these I am just going to leave them empty. 1871

Now let us talk about between group variability. 1873

The big idea of this is the spread of sample means around the grand mean. 1876

So I am going to put the spread, S, of X bars around the grand mean. 1891

That is what we are really looking for, that idea of this spread of all the sample means around the grand mean. 1897

However, the within group variability is the spread of data points from their own sample mean. 1904

So for each little group, what is the spread there? 1920

So that is the idea of these two things. 1923

Now in order to break it down into the formula, you first want to get into what s squared 1928

between is; so if you double-click on that, that takes you here, and if you double-click on this one, it will take you here, s squared within. 1935

So the variance between the between group variability, this is going to be just the very basic idea of variance. 1943

Sum of squares over degrees of freedom. 1955

Same thing here, sum of squares over degrees of freedom. 1958

That stuff you already know, but the only difference is the little between here and the little within here; that is the only difference. 1963

Once you get there, then you could break this down, and you could say sum of squares 1973

between, and if you forget what the formula is, you can look up here: spread of X bars around the grand mean. 1978

So X bar minus grand mean. 1988

You know you have a whole bunch of them, sum of squares and you are going to go from 1 up 1990

through K that is how many sample means you have. 2000

And you want it to be weighted. 2005

You want it to count more; your distance counts more if you have more data points in your 2008

sample, and then the degrees of freedom is fairly straightforward. 2016

It is the number of means - 1, because when you find your grand mean it is going to limit 2021

one of those guys, so your degrees of freedom is lessened by one. 2030

So for sum of squares within, let us go back to this idea of the spread of all the data points away 2034

from their own sample mean, and that is just going to be all those sums of squares for each little 2043

group, which you already know the formula for, added together. 2051

So i goes from one up to K. 2055

And the degrees of freedom is really just this idea that you have all these points, all these data 2058

points: N total minus however many means you found, because that is going to limit the 2072

degrees of freedom for those data points, and that is K. 2080
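
Here is how the whole build-up might look in code, a sketch of my own checked against scipy's built-in f_oneway rather than anything from the lesson's files; the data are hypothetical.

```python
import numpy as np
from scipy import stats

def one_way_anova(groups):
    """F and P for a one-way independent-samples ANOVA, using the SS/df breakdown above."""
    all_data = np.concatenate(groups)
    grand = all_data.mean()                      # X double bar
    k, n_total = len(groups), len(all_data)

    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)   # MS between / MS within
    p = stats.f.sf(f, df_between, df_within)
    return f, p

# Hypothetical data; scipy.stats.f_oneway should give the same F and P.
gs = [np.array([4., 5, 6]), np.array([7., 8, 9]), np.array([1., 2, 3])]
print(one_way_anova(gs))
print(stats.f_oneway(*gs))
```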

One other thing I want to say right here: there is this idea that you might see in your 2083

textbook or in a statistics package called mean squared error. 2091

So this term right here is sometimes going to be called the mean squared error term so that a common thing that you might see. 2099

This may be called mean squared between, or you might just see mean square between 2112

groups or something like that, so between groups might be written out. 2126

But almost always this denominator is going to be called mean squared error. 2130

The reason I want to mention it here is not only to connect this lesson with whatever is going on 2135

in your classes but also because mean squared error will be an important term for later on when 2142

we get to other kinds of ANOVA. 2148

So now let us get to examples. 2151

So first who uploads more photos? 2156

People of unknown ethnicity, Latino, Asian, black, or white Facebook users? 2158

So what are our hypotheses? And sorry, you might be like, how will I ever know? 2164

This data set is found in your downloads. 2172

And so the download looks like this and there is however many uploaded photos so here is 2176

uploaded photos here, so this person has uploaded 892 photos and their ethnicity is zero. 2185

And zero is just a stand-in for the term unknown or blank, so they may have left theirs blank. 2191

So the Latino sample is 1, the Asian sample is 2, the black or African-American sample is 3, and the white or European-American sample is 4. 2198

And so you can look through that data set; I kind of coded it that way just so that we can easily see where we are. 2210

Okay let us start off with our hypotheses. 2217

For this hypothesis, the hypothesis to rule them all, the null hypothesis should say that all 2222

of these ethnicities, even unknown, are all the same when it comes to uploading photos. 2231

So our mu of ethnicity zero; I call these 0, 1, 2, 3, 4 only because that is what is also in the data set. 2239

The mu of ethnicity 0 equals the mu of ethnicity 1, equals the mu of ethnicity 2, equals the mu of ethnicity 3, equals the mu of ethnicity 4. 2251

So we could say this in order to say: look, they are all the same, mathematically. 2265

So this is how you write out that idea of they are all the same, they all came from the same population. 2276

The reason we want to use E0 E1 E2 is just that it is going to make it a lot easier for us to write 2281

the alternative hypothesis, and this also helps us keep in mind why we are comparing the different groups. 2291

What is the variable they differ on? The variable is ethnicity; they all differ on that 2298

variable, they have different values of it, and that is the between subjects variable, so at least in 2307

our sample people are either Latino or Asian or black or white, although they could be both, just not in our sample. 2315

So the alternative hypothesis is that the mu sub E's are not all the same, not all equal. 2323

We do not actually write does-not-equal signs because we do not know whether it is these two that are not equal, 2346

or those two that are not equal, or this one and this one that are not equal, right? 2363

So we do not make those claims, and that is why you do not want to write those not-equal signs; 2367

you want to just write a sentence that the means are not all the same. 2371

Now let us decide on a significance level; just like before, let us decide on a significance level of .05, it is commonly accepted. 2376

And because we are going to be calculating an F statistic, we are going to be comparing it to this alpha. 2384

So it is always one-tailed, always only on the positive tail, and so this is the F distribution. 2397

Okay now let us talk about the decision stage so in the decision stage you want to draw the F distribution, just like I did so here is alpha, here is zero. 2404

We need to find the critical F, but in order to find the critical F we actually need to know the two 2419

different degrees of freedom, because this distribution is going to be different based on those 2 degrees of freedom. 2434

So we need to know the degrees of freedom in the numerator which in this case is the degrees of 2441

freedom between and the degrees of freedom in the denominator and that is going to be the 2448

degrees of freedom within, we could actually calculate that. 2457

The degrees of freedom between is K - 1, and here our K is 1, 2, 3, 4, 5; K equals 5, 5 groups, so that will 2460

be a degrees of freedom of 4, and the degrees of freedom within is going to be N total minus K. 2473

And so let us see how many we have total. 2484

So we could just do a count; if you go down here, I have actually sort of filled it in for 2488

you a little bit just so that it is nice and formatted; I used E1 through E5 but really one of them should be E0. 2500

So K is five, we have five different groups, the degrees of freedom between is going to be 5-1, 2511

the degrees of freedom within, we are going to need to know the total number of data points we 2520

have, so we need to count all the data points that we have. 2527

All these different data points minus K, so here is K. 2531

So that is 94, so apparently we have 99 people in our sample. 2541

So then we can find the critical F. 2547

Now once we have the degrees of freedom between and the degrees of freedom within, here just 2550

to remind you this is the numerator and this is the denominator degrees of freedom. 2555

Once we have that you can look it up in the back of your book. 2561

Look for the F distribution chart or table, and you need to find the columns and 2564

rows; usually the columns will say degrees of freedom numerator and the rows degrees of freedom 2574

denominator, and then you could use both to look up your critical F at 5%, or you can look it up in Excel. 2580

And the way we do that is by using FINV, because FDIST will give you the probability; with FINV you put in the probability and get the F value. 2594

So the probability is .05, only one tail, so we do not have to worry about that. 2607

The first degrees of freedom we are looking for is the numerator one and the second degrees of 2611

freedom we are looking for is the denominator one. 2615

And so when we look at that we see 2.47 to be our critical F. 2620

So your critical F is 2.47, and so we need an F value greater than that, or a P value less than .05, in 2633

order to reject our null hypothesis that they are all the same, that they all come from the same population. 2644

Okay so step 4 in our same question. 2650

We need to calculate the sample statistic as well as the P value, so in order to calculate the 2658

sample statistic we need to calculate F, because F is the only test statistic that will help us rule on our omnibus hypothesis. 2666

Remember that is going to be the variance between over the variance within. 2675

And once we get our F, then we can find the P value at that F. 2681

So what is the probability of getting an F value that big or bigger given that the null hypothesis is true. 2688

And we want that P value to be very small. 2697

So let us go ahead and go to our example. 2700

Example 1, and here I have already put in these formulas for you, but one thing that I like to do for 2706

myself is tell myself sort of what I need, so I need this, and then I break it down one 2715

row at a time; the next row is going to be the SS between over the degrees of freedom 2722

between, and then I can find each one of those things separately, and then I am also going to 2730

break down the variance within into the sum of squares within and degrees of freedom within, and I break those down. 2736

Okay so first things first, I want to find the variance between but in order to do that I need to find 2743

sum of squares between and that is this idea that I get every mean, so I need the mean for every 2750

single one of these groups, for the mean for unknown, mean for Latino users for Asian users and 2758

so on and so forth and I need to find the grand mean. 2764

I need to find the squared distances between those guys. 2768

Okay so first, I need to know how many people are in this particular sample. 2770

So let us find the count of E0. 2781

That is our zero ethnicity for unknown people. 2785

So I am going to count those people, and then I am also going to count E1, and count 2791

E2, and I am also going to count my E3, and finally I am going to count my E4. 2807

Now these are the same data points that I am going to be using over and over again, so what I am 2830

going to do is lock down my data points. 2845

Say: use this data whenever you are talking about E sub zero. 2848

Use this data whenever I am talking about E1. 2854

Use this data whenever I talk about E2 and use this data whenever I talk about E3, use this data when I talk about E4. 2862

Now the nice thing about this is that you could see that they almost all have 20 data points in each sample. 2879

The only one that differs is the unknown population, the unknown ethnicity sample, and it is just off by one. 2891

So, what is the mean of each sample? 2900

One thing I could do is just copy and paste these across, but what I really want to do is, I do 2904

not want to get the count anymore, I want to get the average. 2915

So once I do that I could just type average instead of count, which saves me a little bit of work, and I find all these X bars, the X bars for 0, 1, 2, 3, 4. 2918

Now let us find the grand mean. 2941

The grand mean is going to be the mean for everybody, so that is going to be the average of every single data point that we have. 2944

And we really only need to find the grand mean once. 2951

If you want you could just point to the grand mean, copy and paste that down it should be the 2962

same grand mean over and over again or you could just refer to this top one every single time. 2972

So now let us put together our N times the distance squared before we add them all up. 2978

So we have N times the distance, X bar minus the grand mean, square that, and that is a huge 2990

number, and now we are going to sum them all up. 3004

Equal sign, sum and I want to sum all of this up. 3011

I get this giant number 8 million something. 3019

So huge number. 3023

So once I have that I can just put a pointer here. 3025

I just put equal sign and point to this sum. 3031

And that is really the sum of squares between. 3035

What about degrees of freedom between, have I already found that? 3039

Yes I have, I found it up here. 3047

So I am not going to calculate that again I am just going to point to it. 3049

Once I have these two now I can get variance between groups. 3054

So it is this: the sum of squares divided by the degrees of freedom. 3060

We saw the giant number; it makes sense that if you take 8 million something and divide by 4 you get 2 million something. 3067

It is still a giant number but is it more giant than the variance within? 3072

I do not know, let us see. 3080

So in order to find the variance within then I need to find the sum of squares within as well as the degrees of freedom within. 3082

So how do I find sum of squares within? 3087

Well, one thing I could do is go to each data point and find the mean, subtract each X from its 3093

mean, square it, and add them all up; or I could use a little trick. 3101

I might use a little trick. 3107

So just to remind you. 3113

So here is my little trick. 3113

So remember, the variance of anything is going to be the sum of squares divided by N - 1. 3116

So if I find the variance and I multiply it by N - 1, I get my sum of squares; I could do variance times N - 1. 3129

I could use that trick if I use Excel. 3146
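
The same trick in code, on made-up numbers (my own sketch):

```python
import numpy as np

# Since s^2 = SS / (n - 1), it follows that SS = s^2 * (n - 1).
x = np.array([3.0, 7.0, 8.0, 12.0])   # hypothetical data

ss_direct = ((x - x.mean()) ** 2).sum()          # sum of squares the long way
ss_trick = np.var(x, ddof=1) * (len(x) - 1)      # like Excel's VAR * (COUNT - 1)

print(ss_direct, ss_trick)   # the two agree: 41.0 and 41.0
```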

So here is what I am going to do. 3152

I am going to find the variance. 3157

First it might be easy if I copied these. 3159

Just so that I do not have to go and select those. 3166

If I find the variance and then I multiply it by N -1, I get my sum of squares. 3171

I am just working backwards from what I know about variance. 3186

So I am going to do that same thing here and get my variance and multiply it by N minus 1. 3189

Get my variance, multiply it by N - 1, and finally the variance multiplied by N - 1. 3199

Obviously you do not have to do this; you could go ahead and actually compute the sum of squares 3234

for each set of data, but that would take up a lot of room and typically more time, so if Excel is 3243

handy to you then I really highly recommend the shortcut, and then we will just want to sum all these guys up. 3251

That is the sum of all the sums of squares, and we get this giant number. 3258

We get 42 million, really large number. 3263

But our degrees of freedom within is also a larger number than our degrees of freedom between. 3279

And so if I find out my variance within then let us see. 3287

Is this smaller or bigger. 3295

Well, we see that this number, 450,000, is a smaller number than 2 million, so that is looking good for our F statistic. 3297

So our F statistic is the variance between divided by the variance within, and we get 4.48, and 3312

that is quite a bit larger than our critical F of 2.47; I have forgotten to put a place for the P value, 3323

but let us calculate the P value here: in order to calculate the P value we use FDIST, and we put in 3334

the F value and the degrees of freedom for the numerator as well as the degrees of freedom for the denominator. 3343

And we get P = .002 just quite a bit smaller than .05. 3353

So that is a good thing so in step five we reject the null. 3362

But which group is different, or are multiple groups different from each other? 3366

We just know that the groups are not all the same that is all we know. 3374

Okay so we got a P value equals .002 so we rejected the null hypothesis. 3378

Here is the thing, remember at the end of this, we still do not know who is who, we just know that somebody is different. 3390

At the end of this, what you want to do is run little paired t-tests. 3398

They are often called contrasts, and you want to do them in order to figure out 3405

which group actually differs from which other group, not just whether some group differs from 3414

some other group, and so you want to do a little bit more after you do this. 3420

These are called post hoc tests. 3425

And in a lot of ways they are very similar to t-tests where you look at pairs. 3427

There is one change: they change the sort of P value that you are looking for, but you want 3439

to do the post hoc tests afterwards, all the little comparisons, so that you can figure out who is different from whom. 3446

But you are only allowed to do a post hoc test if you rejected the null hypothesis. 3452

So you are not allowed to do a post hoc test if you have not rejected the null hypothesis; that is why 3457

we cannot just skip to this step from the very beginning. 3464

So the first thing we need to do if we reject is post hoc tests. 3468

Another thing you need to do is find the effect size. 3472

In the case of an F test, you are not going to find something like Cohen's d or Hedges' g. 3475

You are not going to find that kind of effect size. 3486

You are going to find what is called eta squared. 3488

Eta squared looks like an n squared. 3490

And eta squared is what is going to give you an idea of the effect size. 3495
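
For reference, a common textbook definition of eta squared for a one-way ANOVA (the lesson does not spell out the formula at this point, so treat this as a standard statement rather than the lesson's own) is the proportion of the total sum of squares accounted for by the between group sum of squares:

$$\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}$$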

Now let us go to example 2. 3499

So also the data is provided in your download. 3504

A pharmaceutical company wants to know whether new drug had the side effect of causing patients to become jittery. 3508

In 3 randomly selected samples, the patients were given 3 mild doses of the drug: 3513

0, 100, and 200 mg, and they were also given a finger tapping exercise. 3518

Does this drug affect this finger tapping behaviour? 3523

Well, this one I did not format really nicely for you because I want you to sort of figure it out as you go, but do not worry, I will do this with you. 3527

So first things first, the omnibus hypothesis. 3536

And that is that all three dosages are the same so mu of dosage zero = mu of dosage 100 = mu of dosage 200. 3538

And the alternative hypothesis is that the mu's of the dosages are not all the same. 3563

Okay step 2. 3575

Alpha = .05. 3579

Step 3, the decision stage: how do we make our decision to reject or fail to reject? 3581

First you want to draw that F distribution, and color in that alpha = .05; that is the error rate we are willing to tolerate. 3591

Now what is our critical F? 3603

In order to find our critical F we need to know the degrees of freedom for between and the degrees of freedom for within. 3607

So if you go to example 2 the worksheet for example 2, example 2 then you can see this data set. 3615

Now usually this is not the way data is set up, especially if you use SPSS or some of these other statistics packages. 3627

Usually you will see the data for one person on one line just like this. 3635

Just like example 1 the data for one person their ethnicity and their photos are on one line. 3641

You will rarely see this, but you may see it in textbooks, so I do want you to sort of pay attention 3649

to that; but here, in this problem, different people were given the different dosages, so you 3655

could assume each cell to be a different person. 3660

So, we are on step 3, the decision stage, and in order to figure out our critical F, we need to know 3663

the degrees of freedom between and degrees of freedom within; that is not so pretty anymore, 3677

and it takes a long time to put all the little fancy things in there, but it is very easy. 3683

So degrees of freedom between: in order to find that it would be really helpful if we knew K, 3695

how many groups, right? And there are three groups, three different drug dosages. 3699

So it is K - 1, a degrees of freedom of 2. 3705

In order to find degrees of freedom within we need to know N total. 3710

How many total data points do we have? 3716

And we could easily find that in Excel using count and selecting all our data points, so there are 30 people, 10 people in each group. 3719

So that is going to be N total minus K. 3730

That should be 27. 3736

Once we know that we can find our critical F and use FINV: probability of .05, degrees of freedom 3738

for the numerator is going to be degrees of freedom between, degrees of freedom for the 3747

denominator is degrees of freedom within, and we get 3.35 as our critical F. 3752

Note that this is a larger critical F than before, when we had more data points. 3760

Like 99 data points in the other example, and that brought down our critical F. 3767

Now let us go to step 4, step 4 we need to calculate the sample F as well as the P value. 3772

Let us talk about how you do F. 3784

Here we need the variance, variance between divided by the variance within. 3786

How do we find the variance between? 3795

Well that is going to be the sum of squares between divided by the degrees of freedom between. 3797

How do we find sum of squares between? 3805

Well remember, the idea of it is going to be the means for each group, distance from that mean 3809

to the grand mean, square that distance, weight that distance by how many N we have, and then add them all up. 3816

So in order to get that, I will set up room here; why do we not put in the other stuff too? 3826

The variance within, that is going to be the sum of squares within divided by the degrees of 3837

freedom within, just so I know how much room I have to work with. 3844

Okay so first it might be helpful to know which groups we are talking about, the dosages, so it is 3849

D0, D100, and D200; those are the three different groups. 3857

What is the N for each of these groups, what is the X-bar for each of these groups, what is the 3865

grand mean and then we want to look at N times X bar minus the grand mean, we want to 3872

square that and then once we have that, now we want to add these up and so I will put sum here 3883

just so that I can remember to add them up. 3894

Okay so the N for all of these are 10, we already know that, and let us find the X-bar. 3897

So this is the average of this and then the next one, it is the same thing, we know it is the same 3906

thing the average except for column B and the next one is average again, for column C for 200. 3922

How do we find the grand mean? 3934

We find the average, we could put a little pointer so that they all have the same grand means. 3937

Now we could calculate the weighted distance squared for each of these group means. 3952

So it is N times X bar minus the grand mean, squared. 3962

And once you have that you could just drag it all the way down, and here we sum these all up. 3970

We sum these weighted differences up and we get a sum of squares of 394. 3983

And we already know the degrees of freedom between groups, so we could put this in: this divided by this number. 3994

We get 197. 4006

Now let us see. 4009

Is it going to be bigger or smaller than the variance within? In order to find the 4011

variance within it helps to just sort of conceptually remember: okay, what is the sum of squares 4018

within? It is the sum of squares for each of these groups from their own means. 4022

And so the sum of squares for each of these dosages is going to be, and I am just going to use 4029

that shortcut, the variance for this set multiplied by nine, that is N - 1. 4041

And I am just going to take that and I am going to say do that for the second column as well as the third column. 4058

And once they do that I just want to sum these all up and I get 419. 4073

So now I have my sum of squares within. 4082

I divide that by my degrees of freedom within and I get 15.53 and even before we do anything 4085

we could see wow that variance between is a lot bigger than the variance within. 4095

So we divide and we get 12.69, 12.7 right and that is much larger than the critical F that we set. 4099

What is the P value for this? 4111

We use FDIST: we put in the F value we got, the degrees of freedom between, the degrees of freedom within, and we get a P value of .0001. 4114

So we are pretty sure that there is a difference between these 3 groups in terms of finger tapping. 4131

We just do not know what that difference is. 4138

So step 5 would be to reject the null, and once we decided to reject the null then you would go on to 4140

do post hoc tests as well as calculating the effect size. 4153

So that is one-way ANOVA with independent samples. 4157

Thanks for using educator.com. 4163

Hi, welcome to educator.com.0000

Today we are going to talk about repeated measures ANOVA.0002

So the repeated measures ANOVA is a lot like the regular one way independent samples ANOVA that we have been talking about.0004

But it is also a lot like the paired samples t-test, and so we are going to talk about why we need the repeated measures ANOVA.0014

And we are going to contrast the independent samples ANOVA with the repeated measures 0022

ANOVA, and finally we are going to break down that repeated measures F statistic into its component variance parts.0027

Okay so previously, when we talked about the one-way ANOVA, we talked initially about why we 0035

needed it, and the reason why we need ANOVA is that the t-test is limited.0044

So previously we talked about this example, who uploads more pictures, Latino white Asian or black Facebook users? 0049

When we saw this problem and we thought about maybe doing independent samples t-test we realize we would have to do a whole bunch of little t-test.0058

Well, let us look at this problem.0070

It is similar in some ways but it is also a little bit different so here is the question.0070

Which photo type is most frequently used on facebook? 0076

Tagged, uploaded, mobile uploads, or profile pictures? 0079

Now in the same way that that problem had many groups, this problem also has many groups, 0083

and one thing you could immediately tell is that if we try to use t-tests we will also have to use a bunch of little t-tests here.0099

But here is another thing.0099

These variables are actually linked to one another.0102

Often people who have tagged photos also have a number of uploaded photos, a number of 0104

mobile uploads, and also a number of profile pictures.0110

So in this sense, although the earlier problem was made up of four separate groups of users, where no user 0114

was linked to any of the users in the other groups, the Latino, white, Asian, black groups, here we have these four sets of data.0121

Tagged, uploaded, mobile, or profile pictures; but the number of tagged photos is linked to some 0135

number of uploaded photos, probably because they come from the same person, and maybe this 0146

person owns a digital camera that they really love and carry around everywhere.0153

So these scores in these different groups are actually linked to each other, and these are what we 0158

have previously called dependent samples, or we called them paired samples before, because 0167

there were only two groups of them at that time; but now we have four groups, and we can see that linked principle still holds.0173

So here we are talking about different samples, multiple samples, 0181

more than two, but these samples are also linked to each other in some way.0188

And because of that those are called repeated measures because we are repeatedly measuring something over and over again.0194

Measuring photos here, measuring photos here, measuring photos here, measuring photos here; and because of that it is called repeated measures.0204

It is very similar to the idea of paired samples except we are now talking about more than two.0211

So 3, 4, 5: we call those repeated measures, so we have the same problem here as we did here.0217

If our solution is a bunch of t-tests, we have two problems, whether they are paired t-tests or independent samples.0225

So in this case they would be paired.0243

Even in the case of paired t-tests we have the same problems as before; the first problem 0247

is that with so many t-tests the probability of false alarms goes up.0254

So this is going to be a problem.0258

And it is because we reject more null hypotheses; every time you reject a null hypothesis you have a .05 chance of error.0264

So we are compounding that problem.0273

The second thing that is wrong when we do a whole bunch of little t-tests instead of one giant test is 0277

that we are ignoring some of the data when we are estimating the population standard deviation.0284

So what we estimate that population standard deviation the more data of the better but if we 0291

only look at two of the sample that a time then were ignoring the other two perfectly good sets 0297

of data and were not using them in order to help us estimate more accurately the population standard deviation.0303

So we get a poorer estimate of S because we are not using all the data at our disposal.0311

So that is the problem and we need to find a way around it, thankfully, Ronald Fisher comes to the rescue with his test.0338

Okay so the ANOVA that is our general solution to the problem of too many tiny t-test. 0349

But so far we only talked about ANOVAs for independent samples.0356

Now we need an ANOVA for repeated measures so the ANOVA is always going to start the same way with the Omnibus hypothesis.0360

One hypothesis to rule them all and the Omnibus hypothesis almost said all the samples come from the same population.0370

So the first group of photos equals the mu of the second group of photos equals the mu of the 0378

third group of photos equals the mu of the fourth group.0389

And the alternative hypothesis is not that they are not all not equal to each other but that at least one is different, outlier.0392

And so the way we say that is that all mu’s of P, all the mu’s of the different photo type are not the same.0404

Now we have to keep in mind the logic that all of these mu’s are not the same, is not the same as 0421

saying all of the mu’s are different from each other.0430

And when we say all of them are not the same if even one of them is not the same then this alternative hypothesis is true.0433

So this starts off much the same way as independent samples from there we go on to analyze variance.0442

And here were going to use that S statistic again.0450

And Ronald Fisher's big idea that he had upon is this idea that it when we talk about the F it is a 0458

ratio of variances and really one way of thinking about it is the ratio of between sample or group variability over the within sample variability.0469

And another way of thinking about this is if the variability we are interested in and I do not just 0495

mean that over passionate about it or we find a very curious but I really need is the variability 0510

that we are making a hypothesis about over the variability that we cannot explain.0515

We do not know where that vary the other variability comes from, it just exists and we have to deal with it.0523

Okay and so this S statistic is going to be the same concept, the same concept will going to come 0536

up, again we will talk about the repeated measures version of F.0544

There are going to be some subtle differences though.0548

Okay so let us talk about the independent samples ANOVA versus the repeated measures ANOVA.0551

People have the same start, they have the same hypothesis not only that but they both have the 0561

same idea of taking all the variance in our sample and breaking it down into component parts.0567

Now what we talk about all the variance in our sample we really mean what is our sum of squares total.0572

What is the total amount of variability away from the grand mean in our entire data set? 0581

And we can easily just from the sentence we could figure out what the formula for this would be.0590

This should be something like the variability of all every single one of our data point minus the 0597

grand mean which we signify with two bars the double bar square and the Sigma now to do this 0606

for every single data point not just the data point in one sample while the way it knows to do that is because this should say N total.0616

So this is going to go through every single data point in every single sample and subtract get the 0625

distance from the grand mean in the square that distance and add those distances all squared.0634
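
Written out as a formula, with the double bar standing for the grand mean, that sentence is:

    SS_{total} = \sum_{i=1}^{N_{total}} (x_i - \bar{\bar{x}})^2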

Okay, so that is the same idea to begin with.0643

Now we will take this total and break it down into its component parts.0647

Now, in independent samples, what we see is that all of the variability that we are unable to 0651

explain lies within the groups, and all of the variability that we are very interested in is between 0664

the groups; and so in independent samples the story becomes: this total is a conglomeration, and 0671

when you split it up into its parts, you see that it is made up of the sum of squares within the 0681

groups (inside of the groups) and the sum of squares between the groups, added up.0690

And because of that, the F statistic here becomes the variance between over the variance within, 0696

and obviously each of these variances corresponds to its own sum of squares.0711

Now, in the repeated measures ANOVA we are going to be talking about something slightly different, because now we have these linked data.0720

So here the data are independent: these samples are independent; they are not linked to each other in any way.0730

Here, these samples are actually linked to each other.0738

Either by virtue of coming from the same subject, or the same case; something about these scores links them to each other.0744

So not only is there variability across the groups just like before (sort of between the groups) and variability within the groups, 0753

but now we have a new kind of variability.0770

We have the variability caused by these different linkages: the subjects are all different from each other, but each one may be similar across measures.0774

So the person who owns a digital camera might just have an enormous number of photos across the board.0785

The person who does not have a digital camera, or not even a smartphone, might have a low number of photos across the board.0792

So there are those things that are often called individual differences.0799

Those are differences that we can actually mathematically quantify; we can actually explain where 0806

that variability comes from; but it is not what we are interested in in the study; we are really interested in the between-group difference.0813

But this is not all.0822

Once you have taken out this individual variability, there is still some residual within-group variability left over.0825

And that is really stuff we cannot explain: it is not caused by the individual differences, 0834

it is not because of between-group differences; it is just within-group differences.0842

So in repeated measures, the sum of squares total actually breaks down slightly differently: even 0847

though it is still this idea of breaking down the sum of squares total, it now splits 0856

up into the sum of squares subject (those individual linkages, the yellow part), plus the sum of squares 0865

within just like before, except that now we call it residual, because we have taken out the part of the 0878

variability that comes from the individual differences, and so there is only the leftover 0889

left; and because of that we call it residual, which is just like the word leftover.0896

And of course the sum of squares between, which is what we are actually very interested in.0902

So just to recap: this one (subject) is something that we can explain but are not interested in; this one (residual) is 0907

something we cannot explain; and this one (between) is something we are very interested in.0915

So our F statistic will actually become our variability between divided by our variability residual; 0920

in fact, we just want to take this guy, the subject variability, out of the equation of F.0931

So F does not count the variability from the subjects, the individual differences; we are not interested in that.0940
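
In symbols, the repeated measures breakdown just described is:

    SS_{total} = SS_{between} + SS_{subject} + SS_{residual}, \qquad F = \frac{s^2_{between}}{s^2_{residual}}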

Okay, so I wanted to show you this in a picture; here is what I want to show you.0953

Here is what we mean by independent samples: remember, the independent samples ANOVA 0960

always starts off with that same idea, SS total: the difference of each data point 0966

from the grand mean, squared, and then add all of those up; that is the total sum of squares.0975

In independent samples, what we are going to do is take all of this variability, that SS total.0983

That SS total, the total variance, we are going to break up into the between-group variance.0994

So think of this: this is just to signify the difference of all of these guys from the grand mean.1012

So the between-group differences, SS between; and add to that the within-group variability.1020

The variability that we have no explanation for.1038

So that is the within-group variability.1041

So it only makes sense that the variability between divided by the variability within is what 1046

we would use in order to figure out the ratio of the variability we are interested in, or 1067

hypothesizing about, divided by the variability we cannot account for.1077

So this becomes the F statistic that we are very much interested in.1090

Now, when we talk about the repeated measures ANOVA, once again we start off similarly: for 1096

every single data point, we want its squared distance away from the grand mean, and we add them all up.1106

In order to see this as a picture, you want to see this whole idea here: the 1113

distance of all of these away from the grand mean, that is SS total.1123

However, what we want to do is then break it up into its component parts; and just like before, 1129

we have these differences between the groups, so that is SS between.1138

And that SS between is the stuff that we are really interested in, so that is also going to be a factor here.1147

We take the variability between; but then we want to break up the rest of the variance 1156

into one part that we can actually explain and account for, and into the rest, the residual that we cannot explain.1165

So even though we are not interested in it, we can actually account for that variability.1174

You can think of it across these rows, because notice that person one has fewer photos 1182

across the board, and person three just has more photos across the board; and so those are the kinds 1194

of individual differences, individual-level differences, that we do not actually want in our F statistic.1204

It is variability we know the source of, but we are not interested in it in terms of our hypothesis testing.1211

So we have this SS subject (I put a little yellow highlight here so that you know what it stands for), 1217

and that is the variability that we can explain but that is not part of my hypothesis testing.1232

And so what variability are we left with? We are left with any leftover variability; there is some 1240

leftover variability, and we call that residual variability, and that is going to be SS residual.1248

And if we want to look at the variability that we are interested in over the variability we cannot 1257

explain, we are not going to include this subject variability; we are only going to use this one, the residual.1264

So: the variability between groups divided by the residual variability.1269

Once we have that, now let us break it down even further.1281

So the repeated measures F statistic: now you basically know what it is.1289

It is the variability between groups divided by the residual variability within groups.1294

Now we can break these up into their component parts, so it is going to be the SS between, the 1307

sum of squares between, divided by the degrees of freedom between, all divided by the sum of 1316

squares residual over the degrees of freedom residual.1327
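
As a formula:

    F = \frac{SS_{between} / df_{between}}{SS_{residual} / df_{residual}}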

So far it just looks like what we have always been doing with variability: sum of squares over degrees of 1338

freedom; but now we have to figure out how we actually find these things.1348

And in fact, this is something you already know, because the between part is actually exactly the same as in the independent samples ANOVA.1355

The only thing that is really different is this one, the residual.1371

Okay, so let us start off here.1387

So this is what we are really looking for; when we double-click on this guy, we double-click 1390

on that variability, what we find inside is something like this; and then we double-click on each of these things and figure out what is inside.1398

So conceptually, the whole idea of the variability between groups is the difference of each 1405

sample mean from the grand mean, because we want to know how each sample differs from that grand mean.1413

Now let us think about how many means we have, because that is going to determine our degrees of freedom.1419

The number of means we have is usually K, 1426

how many samples we have; and so the degrees of freedom between is going to be K - 1.1430

And the way you can think about this is: how many means do we have? As many means as groups; 1441

so if we have four groups it would be four means, and if we had three groups it would be three means.1448

And if we knew two of them and we knew the grand mean, we could actually figure out the third; and 1456

because of that, our degrees of freedom is K - 1, the number of means minus 1.1465

Okay, so what is the sum of squares between? 1471

Well, it is this whole idea of the difference of the sample means from the grand mean; and we can 1479

write that as the sample mean minus the grand mean, and we have a whole bunch of sample means.1484

So I am going to put my index i there.1489

And we are going to square that, because this is a sum of squares; and each mean's distance 1491

should count more if that sample has a lot of members (it should get more votes), so we are 1498

going to multiply that by n sub i, how many are in that sample.1504

And in order to figure out what i means, let us think about it: i is going to stand 1511

for each group, so this sigma is going to go from i = 1 through K, however many groups there are.1518

And then it is going to cycle through group 1, group 2, group 3, group 4; and this is the sum of squares between.1527
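
Putting those pieces together:

    SS_{between} = \sum_{i=1}^{K} n_i (\bar{x}_i - \bar{\bar{x}})^2, \qquad df_{between} = K - 1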

And this should look familiar, because it is actually the same thing as in the independent samples ANOVA.1536

So now we have to figure out how to find the other sum of squares, the new one: 1544

the sum of squares residual and the degrees of freedom residual; and the whole reason we want to find 1551

them is because we want to find the variance residual, the leftover variability.1555

It is any leftover spread within the groups that is not accounted for by within-subject variation.1562

Now, within subject might mean within each person, but it might mean within each hamster 1570

or each company that is being measured here repeatedly; so whatever your case is, 1578

whether animal or human or entity of some sort, that is considered your within-subject variability.1585

And those subjects are all slightly different from each other.1595

But that is not something we are actually interested in, so we want to take that out and keep the leftover variability.1598

And because it is the idea of leftover, we actually cannot find this sum of 1604

squares directly; we have to find the leftover.1614

And so the way we do that is take the total sum of squares and then subtract out the stuff we do 1617

not need, which is namely the sum of squares between as well as the sum of squares within subject, the variability within subject.1624

And so here we see that we are going to have to find the sum of squares total as well 1637

as the sum of squares within subject (we already knew we would have to find the between one), 1642

and that is how we can find the sum of squares residual: literally whatever is left over.1650

In the same way, to find the degrees of freedom residual, we have to know something about 1656

the other degrees of freedom in order to find this sort of whatever-is-left quantity.1661

And so in order to find the degrees of freedom residual, what we do is multiply together the 1667

degrees of freedom between times the degrees of freedom within subject; and when we do this, 1674

we are going to be able to find all the degrees of freedom that are left over.1683
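
So the two leftover quantities are:

    SS_{residual} = SS_{total} - (SS_{between} + SS_{subject}), \qquad df_{residual} = df_{between} \times df_{subject}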

Okay, so we realized that in order to find the sum of squares residual, we have to find all these other sums of squares; so here is the sum of squares within subject.1689

The way to think about this notion is that we are really talking about subject-level, or case-level, variation.1702

So each case differs a little bit from the other cases, for who knows what reason; but we 1716

can actually account for it here; it is not totally unexplained. We may not know why it exists, 1723

but we know that it exists, because the subjects are all slightly different from each other; 1729

and we know what it is, and we can calculate it.1734

Okay, so conceptually, you want to think about this as how far each subject's mean is away from the grand mean.1738

Remember, in repeated measures we are repeatedly measuring each subject or case; we are 1748

measuring them multiple times; so if I am a Facebook user, I will be contributing four different scores to this problem.1754

Now, what you could do is get a little mean just for me, right? 1763

The little mean of my four scores, and that is my subject mean.1770

So each subject has their own little mean, and we want to find the distance of those little means away from the grand mean.1774

So let us think: how many subject means do we have? 1783

We have n of them, one for each subject.1787

So that is our sample size.1800

So what is the degrees of freedom for within subjects? 1803

Well, that is going to be n - 1.1806

So what is the sum of squares for the subjects? 1808

Well, one of the things you have to do is figure out a way to talk about the subject-level mean.1814

So here I am just going to write a mean and put an index i on it for now; and this 1820

sigma will tell us what i is: i will go from 1 up to n, the sample size.1835

These are really the subject means, and I want to get the squared distance from each 1843

subject mean to the grand mean; and we should also take into 1853

account how many times the subject is being measured, and that is going to be k times.1860

How many samples are taken, how many measures are taken: this is repeated measures, so it is how many times the measure is repeated.1867

And the more times a subject participates, the more this variation will count.1878
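
In symbols, with \bar{x}_i standing for the i-th subject's mean over its k measures:

    SS_{subject} = \sum_{i=1}^{n} k (\bar{x}_i - \bar{\bar{x}})^2, \qquad df_{subject} = n - 1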

So there we have subject-level variation; we are really only finding it so that we can find SS 1889

residual, so we had better do it; and we also have to find the sum of squares total and the degrees of freedom total.1899

These are things we have gone over, but just to drive it home: remember, the reason we 1908

want to find this is just so that we can find the sum of squares residual.1913

So conceptually, this is just the total variation of all the data points away from the grand mean.1916

What is the total number of data points? 1922

That is going to be N total.1925

So every single data point is counted; and the way we find that is the sample size n times the 1928

number of samples we have; so if we had 30 people participating in four different measures, it is 1937

30 times 4; and the number of samples is called k, so N total is n times k.1944

So what is the degrees of freedom total? 1954

Well, it is going to be N total minus 1, or, the same exact numerical value, nk - 1; either way.1957

And so what is the sum of squares total? 1967

Well, we have already been through it; this is what we always start off with, at least conceptually: 1970

for every single data point (notice that there are no bars on it; these are not means, but literally every single data point), 1975

the distance from the grand mean, squared; and we can put nk here just to say: go and do this 1984

for every single data point, do not leave one behind.1994
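
In symbols:

    SS_{total} = \sum_{i=1}^{nk} (x_i - \bar{\bar{x}})^2, \qquad df_{total} = nk - 1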

So we have all of our different components; now let us put them together in this chart, so that you will know how they fit together.1997

Remember, the idea of the F is the variation we are interested in over the variation we cannot 2009

explain, cannot account for, do not know where it comes from; it is a mystery.2027

The formula for this is going to be F equals (remember, this is for repeated measures) the 2033

between-sample variability over the residual variability.2043

And in order to find that, we are going to need the between-sample variability.2051

The idea is always going to be the sample means' difference from the grand mean.2057

So basically, the centers of each sample, their distance away from the grand mean; and so the 2072

formula for that is going to be s squared between equals SS between over the df between; and 2084

we can find each of those component parts: SS between is going to be the sum (that is the sigma), the 2094

sum of all of my X bars minus the grand mean, that distance; and when I say all, I mean one at a time.2103

Each as an individual, one at a time; and this distance should count more if you have more people 2118

or more data points in your sample; and i does not go from 1 to n, it goes from 1 to K: I am 2125

going to do this for each sample, and K is my number of samples or number of groups.2134

So my degrees of freedom between is really going to be K - 1, the number of groups minus 1.2139

Okay, so now let us try to get the residual variability.2148

Now, the residual variability is that leftover within-groups, within-sample variability; and in order to 2155

get the leftover, the formula for this is going to be the variance residual.2169

Now, to get that, you take the residual sum of squares and divide by the residual degrees of 2181

freedom; and the residual sum of squares is literally going to be the leftover: 2190

SS total minus (SS subject plus SS between).2197

And my degrees of freedom residual is going to be a combination of the other degrees of 2209

freedom: the degrees of freedom subject times the degrees of freedom between, okay.2222

So we know that in order to find these... the total variability, let us start there; we know this one 2230

pretty well: all the data points in all of our samples away from the grand mean; and we actually do 2242

not need the variance here, and we do not need this variance either.2269

What we really need is the sum of squares total, and that is going to be: for each data point (no X 2274

bar or anything), we get the squared distance from the grand mean.2283

Now that we have that... we do not really need this, but we can find it anyway: the degrees of 2288

freedom total is going to be nk - 1, the total number of data points minus 1.2299

Now let us talk about within-subject variability: this is the spread of each case away from the grand 2308

mean; and when we talk about each case, each case can sort of be represented by a point 2324

estimate, its own mean; so each case's mean, that is how I want you to think of it.2331

Each case is represented by its own little mean, and that is why we are using those means to calculate the distance.2337

So that SS subject is going to be the distance of each subject-level mean away from the grand 2345

mean, squared; and in order to say "subject level," you have to put that n here so that it knows: do 2360

this for each subject, not for each data point (if we put a K there, it would 2368

mean: do this for each group); and we want it to count more if they participate in more measures, if the measures are repeated over and over again.2386

So we want to put in the number k, and that gives us our sum of squares for the 2387

subjects; and once we have those two, we can find this residual, along with the sum of squares between; and 2394

then we also need the degrees of freedom for within subjects, just because we need that to find the degrees of freedom residual.2400

This guy makes us jump through all these hoops.2411

So the degrees of freedom for subject, for subject-level variance, is going to be n - 1, the number of subjects minus 1.2414

Okay, so here is example 1: which is more prevalent, tagged, uploaded, mobile, or profile photos? These 2423

are all different kinds of photos, but one Facebook user (presumably one person) is 2434

sort of the linking factor of all four of those measures.2441

So what is the null hypothesis? 2448

Well, it is that all of these groups really come from the same population.2451

The reason I use this P notation is just for the different types of photos, and I will call them 1, 2, 3, and 4.2457

It also makes it easier for me to write my alternative hypothesis: all the mu sub P's are not equal; so they are not all equal.2472

For the significance level we can just set alpha equals .05 by convention; and because we 2494

are going to be using an F value, we do not have to determine whether it is one-tailed or two-tailed.2513

It is always one-tailed: the cutoff is on one side, and since the F distribution is skewed to the right, the cutoff is always going to be on 2517

the positive side; so let us draw our decision stage with our F distribution.2528

We know that it starts at zero and that alpha equals .05; so what is the critical F here? 2535

Well, remember, in order to find F we need to know the denominator's df as well as the numerator's df.2548

And here we know that the numerator df is going to be the degrees of freedom between groups, and that is K - 1.2557

There are 4 groups, so it is going to be 3; and the degrees of freedom residual is going to be the degrees of freedom between times the degrees of freedom subject.2570

So we are going to need to find the degrees of freedom subject, and the degrees of freedom subject is going to be n - 1.2587

Now let us look at our data set, in order to figure out how many we have in our sample.2597

So I have made it nice and pretty here: tagged photos, mobile uploads, uploaded photos, and 2602

profile photos; and as you look at this row, it has all of the data from one subject; so this 2609

person has zero photos of any kind; whereas let us look at this person.2619

This person has zero mobile uploads and zero profile photos, but they have 79 uploaded photos 2625

and 37 tagged photos; and so for each subject we can see that there is some variation there, but 2631

across the different samples we also see some variation.2639

So down here I put step one (they are all equal; they are not all equal), alpha equals .05; here is the 2643

decision stage: our K is 4 (4 groups, 4 samples); our degrees of freedom between is 4 - 1 (we have already 2655

done that, but I might as well fill it in); and the degrees of freedom for subject, the reason that is there, 2663

is so that we can find the degrees of freedom residual; and once we have that, then we can find our critical F.2670

For the degrees of freedom for subject, we should count how many subjects we actually have 2692

here; we can just count the rows (I just picked the profile photos column) minus 1; so we actually have 29 2707

cases, but our degrees of freedom for subject is 28.2719

Now, the degrees of freedom residual is those two degrees of freedom multiplied by each other, so 3 × 28, and 2724

that is going to be 84; and that is our denominator degrees of freedom.2735

So now we can find our critical F.2740

In order to do that, we use FINV: the probability is .05, our first degrees of freedom is the 2742

numerator one, and our second degrees of freedom is the denominator one; and our critical F is 2.71.2750
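
(The lesson does this with Excel's FINV. As a cross-check, here is a minimal Python sketch, assuming scipy is available; the upper-tail critical value is the 95th percentile of the F distribution:)

    from scipy.stats import f

    # Excel's FINV(0.05, 3, 84) is the upper-tail critical value,
    # i.e. the 95th percentile of the F(3, 84) distribution.
    critical_f = f.ppf(0.95, dfn=3, dfd=84)
    print(round(critical_f, 2))  # about 2.71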

So once we have that, we can now go on to figure out... from there, let us go on and calculate our sample test statistic.2767

So we will have to find the sample F statistic; before, I stated this step generically, because you 2777

might have to find a t statistic or other statistics; but in this case we know that, because we have an 2787

omnibus hypothesis, we need that F statistic; and we have to find the P value afterwards. So let us find the F statistic.2794

Go to your example again; this is example 1; let us put in all the different things you need.2803

So you need the variance between over the residual variance; so let us start off 2814

with the variance between; it is something we already know: we know it splits up into the sum of 2822

squares between and the degrees of freedom between.2827

We actually have the degrees of freedom between already, so let us just fill that in; in order to find 2830

the sum of squares between, you have to find the means for each of these groups.2834

We are also going to need to find out what the n is.2841

That is actually quite simple, because we know that it is 29 for each of these groups; so that makes life a little bit simpler.2845

Now let us find the averages for each of these samples: for the first sample (I believe this is tagged photos), the mean is 9.93; 2861

I believe this is mobile uploads, that is 12.45; for uploaded photos, the average is 68; and finally, for profile photos, the average is 1.5.2874

Okay, so now we are going to have to calculate the grand mean.2905

The grand mean is quite easy to do in Excel, because you just take all your data points, every single one, and you calculate that average.2909

The average is 23.2919

I am just going to copy and paste that here; what I did was put a pointer here, so that it would 2921

just point to that top value, because the grand mean shouldn't change; the grand mean is always the same.2929

Now that we have all of these values, we can find n times (X bar minus the grand mean) squared.2935

We can find that for each group; and then, when we add that up, we end up getting our sum of 2948

squares between, and we get this giant number, 82,700.2968

And so I am just going to put a pointer (=, pointing to that guy), and then I am going to find my variance between.2973

So my variance between is still quite large, about 27,600.2986

Okay, so now that we have that, we need to find the residual variance.2993

In order to find the residual variance, I know I am going to need to find all this other stuff that I did not necessarily plan on.3000

So one of the things I do need to find is my SS total, as well as my SS subject.3007

I am going to start with SS total, because although the idea is simple, in Excel it looks a little crazy, 3014

just because it takes up a lot of space: we are going to need to find this squared distance 3023

away from the grand mean for every single data point.3028

So here, all my data points are here.3033

Now I am going to need to find the squared distance of this guy away from the grand mean, and then add them all up.3040

What is helpful in Excel is to create separate columns and then add them up; so I am just 3055

going to save these for later; I have already put in the formulas here; this one is 3067

for the tagged photos; it is sort of my partial way to find SS total just for the tagged photos; and I am 3075

going to do it for the mobile photos, and for the uploaded photos, then for the profile photos, and then add them all together.3085

So these are sort of subtotals.3091

So what I need to find is this data point minus the grand mean, and I will just use this grand mean that I found down here.3094

But what I need to do is lock that down; I need to say: always use this grand mean, do not use any other one.3105

I put that in parentheses so that I can square it.3113

So here I am going to do that all the way down for tagged photos, and just take this across for mobile, 3118

uploaded, and profile photos; and that is the nice thing about Excel: it will give you all of these values very, very easily.3133

I am just going to zoom in on this for a second, just to show you what each of these is talking about.3144

So click on this one: this cell gives me this value minus my grand mean (which is locked down), 3152

squared; so I now have every single data point's squared distance away from the grand mean, and these are all those squared distances.3160

Now I need to add them all up.3171

So I put sum; and I am not just going to add up this column, I am literally going to add all of this up.3174

So our total sum of squares is 257,000.3184

So I am going to go down to my sum of squares total and just put a pointer here and say: that is it.3192

So how do I find my sum of squares for the subject-level variation? 3201

Well, for this I know I need to find the mean for every subject; then I need to find the distance 3212

between that mean and the grand mean, square that, and multiply it by how many groups I have.3219

The nice thing is that the number of groups is constant: it is always four for everybody; so let us go ahead and find the subject-level means.3226

So the subject means are going to be found by averaging one person's measures across all 4 samples; and 3235

so that guy's average is zero; just copy and paste that down; and if you want to check, this one takes the average of these four measures.3248

So this is subject-level variation, and it shows you that this guy has a lot fewer photos, period, than this guy.3259

This guy just has, on average, a lot more photos than that guy.3268

And this guy is sort of in the middle of those two.3273

Once we have these subject-level means, now we can find this idea: k times the squared difference 3276

for each subject; so I know my k is going to be 4, times my subject-level mean minus the 3286

grand mean (I will just use my already-calculated grand mean down here, and I need to lock 3302

that grand mean down, because that grand mean is never going to change), squared.3309

Once I have that, then I want to add them all up, in order to get my sum of squares for within-subject variation.3316

I will just put this little sum sign so that I know that this is not just another data point; it is a 3335

totally different thing: sum; and once I have that, it is 56,600, and I know my sum of squares within subject.3345

Once I know all those things, now I can finally calculate the sum of squares residual, because I have my ingredients.3360

I have my sum of squares total minus (the sum of squares subject plus the sum of squares 3369

between); I could obviously distribute out that negative sign, but I will just use the parentheses.3380

So here is my leftover sum of squares, whatever is left over, unaccounted for; and I already 3390

figured out my df residual; so here I am going to put my sum of squares residual divided by my 3399

degrees of freedom residual, and I get about 1,400.3410

So now we can finally, finally calculate our F, by taking the variance between and dividing it by the residual variance.3416

And there I get 19.69, which is quite a bit above the critical F of 2.71.3427

Now, once I have that, I can find my P value.3435

For my P value I use FDIST: I put in my F value, my numerator degrees of freedom, as 3441

well as my denominator degrees of freedom, and I get 9.3 × 10 to the negative 10th power; that 3456

means there are a lot of decimal places before you get to that 9, so it is a very, very small P value.3468
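
(All of these Excel steps can be mirrored in a few lines of code. Here is a minimal sketch, assuming a NumPy array called data with one row per subject and one column per photo type; the names data and repeated_measures_f are illustrative, not from the lesson. With the 29 x 4 photo data it should reproduce an F of about 19.69:)

    import numpy as np

    def repeated_measures_f(data):
        # data: n subjects (rows) x k repeated measures (columns)
        n, k = data.shape
        grand_mean = data.mean()
        # SS between: n * (sample mean - grand mean)^2, summed over samples
        ss_between = (n * (data.mean(axis=0) - grand_mean) ** 2).sum()
        # SS subject: k * (subject mean - grand mean)^2, summed over subjects
        ss_subject = (k * (data.mean(axis=1) - grand_mean) ** 2).sum()
        # SS total: every data point's squared distance from the grand mean
        ss_total = ((data - grand_mean) ** 2).sum()
        # SS residual is literally the leftover
        ss_residual = ss_total - (ss_between + ss_subject)
        df_between, df_subject = k - 1, n - 1
        df_residual = df_between * df_subject
        return (ss_between / df_between) / (ss_residual / df_residual)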

So what do we do? 3475

We reject the null.3478

Also remember that in an F test, all we do is reject the omnibus null hypothesis; that does not 3480

mean we know which groups are actually different from each other; so when you do reject the 3490

null after doing an F test, you want to follow up and do post hoc tests.3495

There are lots of different post hoc tests you might learn, such as Tukey post hoc tests or Bonferroni corrections; 3500

those all help us make the pairwise comparisons, to figure out which means are actually 3507

different from which other means; and you probably also want to find the effect size, and in an F test the 3515

effect size is not d or g; instead, it is eta squared.3524

So we would reject the null.3527

Example 2: a weight-loss boot camp is trying out three different exercise programs to help their clients shed some extra pounds.3537

All participants are assigned to teams of 4 people, and each week the entire team is weighed 3546

together, to see how many pounds they were able to take off.3552

The data shows their weekly weight loss as a team.3554

Were the exercise programs all equally effective in helping them lose weight? Note that all teams 3558

tried all three exercise regimes, but they received the treatments in random order.3564

So this is definitely a case where we have three different treatments.3569

Treatments 1, 2, and 3; and we have data points, which are going to be pounds lost.3574

How many pounds they were able to take off per week, pounds lost per week; but these are not independent samples.3581

They are actually linked to each other.3590

What's the link? 3592

It is the team of four that lost that weight; so this team lost this much under this exercise 3594

regime, this much under this exercise regime, and this much under this exercise regime.3602

Now, each team got these three exercise regimes in a different order.3608

Some teams got 3, 2, 1; so they have all been balanced in that way; and if you pull up your examples and go to example 2, you will see this data set.3612

So here are the different teams, or squads.3627

Here are the three different types of exercise programs, and the different orders in which 3630

they did these exercises; and each exercise was done for a week.3635

So let us think about this.3642

To begin with, we need a hypothesis; so step one is the null hypothesis: all are equal.3644

So all the mu's (exercise 1, exercise 2, exercise 3) are all equal.3660

The alternative hypothesis is that not all are equal.3667

Step 2 is our significance level; we can just set alpha equal to .05 once again; and because it is an 3674

omnibus hypothesis, we know we are going to do an F test, so it does not need to be two-tailed.3686

Step three is the decision stage: you can imagine that F distribution and color in that part; 3692

what is the critical F? 3703

Well, in order to find the critical F, we are going to need to find the df between as well as the df 3706

residual, because those are the numerator and the denominator degrees of freedom.3715

In order to find the df residual, we also need to find the df subject; and remember, here subject does not 3722

mean each individual person; subject really means case.3730

And each case here is a squad.3733

So it is how many squads there are, minus 1.3736

So count how many squads there are, and subtract 1.3741

So there are 11 degrees of freedom for subject.3746

For the degrees of freedom between, what we are going to need is the number of different samples, which is three, minus 1; so 3 - 1 is 2.3760

And so my df residual is the df between times the df subject, and that is 22; so let us find the critical F.3774

We need FINV: the probability that we need is .05, the degrees of freedom for the 3784

numerator is 2, the degrees of freedom for the denominator is 22; and our critical F is 3.44.3792

Step 4: here we are going to need the F statistic; and in order to find F, we need the variance between divided by the variance residual.3802

In order to find the variance between, we are going to need the SS between divided by the df between; 3823

we already have the df between, thankfully, so we do need the SS between.3833

And the concept of SS between is this whole idea of each sample's X bar, its distance away from 3838

the grand mean, squared; and then, depending on how many subjects you had in your sample, how 3849

many data points you had in your sample, it gets weighted more or less.3856

Now, the nice thing is that all of these have the same number of subjects.3860

But let us go ahead and try to do this.3864

So first we need the different samples: exercise 1, exercise 2, exercise 3; we need their n, 3869

and their n is going to be 12; there are 12 data points in each sample.3879

We also need each exercise regime's average weight loss, so we need X bar; and we also 3888

need the grand mean, because ultimately we are going to look for n times (X bar minus the grand 3901

mean) squared, in order to add all of those up.3908

So let us find the X bars, for exercise regime number 1.3912

Excel makes it nice and easy for us to just find all those averages very quickly; and 3918

then, once we have that, we can find the grand mean.3931

The grand mean is also very easy to find here.3938

We just want to select all the data points.3941

I think I selected one of them twice; be careful about that.3944

So make sure everybody is selected just one time; this is the average weight loss per week, 3950

regardless of which team you were on, and regardless of which exercise you did.3960

And now let us find n times (X bar minus the grand mean) squared, and let us do that for each exercise regime.3965

Once we have that done, we can find the sum, and the sum is 23.63.3983

So here in SS between I will put that number.3997

Once we have that, now we can actually find this, because we have already calculated the df between; that was not too hard.4006

Now we have to work on the variance residual; let me just add 4018

in a couple of rows here, just to give me a little more space: variance residual; in order to find the 4031

variance residual, I am going to need to find the SS residual divided by the df residual.4049

We already have the df residual, so we just need to find the SS residual; and in order to find that, I need the SS 4054

total minus (the SS between plus the SS subject).4062

So I already have my SS between, so I need to find the SS total and the SS for the subjects.4071

So the SS total is going to be: for every single exercise regime, for every single one of these data 4080

points, I need to find the distance away from the grand mean, square it, and add them all up; and that is going to be my SS total.4092

So for E1, here is my subtotal for SS total; for E2, my subtotal for SS total; for E3, my subtotal for SS total.4104

So that is X minus the grand mean (lock that grand mean down), squared; and make sure you do 4120

that for every single data point in E1; so check that last data point, and just go ahead and 4141

copy and paste that all the way down; let us just check this one: it is taking this 4151

value, subtracting the grand mean from it, and then squaring that distance.4157

So once I have this, I can sum them all up and get my SS total, my total sum of squared distances.4164

So I am just going to put a pointer here, so that I do not have to rewrite the number.4180

Once I have that, all I have left to find is the SS subject.4187

Now remember, for the SS subject, each subject has its own little mean, because we repeatedly took 4190

the measure; so we have to find the subject's mean, and then we have to get the distance 4195

between their mean and the grand mean, square that, and multiply it by the number of measures, k.4201

So let us do that here: first we need to find the subjects' X bars; that is going to be each squad's 4211

average weight loss; so some squads probably lost more weight than others; this is the average 4226

weight loss for each squad, and it looks like this squad lost a bit; so there is a little bit of variation 4242

in the subjects' success; and then we are going to look at k times (the subject's X bar minus the grand 4259

mean) squared; we already know k: k is going to be 3, times the subject's X bar minus the grand 4272

mean (I am just going to use the one we have already calculated down here, and of course lock that down; so copy and paste this), squared.4284

So copy and paste that all the way down; and I can find the sum here, and this is going to be my sum of squares for subject.4298

That is the sum of a bunch of squares.4312

So that is 34-something.4318

I am just going to put a pointer there, so I do not have to retype it, but I can see it nice and clearly right here.4321

So now I have everything I need in order to find the SS residual: I need the SS total minus (my sum of squares between plus my sum of squares subject).4332

Once I have that, now I can find my residual variance: the residual sum of squares divided by its degrees of freedom; okay, 4344

so here it looks like my residual variance is much smaller than my between-sample variance, and 4356

so I can predict that my F value will be pretty big: 11-point-something divided by 2-point-4372

something, and that gives me 5.219, and that is a little bit bigger than my critical F.4381

So I find my P value with FDIST: I put in my F, my numerator degrees of freedom, and my 4391

denominator degrees of freedom, and I find .01; that seems pretty small, smaller than 4403

.05, so I am going to be rejecting my null.4414
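
(FDIST is the Excel function; in a Python sketch, assuming scipy, the same upper-tail probability is the F distribution's survival function:)

    from scipy.stats import f

    # Excel's FDIST(F, df1, df2) = P(F(df1, df2) > F), the upper-tail probability
    p_value = f.sf(5.219, dfn=2, dfd=22)
    print(round(p_value, 2))  # about 0.01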

So step five, down here: reject the null.4418

And we know that once you reject the null, you are also going to need to do post hoc tests, as well as find eta squared.4425

So that brings us to example 3: what is the problem with a bunch of tiny t-tests? 4432

Well, "with so many t-tests, the probability of type 1 error increases, increasing the cutoff alpha": actually, 4445

we are not increasing the cutoff; we are keeping it at .05; the type 1 error increases because 4458

we have the possibility of rejecting the null multiple times.4466

"With so many t-tests, the probability of type 1 error increases because we may be rejecting more null hypotheses": 4470

this is actually a correct answer, but we might not be done yet.4478

"With so many paired samples t-tests, we have a better estimate of s because we have been estimating s several times."4484

"With so many paired samples t-tests, we have a poorer estimate of s, because we are not using all of 4491

the data to estimate one s; in fact, we are just using subsets of the data to estimate s several times": that is a good answer.4500

So that is it for repeated measures ANOVA; thanks for using educator.com.4508

Hi, welcome to educator.com. 0000

We are going to talk about the chi-square goodness of fit test. 0002

So first, we are going to start with a bigger review of where the chi-square test actually fits in 0005

amongst all the different inferential statistics we have been learning so far; and then we are going to talk 0012

about a new kind of hypothesis testing: the goodness-of-fit hypothesis test. 0018

It is going to be similar to hypothesis testing as we have been doing it so far, but there is a slightly different logic behind it.0023

And because it is a slightly different logic, there is a new null hypothesis as well as a new alternative hypothesis. 0029

Then we are going to introduce the chi-square distribution and the chi-square statistic. 0037

And then we are going to talk about the conditions for the chi-square test: when do we actually use it? 0044

So where does the chi-square test belong? 0049

It has been a while since we have looked at this, if you are going in order with the videos; but I think it is0054

pretty good to stop right now and sort of think: where have we come from? 0059

Where are we now? 0063

So the first thing we want to think about is the different independent variables that we have been able to look at. 0065

We have been able to look at independent variables, the predictor variables, that are either categorical or continuous. 0072

When the IV is categorical, you have groups, right? 0084

Or different samples, right? 0095

When the IV is continuous, you do not have different groups; you have different levels that predict something. 0098

So just to give you an idea of a categorical IV: that would be something like an experimental group versus a 0107

control group; or another categorical IV may be someone who gets a drug versus someone who 0116

gets the placebo, the group that gets the drug versus the group that gets the placebo; and an example of a 0127

continuous IV might be looking at how much you study predicting your score on a test; so how much you 0132

study would be a continuous IV. 0140

So that is one of the dimensions that we need to know: is your IV categorical or continuous? 0143

You also need to know whether the DV is categorical or continuous; the DV is the thing that we are 0150

interested in measuring at the end of the day, the thing that we want to know changed; this is0160

the thing we want to predict; and so far, here is where we have come. 0167

At the very beginning, we looked at continuous-to-continuous types of tests and measures, and those were 0177

regression (linear regression) as well as correlation. 0187

Remember r; and regression was that stuff about y equals b sub 0 plus b sub 1 times x; so that was 0193

regression and correlation, way back in the day. 0210

We have been covering a lot of this quadrant lately, looking at t-tests and ANOVAs, right?0215

One important thing to know is that t-tests and ANOVAs are both hypothesis tests; so far we have not 0224

learned hypothesis testing with regression and correlation. 0238

A lot of inferential statistics courses in college do not cover hypothesis testing of regression until you get to more advanced levels of statistics. 0241

So what do ANOVAs and t-tests have in common? 0255

Well, they have in common that they both have a categorical IV and a continuous DV. 0261

The IV is categorical, and you only have one, one IV. 0269

And your DV is continuous. 0277

So that is sort of what they have in common; what is different about them? 0282

Well, the difference is that the IV in t-tests has two levels, and only two levels; so there are only two groups or two samples. 0287

In ANOVAs we can test for more than two samples; we can do that for 3, 4, 5 samples. 0297

So that IV has more than two levels; and that is where we have been spending a lot of our time. 0302

For the most part, continuous DVs are really important because they tell us a lot; they tell us the fine ways0312

that the data could actually be different.0320

So it is more rare that you will use a categorical dependent variable; that is not going to0327

be as informative to us; but it is still possible, and that is where the chi-square is going to come in. 0334

The chi-square is going to come right in this quadrant, where we have a categorical IV and also a categorical DV; so 0340

for instance, we might want to see something like: if you are given a particular drug or the placebo, do you 0347

feel like you are getting better, yes or no? 0357

So that is a categorical DV; it is not like a score where we can find a mean; and so this is where the chi-square tests come in. 0360

And there are going to be 2 chi-square tests that we are going to look at. 0375

The first one we are going to cover today, and it is called goodness of fit. 0379

The next one is in the next lesson, and it is called the test of homogeneity. 0382

They are both chi-square tests. 0386

The other way you will see this written is "chi-squared"; so sometimes, do not think: oh, what is this doing here? 0387

When it has this little curvy part here, it is chi, the Greek letter chi. Finally, there is a test that0398

is rarely covered in introductory inferential statistics, but more advanced levels of statistics do cover it, and it is called 0407

the logistic test; the logistic test takes you from a continuous IV to a categorical DV. 0415

But that is a rare design in conducting science; it is not as informative as continuous-to-continuous or categorical-to-continuous. 0424

Alright, so we are going to spend our time right in here. 0436

So there is a new twist on hypothesis testing; it is not totally different, it is still very similar, but there is a subtle difference. 0441

Today we are going to start off with the chi-square goodness-of-fit test. 0454

Basically, let us think about hypothesis testing in general. 0457

In general, you want to determine whether a sample is very different from expected results; that is the big idea of hypothesis testing, 0462

and expected results come from your hypothesized population.0470

If your sample is very different, we usually determine that with some sort of test statistic, looking0474

at how far it is on the test statistic's distribution; and we look at whether it is past that alpha0481

cutoff, the critical test statistic; and then we say: oh, this sample is so different from what would be 0489

expected, given that the null hypothesis is true, that we are going to reject the null hypothesis. 0496

That is usually hypothesis testing. The chi-square test still takes that idea of looking at whether a sample is very 0504

different from expected results; but the question is: how are we going to compare these two things? 0511

We are not going to compare means anymore; we are not going to look at the distance between means, 0517

nor are we going to look at the proportion of variances; that is not what we are going to look at either. 0521

Instead, we are going to determine whether the sample proportions for some category are very different 0527

from the hypothesized population proportions. 0539

And the question will be: how do we determine "very different"? Here is what I mean by determining 0542

whether the sample proportions are different from the hypothesized population proportions.0549

So here I am just going to draw for you, sort of schematically, what the hypothesized population proportions might look like. 0554

This is just sort of the idea: you might think of the population as being like this; and in the 0569

population, you might see a proportion of one third being blue, one third being red, and one third being yellow. 0577

Now, already it is hard to think about; you can already sort of see: well, we cannot get the average of0588

blue, red, and yellow (what would be the average of that?), and how would you find the variability of 0597

that? So already we are starting to see why you cannot use t-tests or ANOVAs: if you cannot find the mean or 0605

variance, you cannot use those tests. So this is what our hypothesized population looks like; and when we 0613

get a sample, a little sample from that population, we want to know whether our sample 0622

proportions are very different from the hypothesized proportions or not. So let us say in our sample 0631

we get mostly blue, a little bit of red, a little bit of yellow; let us say 60% blue, 20% red, 20% yellow. 0637

Are those proportions different enough from our hypothesized proportions?0650

Another sample we might get is, you know, half blue and half red and no yellow; is that really different from our hypothesized proportions? 0655

Another sample we might get might be only about 1/10 blue, and then 40% red, and then the other half will be yellow. 0674

So for something like that, we want to say whether it is really different from these hypothesized population 0694

proportions; and that is what our new goal is. 0700

How different are these proportions from those proportions? And then the question becomes: how do we 0706

determine whether something is very different? 0713

Is this very different, or just different? 0717

How do we determine "very different"? That is going to be the key question here. 0724

And that is why we are going to need the chi-square statistic and the chi-square distribution. 0728

So we are changing our hypotheses a little bit; now the null hypothesis is really about proportions, and here is what we are talking about. 0733

The null hypothesis now is about the proportions of the real population, the one we do not know: 0749

will this population be like the predicted, theorized proportions? So here we are asking: is this unknown0756

population like our known population? And that should sound familiar, as it is sort of the fundamental basis of inferential statistics. 0772

So that is our new null hypothesis. 0782

That the proportions in the population will be like the predicted population proportions; they will be the same. 0785

Remember, sameness is always the hallmark of the null hypothesis. Alternatively, you want to say that at least 0798

one of the proportions in the population will be different than predicted. So going back to our example: if our 0807

hypothesized population is something like one third, one third, one third, maybe what we 0816

will find is something like this: in our sample we have one third blue, but then some smaller proportion, like 15%, red, and the rest being yellow. 0830

Now, the one third should match up. 0856

The one third matches up; but what about these other two? 0860

And so in the alternative hypothesis, at least one proportion in the population will be different from the predicted proportion; 0864

there just has to be one guy that is different. 0875

So just to give you an example, let us turn this problem into a null hypothesis and an alternative hypothesis.0878

Here it says: according to early polls, candidate A was supposed to win 63% of the votes, and candidate B was supposed to win 37%. 0886

When the votes were counted, candidate A won 340 votes while B won 166 votes. So here, just to give you that 0898

picture again: in the null hypothesis population, candidate A (I will color A in blue) should have 0908

won 63% of the vote, and candidate B (I will color B in red) should have won 37% of the vote. So what would be our null hypothesis? 0918

Our null hypothesis would be that our unknown population will be like this prediction: the proportions of my unknown population0933

will be the same as our predicted population's proportions. 0945

So here we might see something like: A's proportion of the actual, real votes should be like 0949

the predicted population, and B's proportion of votes should be like the predicted population. 0982

So let us say: A's real proportion of votes should be like this, and so should B's; B should be like this. 1009

The other way we could say that is that the real proportion of votes should be like 1017

the predicted proportion of votes, and then you could just say that for every single category, for both A and B. 1025

So what would be the alternative version of this? 1031

The alternative would say: at least one of the proportions, one of the categories, either A or B, one of those 1035

proportions, will be different from the hypothesized proportion. 1043

And in fact, in this example, if one of them is different, the other will be different too, because since we only1048

have two categories, if we make one really different, then the other one will automatically change. 1056

But later on we might see examples with 3, 4, 5 categories, and in those cases this will make more sense. 1061

Okay, so now let us talk about how to actually find out whether our proportions are really off or not. Are our proportions statistical outliers, are they deviant, are they significant, do they stand out? That is what we want to know. In order to do that we have to use a measure called the chi-square statistic. Instead of the t statistic, which looks at a distance away in terms of standard errors, or the F statistic, which looks at the proportion of the variance we are interested in over the variance we cannot explain, the chi-square does something different. It looks at expected values: what would we expect, and what do we actually observe? The chi-square symbol looks like an uppercase X, but be careful: it is a little different from a regular letter X, usually a little more curvy, to let you know it is chi-square.

So the chi-square is really interested in the difference between what we observe, the actual observed frequencies, and the expected frequencies. We are looking at observed versus expected: the observed is what we see in our sample, and the expected is what we would predict given our hypothesized population, so that is the predicted-population part. We are interested in the difference between those two frequencies.

Now, although you could use proportions as well, you can only do that if you have a constant number of items, so you are probably safer to go with frequencies, because frequencies are essentially weighted proportions.

So we are interested in this difference, but remember, when we look at this difference, sometimes it can be positive and sometimes negative, so what we do here, as usual in statistics, is square the whole thing. We also want to know about this difference as a proportion of what was expected, and we want to do this for every category: the index i goes from 1 to the number of categories, and there is actually an i on everything. What this is saying is that for each category, each proportion you are looking at, you compute this term; in our toy example with red, blue, and yellow, we would do this for blue, for red, and for yellow. The categories really speak to what the proportions are made of. Here we have three categories, so we would do this three times and add those terms up, and to do that we eventually need to find the observed frequency and the expected frequency for each category.

Now, in the example with the voting for candidates A and B, one of the things I hope you noticed was that the observed frequencies were given as numbers of votes, how many people voted, but the expected frequencies came from the hypothesized population, which was given as percentages. You cannot subtract votes from percentages; you have to translate them both into the same thing, and here it is helpful to change the expected percentages into expected frequencies. There is also going to be another reason for changing into expected frequencies instead of changing the observed frequencies into observed proportions, and I will get to that a little bit later.

So here is how I want you to think of this: the chi-square is really the squared difference between observed and expected frequencies, as a proportion of the expected frequency, summed over all the categories. Once you have that, you have your chi-square value.
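Written out as a formula (with O_i the observed frequency and E_i the expected frequency in category i, and k the number of categories, matching the description above):

    \chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}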

Now let us think about this chi-square value. If the difference is very large, that is, the observed frequencies are just very different from the expected ones, you are going to have a very large chi-square. If the difference is very small, if they are really close to each other, then your chi-square will be very small. So chi-square is giving us a measure of how far apart the observed and expected frequencies are. I also want you to see that the chi-square cannot be negative: first, because we square the difference, the numerator cannot be negative; and the expected frequencies cannot be negative either, because we are counting up how many things we observed, so the denominator cannot be negative. So the whole thing cannot be negative, and already we can see in our minds that the chi-square distribution will be positive and positively skewed, because it stops at zero; there is a wall at zero.

Okay, so now let us actually talk about and draw the chi-square distribution. Imagine having some sort of data set and sampling from it over and over again: you take a sample, calculate the chi-square statistic, and plot it. Then you put the sample back in, take another sample, compute the chi-square, plot it again, and do that over and over and over again. You will never get a value below zero, and you will sometimes get values way higher than zero, but for the most part they will be clustered near zero. So you will get a skewed distribution, and indeed, the chi-square distribution is a skewed distribution.

Now, when we look at this, you might think: hey, that looks sort of like the F distribution. And you are right: in overall shape it looks just like the F distribution, and in a lot of ways we can apply the reasoning from the F distribution directly to the chi-square distribution. For instance, in the chi-square distribution our alpha is automatically one-tailed; it is only on one side. So when we say something like alpha = .05, we mean that we will reject the null when we get a chi-square value somewhere out in that tail, but we will fail to reject if our sample's chi-square value falls below that boundary.

Now, the chi-square distribution, like the F and t distributions, is a family of distributions, not just one distribution; the only one that is just one distribution is the normal distribution. The chi-square distribution again depends on degrees of freedom, and the degrees of freedom it depends on is the number of categories minus 1. So if you have a lot of categories, the chi-square distribution will look one way, and if you have a small number of categories, like 2, it will look different.

So let us talk about what alpha means here. The alpha here is the set significance level; we use it as the boundary, so that if the chi-square from our sample is bigger than this boundary, we reject the null. What is the difference from the p value? The p value is a probability, and it means the same thing here as in other hypothesis tests: it is the probability of getting a chi-square value larger, more extreme, than the one from our sample (and here there is only one kind of extreme, positive and larger), but under a condition. Remember, in this world, which hypothesis is true? The null hypothesis is true. So assuming the null hypothesis were true, the p value is the probability of getting such an extreme chi-square value, one that large or larger; that is all we need. In that way the p value comes from our data, while the alpha is not from our data; it is just something we set as the cutoff.
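The lessons do all of this in Excel, but just as a minimal sketch of the logic in Python (assuming scipy is available; the 3.817 is an illustrative sample value that comes up in Example 1 below):

    from scipy.stats import chi2

    alpha = 0.05          # the cutoff we set before seeing the data
    df = 1                # number of categories - 1
    sample_chi_sq = 3.817

    # Rule 1: compare the statistic to the critical chi-square
    critical = chi2.ppf(1 - alpha, df)   # Excel: CHIINV(0.05, 1) = 3.84
    print(sample_chi_sq > critical)      # True would mean reject the null

    # Rule 2: compare the p value (computed from the data) to alpha
    p = chi2.sf(sample_chi_sq, df)       # Excel: CHIDIST(3.817, 1)
    print(p < alpha)                     # the two rules always agree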

So there are some conditions we need to check before we use the chi-square. We cannot just always use it; there are conditions that have to be met. One condition is this: each outcome in the population falls into exactly one of a fixed number of categories. Every time you have a case from the population, say we are drawing votes, each vote has to fall into one of a fixed number of categories: if there are two candidates, it is always two candidates for every single voter, so we cannot compare voters who had two candidates with voters who had three. These also have to be mutually exclusive categories: one vote cannot go to two candidates at once, so you either vote for A or vote for B. And you cannot opt out either; every outcome has to fall into one of the fixed number of categories set ahead of time.

The numbering on the slide is slightly off here, but the second condition that must be met is that you must have a random sample from your population; that is just like all other kinds of hypothesis testing.

Number 3: once you compute all the expected frequencies in order to compute your chi-square, each cell needs to have an expected frequency of five or greater. Here is why: you need a big enough sample. If your sample is too small, you get expected frequencies of less than five. You also need big enough proportions: say one candidate is predicted to win 99.999% of the votes and the other candidate only .001%, and you only have five people in your sample; then the expected frequency for the second candidate is nowhere near five. So you need big enough proportions too, and these two things balance each other out: if you have a large enough sample, your proportions can be smaller, and if you have large enough proportions, your sample can be smaller.

The final point is not really a condition; it is just something I want you to know about this test, the chi-square goodness of fit test, which is what we have been talking about so far. This test actually applies to more than two categories: you do not have to have just 2 categories, you can have 3 or 4 or 5 or 6, but they do need to be mutually exclusive, and each outcome in the population must be able to fall into one of them. So those are the conditions.

So now let us move on to some examples. The first example is the problem we already looked at: according to early polls, candidate A was supposed to win 63% of the vote and B was supposed to win 37%. When the votes are counted, A won 340 votes while B won 166 votes. One of the things I like to do, just to help myself, is to write the null hypothesis as a sentence: the proportions of votes (that is my population) should be like the predicted proportions. The alternative is that at least one of the proportions of votes will not be like the predicted population. What I also like to do is draw this out for myself: I draw the predicted population, coloring candidate A in blue, about 63%, and candidate B in red, 37%. Eventually I want to know whether this is reflected in my actual votes. The significance level we can set at .05 just out of convention, and we know it has to be one-tailed because this is definitely going to be a chi-square test; and we know it is a chi-square test because it is about expected proportions.

So now let us set our decision stage. For the decision stage it is helpful to draw the chi-square distribution and label it: here is alpha, our rejection region of .05. Now it would be nice to know our critical chi-square, and to find it we need degrees of freedom. Degrees of freedom is the number of categories minus 1, in this case 2 - 1, so 1 degree of freedom. That is because if you know, say, that candidate B was supposed to win 37% of the votes, you can figure out candidate A's proportion yourself; you do not need me to tell you. Candidate A's proportion cannot vary freely once you know the other one, and that is why it is the number of categories minus 1.

Now that we have that, it might be useful to look in the back of your book, or use the Excel functions, to find our critical chi-square. Just as there are TDIST and TINV, and FDIST and FINV, there are CHIDIST and CHIINV. Right now we need CHIINV: we put in the probability, .05, and the degrees of freedom, 1, and that gives us our critical chi-square, 3.84.

So that is the boundary we are looking for: 3.84. If we get anything more extreme, more positive, than 3.84, we are going to reject our null hypothesis.

Now that our decision stage is set, it is helpful to actually work with our sample. When we talk about our actual sample (I should have left myself some room), here is what we ended up having. We already have the observed frequencies, so I will write a column for observed: for candidate A we observed 340 votes, and for candidate B, 166 votes. One thing that helps is knowing the total number of votes, which is 340 + 166, and that is 506. So 506 people actually voted, and down here I will write the total, 506.

Now the question is: what should our expected frequencies have been? Here I will write a column for expected, and I know that the expected proportion for A is 63%. What does that mean in terms of the total number of people who voted? Here is our little sample of 506 people; that is our 100%. We should expect 63% of 506 to have voted for A, and how do we find that? We multiply .63 by 506 to find out how many votes that little blue chunk is. If we multiplied 506 by 1 we would get 506, so if we multiply by a smaller proportion we get just that chunk: .63 x 506 = 318.78. Actually, let me draw this little table right in here, because it can help us find our chi-square much more quickly: the observed frequencies, 340 and 166, with the expected frequencies next to them.

So what is the expected frequency for B? To find that little bit we multiply .37 x 506, and that is 187.22. If you add up this entire expected column, you should get roughly the same total. When you do these by hand, you sometimes might not get exactly the same number; it might be off by a little bit because of rounding error, if you round to the nearest tenth or the nearest integer. You might get a little rounding error here, but you should not be off by much, so this is one way to check that what you did was right. Once we have this, let me just copy these down right here, 318.78 and 187.22, with the total of 506. One thing we see is that the expected value for A is a little bit lower than what we observed, and the expected value for B is a little bit higher. But is this difference in proportions significant? Is it standing out enough? To find out, we need to find the chi-square, the sample chi-square.

Now, we have completely run out of room here, but I will just write the chi-square formula up here: the chi-square is the sum, over all the categories, of the observed frequency minus the expected frequency, squared, as a proportion of the expected frequency. So what I am going to do is calculate this for each category, A and B, and then add them up. Right here I will make a column called (O - E)^2 all over E, compute it for A and for B, and then sum them. So: observed minus expected, squared, all divided by expected. Here I get this term for A, I copy and paste it down for B, and then I sum them up and get 3.817.

We are really close, but no cigar: our sample chi-square is just a smidge smaller than our critical chi-square, so we are not rejecting the null; we fail to reject the null. Let us also find the p value. To find the p value you can use CHIDIST, or alternatively look up the chi-square distribution table in the back of your book; it should be behind your normal, t, and F tables, usually in that order, maybe a slightly different order. Our degrees of freedom remain the same, 1, and our p value is just over .05: if we round, .051. So because of that we are not going to reject the null, and we are going to say the proportions of votes are roughly similar to the predicted proportions. Well, they are not significantly different, at least; they are not necessarily super similar, we cannot make a decision about that, but we can say they are not extremely different.
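To double-check this example outside of Excel, here is a minimal Python sketch (assuming scipy; the numbers match the hand computation above):

    from scipy.stats import chi2

    observed = [340, 166]                    # votes for A and B
    total = sum(observed)                    # 506
    expected = [0.63 * total, 0.37 * total]  # 318.78 and 187.22

    # chi-square = sum over categories of (O - E)^2 / E
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = chi2.sf(chi_sq, df=1)                # df = categories - 1

    print(round(chi_sq, 3), round(p, 3))     # 3.817 0.051 -> fail to reject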

Okay, example 2. A study asked whether college students could tell dog food apart from expensive liver pâté, liverwurst, and Spam, all blended to the same consistency, chilled, and garnished with herbs and a lemon wedge, just to make it pretty. Students were asked to identify which one was dog food. Researchers wanted to test the probability model where the students are randomly guessing; how would they cast their hypothesized model? See the download, which shows how many students picked each item to be the dog food. So the college students had a bunch of different choices (dog food, liver pâté, liverwurst, and Spam), and they needed to identify which was dog food; it is sort of like a multiple-choice question. If you open Example 2 in the download listed below, you will see the number of students who selected each particular item as dog food.

Now be careful, because remember, you might get this problem on a test without being told it is a chi-square problem. Some people might immediately think, I will find the mean, and just go ahead and find the mean. But if you do find the mean, ask yourself: what does this mean represent? What is the idea or the concept? If we averaged these counts, we would find the average number of students that selected any one of these items as dog food, and that is a mean that does not make any sense, right? So before you go ahead and find the mean, ask yourself whether the mean is actually meaningful. Here we know it is chi-square because the students are choosing something, and it is a categorical choice. They are not giving an answer like 20 inches, or 50 degrees, or "I got 10 questions correct"; they are just saying "that one is dog food," and they have five different choices and have chosen one of them as dog food. So out of five choices, a probability model where they are just guessing would mean that 20% of the time they should pick pâté 1 to be the dog food, 20% of the time Spam, 20% of the time the actual dog food, and so on and so forth.

So let us turn that probability model, our hypothesized population, into a null hypothesis; that is step one. The null hypothesis is the idea that the data will fit this picture: here is the population, out of 100%, divided into five choices. My drawing is just slightly uneven; it helps to draw this as well as you can, since it will help you reason. The students should have an equal chance of guessing any one of these; there are two liver pâtés, which is why there are 5 choices. So: liver pâté 1, then Spam, then the actual dog food, then pâté 2, then liverwurst, just as they appear in the data set. These are the five choices, and we are saying: look, if the students are just guessing, each should have a 20% probability. Is that the right proportion for this sample; is the sample going to roughly match it, or be very different? The alternative is that at least one of the real proportions is different from predicted.

Once we have that, we can set our alpha to .05. For our decision stage, we can draw the chi-square distribution; our degrees of freedom, now that we have five categories, is 5 - 1, which equals 4. That is because once we know four of the proportions, we can figure out the fifth one just from knowing the other four; that last one is no longer free to vary, it does not have freedom anymore.

So what is our critical chi-square? Well, if you want, pull up your Excel data; here I am going to start off with step three. In step three we need the critical chi-square, and to find it we can use CHIINV, putting in the probability we are interested in and our degrees of freedom, which is 4. Our critical chi-square is 9.49. Notice that as the degrees of freedom go up, the chi-square distribution gets fatter, more variable, and because of that we need a more extreme chi-square value. That is different from t distributions or F distributions: those distributions got sharper when we increased the degrees of freedom, but chi-square distributions go the opposite way; they get more variable as the degrees of freedom go up.
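You can see this directly by computing the .05 cutoff for increasing degrees of freedom; a quick Python sketch (assuming scipy):

    from scipy.stats import chi2

    # critical chi-square at alpha = .05 for df = 1..5 (Excel: CHIINV(0.05, df))
    for df in range(1, 6):
        print(df, round(chi2.ppf(0.95, df), 2))
    # prints 3.84, 5.99, 7.81, 9.49, 11.07 -> the cutoff grows with df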

Once we have this, we can start working on our actual data, our actual sample. Step four is to find the sample chi-square, and to do that it helps to draw out the table. The table might look something like this: I will copy the types of food down here, that is the category, and next to it we have our observed frequencies, the actual number of students who picked that item as the dog food. Here we see, for instance, that 1 student picked pâté 1 to be the dog food, while 15 students picked liverwurst. What are the expected frequencies? Well, we know the expected proportions are going to be .2 all the way down: 20%, 20%, 20%, 20%, 20%. Here I will also total the observed column, and I see that 34 students were asked this question. Our expected frequencies should add up to about 34, but our expected proportions add up to one.

And that is why we cannot directly compare these two columns; they are not in the same sort of currency yet. You have to change the proportions into frequencies. How do we do that? Well, imagine all 34 students here and take 20% of them: how many students is that? That is 0.2 x 34, each proportion times 34. (I am going to lock down that 34 in the formula, because the total does not change.) So this is what we should expect: if the students were indeed guessing, these are the expected frequencies we should see, and if I move that over here, we see that this column also adds up to 34.

Once we have that, we can compute our actual chi-square: remember, the observed frequency minus the expected frequency, squared, divided by the expected frequency, as a proportion of expected. I take that down each row and then add them up, and here I get the chi-square statistic for my sample: 16.29. That is a larger, more extreme chi-square than my critical chi-square. Let us also find the p value: to find it I can use CHIDIST, putting in my chi-square and my degrees of freedom, which is 4. That gives .003, which is certainly smaller than .05, and so in step five, we reject the null.
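As a quick check of those last two numbers in Python (assuming scipy; the per-item observed counts live in the download, so only the reported statistic is used here):

    from scipy.stats import chi2

    chi_sq = 16.29   # the sample chi-square computed in the spreadsheet
    df = 4           # 5 categories - 1

    print(round(chi2.sf(chi_sq, df), 3))  # 0.003 -> same as CHIDIST(16.29, 4)
    print(chi_sq > chi2.ppf(0.95, df))    # True -> past the 9.49 cutoff, reject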

Now I just want to make a comment here. Notice that after we do the chi-square, although we reject the null, just like in the ANOVA we do not actually know which of the categories is the one that is really off. Here we can sort of see which one seems to be the most off, but we are just eyeballing it, not using actual statistical principles. Once you reject the null there are post hoc tests you could do, but we are not going to cover those here. So it seems that students are not randomly guessing; they actually have a preference for something as being dog food. My guess is liverwurst.

Example 3: which of these statements describe properties of the chi-square goodness of fit test?

"If you switch the order of categories, the value of the test statistic does not change." That is actually true: it does not matter whether candidate A gets added before candidate B; addition is totally order-insensitive. You could add A then B, or B then A; pâté then liverwurst then dog food, or dog food then liverwurst then pâté; it does not matter. So this is a true property.

"Observed frequencies are always whole numbers." That is also true, because to observe a frequency you are actually counting how many members a category has, and counts are made up of whole numbers.

"Expected frequencies are always whole numbers." That is actually not true: expected frequencies are predicted frequencies. It is not that on any one occasion you will literally see a fractional number of students saying that liverwurst is dog food; it is what you would predict on average given a certain proportion. Expected frequencies do not have to be whole numbers, because they are theoretical; they are not things we counted up in real life.

"A high value of chi-square indicates a high level of agreement between observed frequencies and expected frequencies." If you think about the chi-square statistic, this is actually the opposite of the real case. If we had a high level of agreement, the numerator would be very small, and because the numerator is small, the chi-square would also be small. A high value of chi-square actually means the observed and expected are quite far apart, so this statement is also wrong; it is the opposite.

So that is it for the chi-square goodness of fit test; join us next time on educator.com for the chi-square test of homogeneity.

Hi, welcome to educator.com. We are going to talk about the chi-square test of homogeneity. Previously we talked about the chi-square goodness of fit test; now we are going to contrast that with this new test. It is still a chi-square test, but it is a test of homogeneity, and we are going to try to figure out when to use which test. With this test we are testing a new idea: we are not testing goodness of fit, we are actually testing homogeneity, similarity. We have slightly different null and alternative hypotheses, and we are going to talk about how those have changed. Then we are going to go over the chi-square statistic; finding the expected values is also going to be a little bit different in the test of homogeneity. Finally, we are going to go through the chi-square distribution, as well as degrees of freedom and the conditions for the test of homogeneity: when you can actually conduct this test, statistically speaking.

Okay, so the first thing: what is the difference between the test of homogeneity and the test of goodness of fit? Well, in goodness of fit hypothesis testing we wanted to determine whether sample proportions were very different from hypothesized population proportions. One way to think about this is that you have one sample and you are comparing it to some hypothetical population. That is why it is called goodness of fit: it is about how well those two things fit together, how well the sample fits the hypothesized proportions. In the test of homogeneity (homogeneous means similar, made up of the same stuff), we want to determine whether two populations that are sorted into categories share the same proportions or not. And here you could also substitute the word "sample," because ultimately we are using each sample as a proxy for its population. So here we have two populations, and we want to know whether those two populations are similar in their proportions or not. We are not comparing them to some hypothesized population; we are comparing them to each other.

And really, you can think of their relationship by using an analogy from the one-sample t-test to the independent-samples t-test. In the one-sample t-test we had one sample and we compared it to the null hypothesis, right? That was when we had null hypotheses such as mu = 0, or mu = 200, or mu = -5. In the independent-samples t-test we had two samples, and we wanted to know how similar or how different they were from each other; our null hypothesis changed to something like mu sub (x-bar minus y-bar) = 0: they are either made up of the same mean or of different means. In a similar way, the goodness of fit chi-square is really asking whether the proportions in my sample are similar to the proportions in a hypothesized population; that is what the null hypothesis compares. In the test of homogeneity we have two samples that come from two unknown populations, and we want to know whether these have similar proportions to each other; that is going to be our null hypothesis, that they have the same proportions, versus different ones. So I hope you can see that goodness of fit and homogeneity are ideas we have looked at before, comparing one sample to a hypothesized population or comparing two samples to each other, but before we looked at them with means, not with proportions.

Now we are looking at it with proportions, and since we are looking at proportions, we should have hypotheses about proportions. The null hypothesis says something like this: the proportion that falls into each category is the same for each population, for however many categories you have; say we have three categories. If we believe the populations are the same, they should have roughly the same proportion in each category. It does not actually matter what the proportions are; they could be 90/10, they could be 75/25, whatever. The point is that the proportions are similar for each population: whatever share a category has in one population, that category will have the same share in the other population. The alternative hypothesis says that for at least one category, the populations do not have the same proportion. So just like before, we are now talking about differences, and the differences are really in the proportions across the populations.
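In symbols (a sketch of the hypotheses just described, with p_{1j} and p_{2j} the proportions for category j in populations 1 and 2):

    H_0: p_{1j} = p_{2j}    for every category j
    H_a: p_{1j} \neq p_{2j}  for at least one category j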

So, just to give you an example, here is a problem; let us try to change it into a null hypothesis as well as an alternative hypothesis. According to a poll, 406 Democrats said they were very satisfied with candidate A while 510 were unsatisfied; however, 910 Republicans were satisfied with candidate A while 60 were not. With a chi-square test of homogeneity, we can see whether the proportions of Democrats who were satisfied versus unsatisfied are similar to the proportions of Republicans who were satisfied versus unsatisfied. So let us draw this out first. Here we have about 400 Democrats saying they are satisfied and about 500 saying unsatisfied; let us put satisfied in blue, so that is a little less than half, and the unsatisfied people are a little more than half. That is what the Democratic population looks like. The Republican population looks very different: here we see most of the Republicans being pretty satisfied, and only a very small minority being unsatisfied. And so the question is: are these two similar? Are the proportions that fall into each category, satisfied or unsatisfied, the same for each population, or are they different?

The null hypothesis would say something like this: the proportions of satisfied and unsatisfied people are the same for Democrats as well as Republicans. The alternative hypothesis says: for at least one category, either satisfied or unsatisfied, Democrats and Republicans do not have the same proportion. Note that in the case of 2 categories, once the proportion of one category changes, the other one automatically changes. So if we were somehow able to change how satisfied the Democrats were with candidate A, we would also see the proportion of unsatisfied people automatically change. That is the case with two categories, but with multiple categories, maybe two might change while the others do not, so the "at least one" phrasing is the more general way of stating the alternative hypothesis.

Now let us talk about the chi-square statistic. The nice thing about the chi-square statistic is that it is the same as in the goodness of fit test. We use the same idea: chi-square is going to be the difference between observed frequencies and expected frequencies, squared, over the expected frequency. But there is one subtle difference: before, we summed over each category. Now we have different categories in different populations, so we not only have category 1, category 2, category 3, and so on, but also population 1 and population 2, at least. So we have multiple sets of observed frequencies; what do we do? What we do is consider each combination of which population you are in and which category you are talking about; each of these combinations is called a cell. We do the computation for each cell, so i goes from 1 up to the number of cells. And how do we get the number of cells? The number of cells is how many populations you have (usually shown in columns) times how many categories (usually shown in rows); you can also think of the number of cells as columns times rows. But really the idea comes from how many different populations you are comparing (a chi-square test of homogeneity can actually compare three or four populations, not just 2) and how many categories you are comparing.

In order to use the chi-square formula, it is often helpful to set up your data in a particular way: these formulas refer to rows and columns, so you really need to have the right data in the rows and the right data in the columns for any of the formulas to be used correctly. So how do you set up your data this way? Whatever your sample 1 is, you put all of the information for sample 1 into one column. Here I put a generic sample 1; it could be college freshmen, or Democrats, or mice given a certain drug, whatever sample 1 is. One cell holds the cases in sample 1 that fell into category 1, and another holds the cases in sample 1 that fell into category 2; these are called cells. When you add those frequencies, you should get the total number of cases in sample 1, so in that way all the information from sample 1 is in one column. Same thing with sample 2: all the information from sample 2 should be in a column, the entire sample broken up into those that fell into category 1 versus category 2, with the total giving you the total number of cases in sample 2. If you had samples three and four, they would follow the same pattern, all the information in one column. On the flip side, when you look at a row, you can count up how many cases were in category 1. Counting across this way does not give you a sample; it is just how many cases in the entire data set are in category 1, and the next row across is how many cases in the entire data set fall into category 2. Finally, if you look at the total of totals, what you get is the entire data set all added up.

So let us try that here with the Democrats and Republicans example. I am going to put Democrats and Republicans up here as columns, and satisfied and unsatisfied as rows, and all I need to do is make sure I find the correct information and put it into the correct cells: 406 Democrats are satisfied and 510 are not, while 910 Republicans are satisfied and 60 are not. When I add up the Democrat column, I get how many Democrats total are in the sample: 916. For Republicans it is 970, so we have slightly more people in the Republican sample than in the Democrat sample, and that is fine. If I add across the rows, the row totals give the number of satisfied people, regardless of whether they are Democrats or Republicans, which is 1316, and the unsatisfied row total is 570. And if I add the two column totals, it should equal the two row totals added together; it is just adding the same four numbers in a different order, and that is 1886. So we have 1886 in our total data set across both samples, and we know how many people were satisfied and unsatisfied, how many Democrats and how many Republicans we have, and all the different combinations: Democrats satisfied, Democrats unsatisfied, Republicans satisfied, Republicans unsatisfied.

So this is a great way to set up your data, and it can really help you figure out the expected frequencies, which are a little more complicated to figure out in tests of homogeneity. Not too much more complicated, just a little. Here is how to figure out an expected frequency once you have the data set up this way, Democrats and Republicans, satisfied and unsatisfied. E is going to equal, basically, the general rate for one particular category, times the size of the sample you are interested in. Say I just want to know how many people tend to be satisfied; I do not care whether they are a Democrat or a Republican, just in general, who is satisfied. That would be the row total over the grand total. This gives me the general rate, the proportion, of who tends to be satisfied: maybe 70% tend to be satisfied, or 20%, or 95%. Whatever that general rate is, I multiply it by the total number in the sample I am interested in; maybe I am interested in the Democratic sample, so I use that column total.

So that is the general formula; let me show it in a more specific way by finding the expected value of Democrats who are satisfied. That would be the satisfied total over the grand total (this gives us the rate of being satisfied in general, what proportion of the entire data set is satisfied), and then I multiply that by however many Democrats I have, the Democrat total. I could write it that way, but it is just a specific case of the general formula: the Democrat total is a column total, the satisfied total is a row total, and the grand total is the total number in our data set, Democrats and Republicans together.
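In Excel this is the row-total-over-grand-total formula with $ locks; as an equivalent sketch in Python (assuming numpy), using the table we just built:

    import numpy as np

    # rows = satisfied / unsatisfied, columns = Democrats / Republicans
    observed = np.array([[406, 910],
                         [510,  60]])

    row_totals = observed.sum(axis=1)   # [1316, 570]
    col_totals = observed.sum(axis=0)   # [916, 970]
    grand_total = observed.sum()        # 1886

    # expected cell = (row total / grand total) * column total
    expected = np.outer(row_totals, col_totals) / grand_total
    print(expected.round(2))  # [[639.16 676.84], [276.84 293.16]]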

So once you have the expected values and the observed frequencies, you can easily find the chi-square. And once you have your chi-square, how do you compare it to the chi-square distribution? Well, the nice thing is that the chi-square distribution looks the same as in the goodness of fit test: it has a wall at zero, it cannot be lower than zero, and it has a long positive tail; and when you decide what your alpha is, it is always one-tailed in a chi-square distribution. But how do we find the degrees of freedom now that we have rows and columns? The degrees of freedom is the degrees of freedom for categories times the degrees of freedom for however many populations (or samples representing your populations) you have. That is the number of rows minus 1, because each category is in a row, times the number of columns minus 1. That is how you find the degrees of freedom when you have more than one population that you are comparing.
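In symbols, with r rows (categories) and c columns (populations):

    df = (r - 1)(c - 1)

For the 2-by-2 Democrats/Republicans table, that is (2 - 1)(2 - 1) = 1.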

So what are the conditions for the test of homogeneity? These conditions are very similar to the conditions for goodness of fit testing. First, each outcome of each population falls into exactly one of a fixed number of categories. The categories are mutually exclusive, just like before: you have to be in one or the other, you cannot be in 2 categories at the same time, and you cannot opt out of being in a category. Also, the category choices must be the same for all populations: if one population has three choices, the same three choices must be the case for population 2. The second condition is that you must have independent, random samples. Before, in tests of goodness of fit, we only required that the sample be random, because we only had one sample; now we have multiple samples, and they must also be independent of each other; they cannot come from the same pool. The third condition: the expected frequency in each cell is five or greater. That is the same condition we had for goodness of fit testing, and it is because you want a big enough sample as well as big enough proportions. And number four is not really a condition; it is just so that you know how free you are with chi-square testing: you can have more than two categories and more than two populations, say 4 categories and six populations, in any combination; you are not restricted to 2 categories and 2 populations.

So now let us go on to some examples. Example 1 is just the example we have been using to talk about how to set up your data and find expected values. I set it up in an Excel file exactly the same way we set it up previously, and I found the row totals as well as the column totals, so now I can start my hypothesis testing.

First things first, step one: our null hypothesis says that the proportions of satisfied and unsatisfied voters should be the same for Democrats as for Republicans. The alternative hypothesis is that at least one of those proportions will be different between Democrats and Republicans. Step two: set our alpha to be .05, and we know the test is one-tailed because we are doing chi-square hypothesis testing. Step three: you might want to draw a chi-square distribution for yourself, or just in your head, color in that alpha region, and think: I want to find my critical chi-square. In order to find the critical chi-square, I need the degrees of freedom, and my degrees of freedom is made up of the degrees of freedom for categories times the degrees of freedom for populations. There are two populations, so that is 2 - 1 (you can also see that as the 2 columns minus 1), and there are two categories, satisfied and unsatisfied, so that is 2 - 1 as well, corresponding perfectly to the number of rows minus 1. The degrees of freedom is this times this, degrees of freedom for categories times degrees of freedom for populations, and that is just 1. So what is our critical chi-square? That is found with CHIINV: we put in our probability as well as our degrees of freedom, and we find 3.84 as our critical chi-square.

So we are looking for sample chi-squares that are larger than 3.84. Step four looks something like this: in order to find the sample chi-square, what we need to do first is find our expected values. Here we have the observed frequencies, and we need to find the expected frequencies. I am going to copy and paste the table down here, so we do not have to keep scrolling, and draw out the table for observed frequencies and create the same table for expected frequencies.

For each expected frequency, I need to find the general rate and then multiply it by however many people we have in that sample. The general rate of being satisfied is 1316 divided by 1886, and that is about 70%. Take that and multiply it by the total number of Democrats. Now, I want to keep the rate part the same as I copy the formula around, so I am going to put a $ in front to lock down that column, and here I put $ in front of both the letter and the number to lock down that actual cell. Because here is what I am going to do: I copy and paste that over here, and what I get is the same rate of being satisfied, but now multiplied by the number of total Republicans. Then I take that cell, copy and paste it down here, change the satisfied total to the unsatisfied total, and now I have the rate of being unsatisfied multiplied by the total number of Democrats, and over here the rate of being unsatisfied multiplied by the total number of Republicans. So these are my expected frequencies. Notice that the totals still add up to be the same; usually they should, though there might be some slight discrepancies just because of rounding error, so they should still be pretty close.

So now we have observed frequencies as well as expected frequencies, and we need to figure out the chi-square. My chi-square is made up of the observed frequency minus the expected frequency, squared, divided by the expected frequency. I need to find that for Democrat and Republican, satisfied and unsatisfied, and then add up all of those cells; I will put the grand total over here. So: the observed frequency minus the expected frequency, squared, divided by the expected frequency. I can just copy and paste that here, because Excel will move everything down, and take it over here, because Excel will move everything to the right. The grand total for all four of these is 547.18, so my sample chi-square is quite large.

So do I reject my null hypothesis? Indeed I do, and we can find the p value: here I use CHIDIST to find the probability, with degrees of freedom 1, and that is a very, very small p value. Those are two pretty radically different populations we have there. And that is step five: we reject the null.
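The whole homogeneity test on this table can be checked in one call in Python (assuming scipy; chi2_contingency is its standard routine for this, with the Yates correction turned off to match the hand computation):

    from scipy.stats import chi2_contingency

    observed = [[406, 910],   # satisfied:   Democrats, Republicans
                [510,  60]]   # unsatisfied: Democrats, Republicans

    chi_sq, p, df, expected = chi2_contingency(observed, correction=False)
    print(round(chi_sq, 2), df)  # 547.19 1 -- matches the spreadsheet's 547.18 up to rounding
    print(p < 0.05)              # True -> reject the null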

Okay, on to example 2. Consider this data on pesticide residue on domestic and imported fruits. Does this data fit the conditions of a chi-square test of homogeneity? Regardless of your answer, conduct the hypothesis test. Now be careful here: although you see columns and rows, these are not the columns and rows you should be using. The columns are actually okay, domestic and imported; we can consider those two to be the different populations we are interested in. But the rows do not show the different categories; they show sample size, percentage showing no residue, and percentage showing residue in violation. So what we should do is transform this data into the correct setup. You could pull up a brand-new Excel file, or just use the bottom portion here, and here is what we want: the two populations up here, and the different categories here. The categories are probably going to be "showing no residue" and "showing residue in violation," but one thing I noticed is that these percentages do not add up to 100, so there must be some other category we are missing: no residue, residue in violation of the law (so I guess that is really bad), and then presumably a third one, fruit with residue but not in violation. You sort of have to figure that out from the data they have given you. But they do give you the sample sizes, 344 as well as 1136, so those are the totals.

The question is: what are our observed values? To find the observed values, all we have to do is multiply by the proportions: 44.2% times the total, and I lock down that total so I can copy the formula. For residue in violation, I have to be careful with the percentage: it is .9%, which is .009. And what is left over? The leftover percentage is 1 - (.442 + .009), everybody else, and I guess that is the number of fruits that are not in violation but still have some pesticide residue on them; that times the total. When I add them all up I can check: it comes to 344, so I have done my proportions correctly. Now, right away we can see that we are actually not meeting the conditions for the chi-square. Look at this cell right here: it only has three fruits in it, 3.1 even if we round generously. Remember, expected frequencies have to be at least 5, and here even the observed value is very small.

Okay, but it says to go ahead and do the hypothesis testing anyway; you should not do this in real life, but for the purpose of this exercise, let us do it. Now let us find the number of imported fruits observed to have no residue on them: that is 70.4% times this total, and that is almost 800 fruits. We also have those with residue in violation: .036, that is 3.6%, times 1136, about 41 fruits. And then I need the leftover percentage, 1 - (70.4% + 3.6%), times the total, and that is 295. First, notice that it seems like there are way more imported fruits than domestic fruits in every category, but that is because the totals are different. It does not necessarily mean that imported fruits have that much more residue on them; it is just hard to compare, because they have totally different totals. So it is helpful to find the row totals as well, because they can help us find the expected frequencies; adding across the rows, we have a total of 1480 fruits, domestic and imported altogether.

Once we have that, it is easy to find the expected frequencies, and we can set them up in a very similar way. So what is our expected frequency? Well, the expected frequency for "no residue" starts from the general rate: the proportion of no-residue fruits over all the fruits. That is the row total divided by the grand total; we want to lock those two values down, because that is always going to be the rate for no residue, and then we multiply by the actual number of domestic fruits. We get 221. Here we do the same thing: I just copy and paste across, and Excel figures out what to do, so this is the rate of no residue over total fruits times the total number of imported fruits. Then we find the rate of fruits that have residue but are not in violation, which is that row total over the grand total; I lock down those values and multiply by the total number of domestic fruits, and if I copy that over, that gives me the expected number of imported fruits in that category, given this proportion. And finally, the proportion of fruits with residue in violation (a lot of pesticide residue): that row total divided by the grand total, times each column total. Here we can check: if we sum the three expected frequencies for domestic, we should get something close to 344, and indeed we do; and for imported we should get 1136, and indeed we do. Great.

So once we have our table of observed frequencies as well as expected frequencies, we can calculate, 2522

for each cell, the observed frequency minus the expected frequency, squared, as a proportion of the expected frequency. 2530

So (O - E) squared as a proportion of expected frequency. I will copy these cell labels, so: observed frequency 2540

minus expected frequency, squared, divided by expected frequency, and just copy and paste all that. Let us check one of these. 2558

This one says observed frequency minus expected frequency, squared, over expected frequency. 2573

And when we add all of these up we get 102, but we have forgotten something: we forgot the decision stage, 2581

so let us go ahead and do step three. 2599

So the decision stage needs our critical chi-square, and the critical chi-square is found with degrees of freedom 2601

equal to the degrees of freedom of the categories times the degrees of freedom of the populations multiplied together; that gives the degrees of freedom for the chi-square. 2610

So categories - 1 is 2, populations - 1 is 1, so the degrees of freedom is just 2, and our critical chi-square comes from CHIINV. 2628

We put in .05 as our desired probability and 2 as our degrees of freedom, and we get 5.99. 2646

We see that our chi-square is much larger than that, so we would reject our null. 2653
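For what it is worth, the whole test can be replicated in a few lines of Python with scipy; this is a sketch using the same partly hypothetical table as above, not the exact spreadsheet values.

import numpy as np
from scipy import stats

observed = np.array([[242.0, 800.0],   # hypothetical table, as before
                     [ 99.0, 295.0],
                     [  3.0,  41.0]])

chi2_stat, p_value, df, expected = stats.chi2_contingency(observed)

# Decision stage: critical chi-square at alpha = .05 with
# df = (categories - 1) * (populations - 1) = 2 * 1 = 2.
critical = stats.chi2.ppf(1 - 0.05, df)   # about 5.99, matching CHIINV(.05, 2)
print(chi2_stat, critical, chi2_stat > critical)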

Hi, welcome to educator.com. 0000

Today we are going to overview all the statistical tests we covered so far. 0002

So this is the last lesson in this series. 0008

We are first going to list all the statistical tests that we covered. 0011

In particular we are going to cover the hypothesis tests. 0016

We are going to organize them into a chart so that you can tell which test was performed by looking at a set of results. 0020

So here is a giant list of the hypothesis tests that we have covered so far. 0031

There are the one sample z-test, the one sample t-test, the independent samples t-test, the paired samples t-test, the one-way ANOVA 0036

(also called the independent samples ANOVA), the repeated measures ANOVA, the chi-square goodness of fit test, and the chi-square test of homogeneity. 0044

In more advanced statistics courses, you may also cover hypothesis testing with regression. 0055

It does exist; however, we have not covered it in this set of lessons. 0062

So the question is: how do we know which of these tests we should perform when we see a set of data, 0069

or how do we look at a set of results and figure out which test was done in order to come up with that result? 0076

It actually helps to organize all of these different tests in this table right here, and there are a couple of dimensions. 0084

One dimension is how many samples you have: one sample tests, two sample tests, and more-than-two sample tests. 0093

Now these hypothesis tests are all similar in that they all require at least one sample, and because of that 0102

they might all be described as having a categorical independent variable; that is what they have in common. 0113

But they have different levels of the independent variable. 0121

So this one only has one level, that one has two levels, and this one has more than two levels. 0125

But we also need to know what the measurement is: what is the dependent variable that they are interested in? 0132

There might be categorical dependent variables, such as: are they satisfied or unsatisfied? 0139

Did they pick red, blue, or green? Or there might be continuous dependent variables. 0145

How much did they improve on a test? How fast were they going? How many inches did they grow? 0153

DVs like those have a numerical value where we can find the mean as well as the variance and standard deviation. 0163

When we have categorical DVs, such as yes and no, or red, blue, and green, we cannot find the mean of those kinds of values. 0172

So let us start organizing our tests. 0183

When we think about one sample tests, there are a couple of one sample tests we have talked about already. 0186

Some of them literally have the words one sample in their title, such as the one sample z-test and the one sample t-test. 0191

The one sample z-test and one sample t-test obviously use the mean as well as the standard error, which is 0199

calculated from the standard deviation of the sample, so they fall into the continuous dependent variable box right here. 0207

So there is the one sample z as well as the one sample t. 0218

How do you know when to perform the one sample z-test versus the one sample t-test? Well, it depends on whether you know sigma. 0227

So if sigma, the actual population standard deviation, is known, then you go ahead and use the one sample z-test. 0238

If sigma is unknown, a.k.a. you have to use s instead, then use the one sample t-test, and that is because the 0247

t-distribution is more variable, and it becomes much more like the normal distribution as n, your sample size, becomes greater and greater. 0267
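A quick way to see that convergence, sketched in Python using scipy's distribution functions (the sample sizes here are arbitrary):

from scipy import stats

# Two-tailed critical values at alpha = .05.
z_crit = stats.norm.ppf(0.975)             # about 1.96, regardless of n
for n in (5, 15, 30, 100):
    t_crit = stats.t.ppf(0.975, df=n - 1)  # shrinks toward 1.96 as n grows
    print(n, round(t_crit, 3), round(z_crit, 3))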

How about the categorical DV: which one sample test could we put in here? Well, the categorical 0278

DV tests that we have looked at are all called chi-square tests. 0288

So there is a chi-square test, which might be written as chi-square or chi-squared; there is a chi-square test 0292

that only uses one sample and compares it to a population, where we take that one sample, look at 0305

the sample's proportions, and see if they match the population's proportions. 0313

That test is called the goodness of fit test, because it is looking at how well the sample fits with the population: goodness of fit. 0319

So, we have already ticked off three tests. 0330
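If it helps to see the goodness of fit test concretely, here is a small Python sketch; the observed counts and the uniform population proportions are hypothetical.

from scipy import stats

# One sample of 100 cases sorted into four categories (hypothetical counts),
# tested against a population where each category should be 25%.
observed = [31, 24, 20, 25]
expected = [25, 25, 25, 25]

chi2_stat, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(chi2_stat, p_value)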

Now let us talk about two sample tests. When there are two samples, we often want to look at whether 0338

those samples are similar, in that the mu of one minus the mu of the other equals zero, or we want to look at 0349

whether they are different, in that the means of these populations do not equal each other. 0358

Those tests are called t-tests. 0365

Right, so the two sample t-tests: obviously t-tests require calculating a t, which requires the mean, standard error, and standard deviation, so the t-tests belong in here. 0370

So the two sample t-tests we learned about were the independent samples t-test as well as the paired samples t-test. 0384

These are both t-tests that take into account two samples, and they have a continuous dependent variable. 0402

How do we know which one to use? Well, you have to check whether the samples are actually independent. 0414

If the samples are independent, use the independent samples t-test; sort of a no-brainer. 0421

If the samples are linked in some way, then use the paired samples t-test: so with independent samples use the 0426

independent samples t-test; with linked samples use the paired samples t-test. 0435

Linked, or dependent, samples. 0442
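Here is a minimal Python sketch of that choice; the six scores are hypothetical, imagined as the same cases measured twice.

from scipy import stats

before = [12, 15, 11, 14, 13, 16]   # hypothetical scores, first measurement
after  = [14, 17, 12, 15, 15, 18]   # same six cases, second measurement

# Independent samples t-test: treats the two lists as unrelated groups.
print(stats.ttest_ind(before, after))

# Paired samples t-test: works on the case-by-case differences,
# which is the right choice here because the samples are linked.
print(stats.ttest_rel(before, after))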

Now what about when you have a categorical DV and you have more than one sample? You can no longer 0446

use the chi-square goodness of fit test; instead you have to use the chi-square test of homogeneity. 0453

This tests whether two populations are similar to each other in terms of their proportions or not, just like the t-tests 0461

look at whether two samples are similar to each other in terms of their means or not, and in that way these tests all have something in common. 0480

What is different between them is that the chi-square tests use a categorical DV and the t-tests use a continuous DV. 0491

So what about if we have more than two samples? 0501

Well, actually, if we have more than two samples and we have a categorical DV, we can continue to use the chi- 0504

square test of homogeneity, because we can use it for two samples, three samples, however many 0510

samples you like, as long as it is not one. So we could just say chi-square test of homogeneity, and life is simple. 0517

However, if you have a continuous DV, now you cannot use a t-test anymore, because t-tests only compare 0530

two distributions; now we need to compare multiple distributions. How do we do that? 0538

We use the F-test, also called ANOVA, analysis of variance. 0544

So there are two kinds of analysis of variance tests that you learned. 0549

One was the independent samples ANOVA and the other was the repeated measures ANOVA. 0553

How do you know which one to use? Well, it is just like the separation right here: with independent samples 0568

use the independent samples ANOVA; with linked samples, or dependent samples, use the repeated measures ANOVA. 0581
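As a sketch, a one-way (independent samples) ANOVA takes one line in Python once you have the groups; these scores are hypothetical.

from scipy import stats

# Hypothetical scores for three independent groups.
group1 = [23, 25, 21, 22]
group2 = [27, 30, 28, 26]
group3 = [20, 19, 22, 21]

# F is the ratio of between-group variance to within-group variance.
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f_stat, p_value)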

So that is how we know which test to do: we can look at a set of data, look at whether it has a 0589

continuous DV or not, look at whether it has one sample, two samples, or more than two samples, and we can 0597

follow this chart to figure out which test should be performed, and which tests we can perform. 0603
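The chart itself can be written down as a little decision function; this is just a sketch of the logic in this lesson, with hypothetical parameter names.

def choose_test(n_samples, dv_is_continuous, sigma_known=False, samples_linked=False):
    """Hypothetical helper mirroring the chart in this lesson."""
    if not dv_is_continuous:
        # Categorical DV: the chi-square family.
        return ("chi-square goodness of fit" if n_samples == 1
                else "chi-square test of homogeneity")
    if n_samples == 1:
        return "one sample z-test" if sigma_known else "one sample t-test"
    if n_samples == 2:
        return ("paired samples t-test" if samples_linked
                else "independent samples t-test")
    return ("repeated measures ANOVA" if samples_linked
            else "independent samples (one-way) ANOVA")

print(choose_test(2, True))    # independent samples t-test
print(choose_test(3, False))   # chi-square test of homogeneity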

So now let us practice. 0609

The following data are from OkCupid, an Internet dating website that does a lot of cool things with the data. 0613

So you could check out the blog at blog.okcupid.com and many of these figures are adapted from that website. 0621

The following data may be offensive to some of you, because some of the data mention sex and some of the data mention cleavage. 0630

Example 1: here is a statistical conclusion, and we need to figure out which statistical test we should do. 0637

The statistical conclusion is this. 0647

The weird MySpace-angle profile photo, the one that looks like this, results in more messages than other 0651

photo contexts. So here are the different photo contexts, things like the MySpace shot, in bed, outdoors, travel, 0659

with friends, and the dependent variable is new contacts monthly. 0666

How many new contacts do they have per month? So these are my two variables: photo context as well as number of contacts monthly. 0672

My number of contacts is my dependent variable, and my photo context happens to be my multiple groups, my different samples, right? 0688

So I have a sample of people who have this as their profile shot, a sample with this as their profile shot, a sample with that as their profile shot. 0704

So these are my samples here, and I have eight samples with a continuous DV, so which statistical test should be performed? 0711

Well, it should be an independent samples ANOVA, because we have more than two groups and our DV is continuous. 0722

So we can analyze the variance between the groups as a ratio over the variance within the groups. 0734

So example 2 gives this statistical conclusion: straight and bisexual men are more likely to believe they are geniuses than gay men. 0747

What are the variables, and which statistical test should be performed? 0760

So they are comparing three different groups of men: bisexual men, gay men, and straight men, so those sound 0764

like samples already, and what they are asking them is just yes or no. 0773

Do you think you are a genius? Are you a genius, yes or no? That is a categorical variable, so we have a 0778

categorical dependent variable; what statistical test should be performed? 0785

Well, three groups and a categorical dependent variable: this seems to call for the chi-square test of homogeneity. 0792

We want to know whether these three different samples have similar proportions or different proportions. 0802

Example 3: the statistical conclusion says this. 0813

Both male and female iPhone users are more promiscuous than blackberry and android users. 0823

So what are the variables and which statistical tests should be performed? 0829

This is actually a little bit of a trick question. 0834

You can answer to the best of your ability, but I will show you how to go one step beyond what we actually know, okay? 0837

So one thing we could do is just compare these three groups of cell phone users; that 0844

seems like three samples to me, and they are independent. 0852

Usually people do not have more than one cell phone, and this looks like the average number of sexual 0855

partners at age 30. So this is a bar graph right here, not a histogram (which would show a frequency 0862

distribution), and this seems like a continuous dependent variable. 0868

After all, in order to compute an average you have to have a continuous variable, so we have a continuous 0875

DV with three groups of cell phone users. 0882

The one answer we could come up with is to say perhaps the one-way ANOVA, also called the 0885

independent samples ANOVA, and that would be a good answer given what we have learned so far. 0895

Hopefully you will have learned enough about statistics that you can take multivariate statistics, which is sort of the next level. 0909

At the next level you will learn about what to do when you have more than one independent variable. 0915

Here we have the independent variable of cell phone as well as the independent variable of gender, and when you cross them together we get six groups. 0923

Android users who are male, Android users who are female, BlackBerry users who are male, BlackBerry users who are female, iPhone male, iPhone female. 0934

With six different groups, later on when you look at the factorial ANOVA, it is actually almost like doing two ANOVAs at the same time. 0946

And so this would technically be a factorial ANOVA, but if you answered one-way ANOVA you are pretty close. 0958

So example 4: older women's cleavage pictures are associated with greater improvement in monthly contacts than younger women's. 0966

Okay, so one way we can look at this is by age: we can take the difference here 0978

as the dependent variable, and that is definitely continuous, and we can take the difference there as well, and compare those two differences. 0987

Say at age 18 and age 32: so we look at these two groups of women, the 18-year-old women and the 32-year-old women. 0997

We look at those two groups of women and look at the DV of improvement, how much improvement; what kind of test would we do? 1008

Well, it seems as though we should do a t-test of some sort, because this is a continuous variable, we 1021

have two groups, and the groups seem independent. 1030

You cannot be 18 and 32 at the same time, and I do not think they followed the 18-year-olds until they 1033

became 32, so I do not think the samples are linked; it seems like an independent samples t-test. 1041

But there are other ways you can look at this: you could look at this as regression or correlation. You could look 1046

at the regression line for women showing cleavage in light blue and women not showing 1061

cleavage in dark blue, and compare those two regression lines; that is another way you could go on this. 1072

So that is the end for statistics on educator.com; thank you so much for watching. 1083

Welcome to www.educator.com.0000

Today we are going to talk about samples and about cases, variables, and measurement within samples.0002

We need to talk about samples because statistics is all about data and data is made up of cases, right?0012

Each individual that is part of that data set is called a case, and cases are actually made up of variables. 0019

You could think of variables as different characteristics within a case and a variable can take on different values.0028

Just to give you an example, here is a simple data set: we have three cases, three shapes, and they have different variables. 0037

You can think of these as dimensions. 0048

Dimensions of shape, color, area, right?0051

These variables right up here, these can actually have different values.0056

For instance, triangle is the value for this case, for this variable of shape.0068

For this case, square is the value for the variable of shape and circle is the value for the variable shape for this case.0076

A variable can take on different values and because of that it is called a variable because it could vary.0091

It does not have to vary; for instance, take a look at color right here. 0099

This is a variable where all of the values are the same: teal. 0103

So although the values in a variable can vary, they do not have to. 0110

We could put a red case in there, and it would be okay. 0119

One thing to note is that across data sets, oftentimes you will see cases listed in rows. 0127

Often each row is a case. 0134

Also, often each column is a variable, and you will learn about different kinds of variables as we go on. 0136

When you look at columns you see variables.0144

When you look at entire rows you see cases.0147

Not only that but when you look at a cell, a cell is a combination of a particular row and a particular column.0151

When you look at a cell, that cell often contains a value.0159

So next to cases, variables, and values, here is a small note about where they usually sit: cases are usually in rows. 0164

Variables are usually in columns, and values are usually in cells. 0178

It does not always have to be that way, but by convention many data sets are organized like this. 0186
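To make that concrete, here is a tiny Python sketch of the shapes data set from a moment ago, using pandas with rows as cases and columns as variables; the area numbers are made up for illustration.

import pandas as pd

# Rows are cases, columns are variables, and each cell holds a value.
data = pd.DataFrame(
    {"shape": ["triangle", "square", "circle"],
     "color": ["teal", "teal", "teal"],  # a variable whose values happen not to vary
     "area":  [4.5, 9.0, 7.1]},          # hypothetical areas
    index=["case1", "case2", "case3"])

# One particular row and one particular column pick out one cell, one value.
print(data.loc["case2", "shape"])   # 'square'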

We can look here.0194

Here the cases seem to be made up of individuals.0195

Here the individuals are taken from www.facebook.com.0200

The variables are things like gender, friends, siblings, and number of tagged photos.0204

Tagged photos by itself is a variable; it can vary. 0213

There are lots of different values that it could hold.0217

For instance 24, 42, and 21.0220

These are different values that could be sort of sitting in the place of the variable tagged photo.0224

Just to give you one more example, here is an example of aircrafts.0236

These cases are aircrafts and on each row there is information for this particular aircraft on that row.0241

The different variables here are number of seats, the cargo that it can carry in tons, and, let us say, average flying speed. 0250

Here we could see that the B747 has 410 seats as the value for the variable number of seats.0262

Once again it is organized, rows being cases, columns being variable and cells being values.0272

I want to introduce one other idea.0285

Remember I said that variables can have different values, they do not have to differ but they can.0288

There are some characteristics that will not vary though because of a particular design of the study.0296

For instance, maybe a study would like to look at a pregnant women 0301

and how much prenatal exercise they do and whether that predicts the health of their baby.0306

Because of the design of this study, the variable gender is actually not going to be a variable, 0316

because everyone doing prenatal exercise in the study is a pregnant woman, so everyone is female. 0324

Instead, this is going to be called a constant, because the values are all the same by design. 0329

The question is great we know how to organize the data once we get it but how do we actually get that data?0340

The process of getting new data is called research, and often research is taught with five scientific steps: asking a question, 0348

coming up with a hypothesis, coming up with a design, research analysis, and coming up with a conclusion. 0359

That sort of addresses that question.0366

In order to reframe the 5 steps of science so that it relates more to statistics 0369

I’m going to talk about these things in terms of cases because that is what is involved in statistics.0379

Research will be about how to get the sample.0388

Already we are putting it in statistics terms: how to get the sample. 0400

The research question is often a proposed relationship among variables.0404

A hypothesis often goes with that; it says yes, I do think this is the relationship, or, I think there is some other relationship. 0419

These often go together.0430

The research design is the procedure that we use for actually collecting the data.0432

Measurement is actually the process for gathering quantitative information that represents some variable or variables.0451

Let us say quantitative values, just to use the same words. 0470

Values that represent our variables. 0475

Here we are talking about how to actually get the sample.0493

We are looking at proposed relationships among variables within those cases.0496

Research design is all about the procedure for collecting that data.0502

Measurement is about gathering quantitative values that represents some variables.0507

Research analysis is what we often think of when we think of statistical analysis so I put statistics right here.0514

Here in statistics, the statistical analysis is going to have its own statistical question and hypothesis. 0522

It is also going to have statistical procedures.0533

You are going to be able to come up with statistical conclusion.0540

Often this little mini set is often called hypothesis testing.0547

We will get to that when we talk about inferential statistics towards the middle and latter end of the course.0563

Finally the research conclusion is going to be different than the statistical conclusion.0572

Here in the research conclusion we step out again and go back to how this analysis relates to this overall research question.0577

This is the general conclusion.0588

This general conclusion is created from the statistical conclusion as well as in considering all that came ahead of you.0594

What kinds of variables are there? If our research question and our hypotheses are all going to be made up of variables, 0607

we had better try to figure out what kinds of variables there could be. 0614

There are a couple of different variables that you need to know.0619

We already covered this one; it is not quite a variable, it sits right outside the border of variables, but it is related. 0622

A constant is a characteristic that cannot vary in the data set. 0630

For whatever reason it cannot vary, but other than that, there are two kinds of variables you need to know. 0633

One is discrete variables and when we talk about discreteness, we are talking about things that have very particular values.0639

When you think about a number line there are only certain places that can contribute a value to a discrete variable.0650

These are the only values sort of allowed in a discrete variable.0665

An example might be something like number of siblings; you may wish you had only one and a half siblings, but that is actually not possible. 0670

Number of siblings is what we think of as a discrete variable. 0680

You either have 1 or 2 siblings; you do not have 1.65 or 1.82 siblings. 0686

Also another example might be number of gold medals won in the Olympics.0695

Often people do not win just half a medal or 1/8 of a medal, or 5 2/6 of the medal.0706

Instead they win whole medals.0715

There are only particular places on the number line that can contribute values to these variables. 0717

These are examples of discrete variables.0725

Continuous variables are exactly the opposite.0728

We might have whole numbers like 1, 2, 3, 4, but when you have a continuous variable 0734

you could have this be the value, or this, or one right next to it, or one over here. 0740

Any of these values can contribute to the variable.0748

One way you might want to think of this is that there are no gaps on the scale.0753

Any value can contribute, can be part of this variable.0763

In discrete variables, only certain values can take part. 0769

Examples of continuous variables are things like length, weight, these are values that can have any number.0777

It does not have to be 100 or 101, it could be 100.1 or 100.001, or 100.0001.0794

Even just between 0 and 1, there is an infinite number of values that 0810

could contribute to a continuous variable such as length or weight. 0816

Other possibilities are more abstract, things like anxiety level or knowledge of history.0822

Somebody could be right about here in terms of anxiety level, but someone else could be very close, just slightly less anxious than them. 0833

These are what we think of as continuous variables, because any value is actually possible. 0847

Here is the thing: we cannot actually quite get at variables in the world. 0858

We cannot get at the true value; instead we have to measure it, and measurements are almost all discrete. 0864

When you actually measure something you often round; for instance, when we measure height we do not measure it to the .0001 inch or centimeter, 0873

instead we often round it to the nearest whole unit. 0885

Often people do not say, I am 5'6" and .375 of an inch. 0891

People do not say that, and because of that, most of the measurement scales we actually use to get values of variables 0901

end up turning all variables into discrete variables. 0912

But underlying the variable, it does not have to be discrete just because we measure it in that way.0918

When a variable is measured you will end up with a particular set of numerical values.0925

That is often what we think of as our sample distribution, our scatter of numbers.0930

It often helps to ask ourselves what kind of scale is it on.0937

It is all going to be discrete, but there are different levels of informativeness that measurement scales can give us. 0943

Let me give you some examples.0953

One reason that it might be helpful to think about what kind of measurement scale a piece of data is on is because it helps us compare pieces of data.0958

For instance, could we look at number of friends and compare that to ranking in class? 0968

Those numbers actually stand for very different ideas and that is what we mean by measurement scale.0975

What does the number mean?0983

What kind of information does it give us?0985

When we think of something like gender, here we are using the numbers 1 and 2, 0988

but are we saying that somehow, if you add two males together, you get a female? 0994

Is that what we really think? Not really.1001

These numbers are just stand-ins for other ideas. 1004

When we are talking about number of friends, if we had somebody who has 48 friends, 1009

we do mean they have approximately 1/4 of the friends that the second person has.1015

Can we compare ranking in class? 1021

Is this person somehow two better than that person? How do we compare? 1025

It often helps to know what kind of measurement scale we are working with.1034

There are four different kinds of measurement scales you need to know. 1039

Here they are nominal, ordinal, interval, and ratio and I have listed them in an order where they become progressively more informative.1044

There is more and more information as we sort of go down.1054

These are the types of scales you might run into. 1057

Nominal scales are often referred to as dummy codes, because nominal scales are just numbers that stand for names. 1061

They look on the surface like numbers, but they are just names, and the numbers do not actually have any meaning. 1071

There is no meaning in the number; it just stands in, like a dummy, for a name or category. 1079

Right, so nominal comes from the idea of a name. 1086

You can think of this as a qualitative scale; there is no order. 1094

Some examples might be things like color of eyes, there is no order.1109

It is not that blue has to go before brown, or green has to come after brown.1113

There is no particular order to it.1117

Other examples of nominal scales are political affiliation or type of major. 1121

These are nominal scales because it is not that there is any inherent order.1129

Even if we assign numbers to it, the numbers are just arbitrary, they do not actually mean anything.1132

Things like types of cheese, state that you come from, what language you speak, those are all examples of nominal measurements.1140

The second level of measurement we can think of, ordinal, has a little bit more information. 1152

It is no longer just a stand-in; here we now have an order. 1160

The numbers actually tell you about order but they may have uneven intervals.1166

1 and 2 are not the same distance apart as 2 and 3.1174

A good example of this is Olympic gold medal, silver medal, and bronze medal.1186

When we think of the gold medal, silver medal, and bronze medal, let us think of it in terms of the long jump. 1192

The gold medalist may have jumped this far. 1206

The silver medalist may have been very close. 1210

But the bronze medalist may have been far off. 1213

But when they actually get their medals you cannot tell how far off each one was.1217

You do not know whether the intervals are the same or different.1223

Here we preserve order.1227

Now when we know the number 1 and 2 we know that number 1 definitely comes before number 2 1229

but we do not actually know the interval distance between them.1235

Other examples of ordinal scales are things like your rank in law school, 1240

that ranking number does not actually tell you how much better someone is than someone else.1246

They might be very close but their numbers might say they are one apart.1253

Things that are ordinal are often rank ordered. 1261

Whenever you hear the word rank, that is often an ordinal scale. 1266

Things like having a master's degree, PhD, or bachelor's degree are on ordinal scales. 1272

They have order in terms of how much schooling you had to do but they do not necessarily have the same distance between them.1281

Now we get to interval scales and remember I said it is more and more informative as we go down, 1296

now we have order as well but also even intervals.1300

The distance between 1 and 2 is the same as the distance between 2 and 3.1307

When we have interval scales, you might think that is like a regular number line. 1313

There is one thing that this scale is missing: although it has order and even intervals, there is no meaningful 0. 1321

Here is what this means: when we have a meaningful 0, then when we say there is 0 of this, 1331

then there is literally none of whatever it is.1342

In an interval scale it is relative.1347

It does not matter whether you start marking at 1, or whether you start marking at 0, or whether you start marking at 125. 1350

Let me give you an example that is commonly used especially in the social sciences.1359

Often when people are asked about their opinion in self report, they are asked to rate something.1363

How happy do you feel on a scale of 1 to 5, 5 being very happy and 1 being not happy.1370

Would it have mattered if they had set the scale from 0 to 4 instead? 1379

1, 2, 3, 4, 5 versus 0, 1, 2, 3, 4.1385

You could see that someone might mark a 5 on the first scale and someone else might mark a 4 on the second scale. 1393

It is not that the second person is less happy; they are both maximally happy, right? 1398

It is just that they had a different scale that they were using.1404

These are examples of interval scales where the 0 actually does not mean 0 of happiness, 1409

it is just whatever it is relative to the scale that you are using.1416

That is what we mean by no meaningful 0. You can often test for yourself whether something is an interval scale 1425

by moving the scale a little bit and seeing if it is still okay. 1432

If it is okay then you know you have an interval scale.1439

Let us say you get something like another survey question that says how satisfied are you with your job?1440

You will rate it on a scale of 0 to 100.1447

If it was on a scale of 100 to 200, would it make any big difference?1452

Not really.1460

That is how you know that it is an interval scale.1461

Finally we get to the crème de la crème, this is the highest level and if interval is missing a meaningful 0 I bet you can guess what ratio has.1467

Here we have order, we have even intervals, and we have a meaningful 0.1478

Ratio scales are often things like height or weight, where 0 means 0: none of something, none of some unit. 1491

If you are 0 inches tall, that 0 really means zero height. 1505

That is the big difference between nominal, ordinal, interval, and ratio scales.1515
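One way to see the nominal versus ordinal difference in practice is with pandas categoricals; here is a small Python sketch.

import pandas as pd

# Nominal: categories with no inherent order, just names.
eye_color = pd.Categorical(["blue", "brown", "green"], ordered=False)

# Ordinal: categories with an order, but nothing said about the intervals.
medals = pd.Categorical(["bronze", "gold", "silver"],
                        categories=["bronze", "silver", "gold"], ordered=True)

print(medals.min(), medals.max())  # order is meaningful: bronze ... gold
# eye_color.min() would raise a TypeError, because no order is defined.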

Let us look at some examples to exercise these concepts.1523

Here we have preschool, elementary school, junior high school, college, and graduate school; what kind of scale do they form? 1529

Let us see preschool, elementary school, junior high, senior high, college, graduate school, they have an order, check.1538

Are there even intervals? 1549

The difference between preschool and elementary schools, preschool might take maybe 2 years and elementary school might take 6 years.1556

Even there alone we can see they actually take different intervals. 1564

Junior high might be 2 to 3 years, high school is 4 years, college 4 years, graduate school that could to be anywhere from 2 to 10 years right.1568

This definitely does not seem like they have even intervals.1581

And because of that, even if we assign these things numbers like 1, 2, 3, 4, 5, 6, the distances between those numbers would not really be equal. 1588

I would say there is no real 0 either. 1603

Because it does have order, let us go with ordinal scale.1607

Example 2, in one state voters register as Republican, Democrat, or Independent, which scale of measurement is used?1617

Here, is there an order to this like there was for the schooling? 1625

Not really.1630

You may have a different opinion depending on your political leanings but these are just different categories of people.1631

I would say that this is a nominal scale.1639

Even if we assign numbers to it, they will be purely symbolic.1641

Example 3: a math professor gives students a 30 item test on the first day to ascertain his students' basic math knowledge. 1649

Bob got a 0, Joe got a 10, Carlos got a 20, and Nate and Layla got perfect scores; what kind of scale of measurement is this? 1657

0 actually does sort of mean something, if you think about it as how many items they got correct. 1668

And getting 1 item correct versus 2 items correct: does that ascertain their basic math knowledge? 1677

Let us separate it out into first basic math knowledge.1688

Basic math knowledge is the actual variable that this professor is interested in.1696

Basic math knowledge is a continuous variable.1703

Somebody could have just a smidge more or just a smidge less than someone else so every value can be covered.1707

In order to get the values for this variable they used a certain kind of measurement.1717

He used a certain kind of measurement.1725

The measurement tool he used was this 30 item test.1729

The 30 item test: what kind of measurement scale is it on? 1735

I would say it does have a true 0; 0 does mean something, namely that you got 0 items correct. 1742

It does have even intervals when you are counting how many questions are correct, and you know that 30 is better than 20, which is better than 10. 1752

It has order.1767

I would say that this is a ratio scale.1770

Just because it is a ratio scale does not mean that it actually measures basic math knowledge in a precise way.1774

After all, someone who got a 0 on this test may not actually know nothing, right? So 1784

how the measurement actually matches up to the variable is still up for grabs, but in terms of the measurement scale, it might be a ratio scale. 1792

There is one way that it could fail to be a ratio scale, and that is if the questions are of differing levels of difficulty, 1802

so there are difficult questions and easy questions; that could screw us up. 1814

Let us just assume right now that all the items are sort of roughly similar levels of difficulty, if so then I would go with ratio scale.1823

Example 4: if the act of measurement is disregarded, which of the following variables are fundamentally discrete and which are continuous? 1834

Temperature is probably continuous, because you could be a little bit hotter, a little bit hotter than that, and so on. 1845

Every kind of value can appear on that scale; there are no gaps. 1855

Time elapsed, this is also continuous because you could have every small increment of time accounted for.1864

For gender, I would say this is discrete, because there is not every single kind of variation in between. 1874

Brands of orange juice I would also say is discrete; this actually sounds nominal. 1886

Size of family, this is also something that is discrete, again it is hard to have 2.75 people in the family.1894

Merit rating of employees so how much merit does an employee deserve?1904

Fundamentally that is continuous, one employee could be just a little bit better or worse than another employee.1909

They could be very close.1916

In the same way achievement score in mathematics that could also be continuous 1918

because somebody might be able to achieve just a little bit more in math than someone else.1923

That is example 4, thanks for watching www.educator.com.1931

Hi and welcome to www.educator.com.0000

We are going to be doing a short lesson introducing you to Excel.0002

If you have already worked with Excel before, please feel free to move on. 0007

Before we get to visualizing distributions in Excel, we just want to give you a little overview.0013

Excel is a nice, handy spreadsheet program. 0018

It is pretty easy to use, most computers have it and it is useful because a lot of companies and laboratories use Excel.0021

It is a nice real life skill to have.0029

Another thing about Excel is that it is a good short intro to programming.0032

It can handle iterative computations, computations that you have to do over and over again and small calculations in bulk.0036

Here is how Excel is organized, it is based on workbooks.0047

Think of a file as a workbook, it is a series of what we call sheets.0052

Each file, when you save an Excel file, is a collection of sheets called a workbook. 0056

Just to show you on a real Excel workbook, notice how it says workbook up there.0065

When you save this file and I hit save here, this whole file is going to save several sheets and the sheets are listed down here.0070

Now we only have one sheet but here I'm going to add on another sheet.0079

We have sheet 1 and 2.0084

You can have 4 or 5, all kinds of different sheets. 0086

You can also rename these sheets to whatever you want.0090

We could call this one data.0093

And there you go, that is our sheets.0097

It is a little bit small here and let me try to, it still ends up being small but hopefully you could see that in the corner of your screen.0099

In each worksheet you are going to see columns and rows.0112

Columns are going to be shown to you and indexed by a letter.0116

Columns are always letters like ABCD.0122

The rows on the other hand are always going to be indexed by numbers like 12345.0126

Each cell or square has a name that you can index by saying the column name and the row name.0132

Something like A1, B5, these are all cell names.0140

Each cell can accept a number, text, or formula.0146

We will get into what those are.0150

Just to show you again in Excel: here are my columns, indexed by letters like A, B, C, D. 0153

Here are my rows, indexed by 1, 2, 3, 4, 5. 0163

And each cell is named by a letter, A, B, C, D, and a number, 1, 2, 3, 4, 5. 0164

If you click on this cell, this cell is B2.0174

Let us talk a little bit about the tools.0183

Excel usually has a menu bar, which is sort of your standard Microsoft suite toolbar. 0186

It also usually has a toolbar for things like formatting your words and letters, fonts, colors, whether you want things to be centered or not.0195

Those things are pretty basic.0206

It usually has a formula bar.0208

This is unique to Excel and different from all the other Microsoft suite programs. 0210

In order to let Excel know that you want to type in a formula, you start the formula with an equal sign (=).0217

Just to show you that on Excel, here I could write down A, B, C or I could write down a number.0224

Here we have your standard toolbar for things like hey I want to save it or I want to print it.0237

But then you would probably also have something like a formatting palette 0246

to help you figure out what font you want, say to make this 10 red. 0250

Do I want my 10 spaced in the middle or aligned to the left? 0259

You can also make this 10 face different directions; we could turn it orthogonally. 0269

Let me turn it back so that we will get to use it again.0277

If I want to write a formula I would just start by writing an equal sign (=).0280

A formula can take lots of things, and we are going to get into what some of those things mean. 0286

One of the things I can do in a formula is reference another cell. 0290

Let us say I want the cell to have whatever is in this cell B2.0295

If I click on B2 then this formula says this cell is going to be equal to whatever is in B2.0300

Hit enter, and it should show the same thing that was in B2. 0307

I could change B2 like I could make that 100 and that is going to change this one immediately because it is just a formula.0311

It is just pointing to this cell and saying whatever it is in it, take that on as well.0321

Some of you may have a separate formula bar, or you might, by double-clicking on the cell, be able to see what is written in here. 0328

We will probably show it to you with the formula typed inside the cell, but once again, if you want to use the formula bar, that is not a problem. 0337

That is basically it for Excel organization; now we will go on to how to reconcile Excel with the data organization 0356

that we learned about in statistics so far.0365

Excel plus data: in Excel, we know that a sheet is called a worksheet. 0370

In statistics language that is where we are going to put our data.0378

Each row in Excel is referenced by numbers.0381

Each row in data is going to represent a case.0385

Whatever object we are interested in studying or analyzing.0392

In Excel the columns are going to be referenced by letters and these columns are going to represent variables in our data.0398

Each cell, referenced by a letter and a number put together like A1, is going to take on a value. 0407

One of the values of our variables. 0416

That is how Excel and data come together; hopefully you have learned a little something from this short introduction. 0423

Do not worry if Excel is still a little bit new to you, you will get used to it at the end of this lesson.0430

Thanks for using www.educator.com.0436

Hi welcome to www.educator.com.0000

We are going to be talking about how to create frequency distributions in Excel from raw data.0003

We are going to overview one sample data set that is in Excel already; you can download it from one of the links below. 0012

Then we are going to talk about how to create frequency distributions from that data, 0022

but in order to create these distributions, visualizable, seeable distributions, 0027

we need to go first from the data to frequency tables; then from the tables we will go to the visualizations. 0034

First, going from raw data to frequency tables.0046

The reason we want to do this is oftentimes when we look at raw data it is really hard to make sense of.0050

It is just rows and rows and rows of data.0055

It would be nice if somebody could summarize that data for us so that we can visualize it.0059

When we summarize and visualize that data we get a sense of what the data looks like.0066

We are going to be talking later about actual shapes of distributions.0071

There are two ways to make frequency tables in Excel. 0076

One is by using formulas. 0080

Here we are going to be using the formula COUNTIF, and the other way is to use pivot tables. 0083

I’m going to show you one example of using pivot tables but we are going to be using mostly the formulas.0090

If you want to open up your Excel file that has all of our data in it, this is a sample data set of 100 friends from www.facebook.com.0100

Notice that they all have this CID which is their case ID and each column shows some sort of characteristic or variable.0110

Each cell for each person has a value for that variable.0124

Let us look at example 1: CID 1, case number 1. 0131

This person has 4 tagged photos; not a lot of tagged photos. 0137

They seem to have 0 mobile uploads, again not a lot; maybe they do not have a smartphone, right? 0143

If we go down the line we could see that there are lots and lots and lots of variables here.0150

There are tagged photos, mobile photos, uploaded photos, profile pictures, then number of friends, number of siblings, relationship status right?0154

There is a whole bunch of these.0167

Here is one that we are going to be focusing on today, birth month.0169

Birth month is going to be important for us today.0172

We are going to be looking at age and height.0176

Suppose I show you these 100 people all at once, so you can see them. 0184

Here are these 100 people: what can you tell me about their age? 0193

What can you tell me about their height? 0197

It would be hard to do, because it is just lines and lines and lines of data. 0199

It would be nice if there were a way to easily see all the data at once, in a way that is a little more tangible to us. 0204

That is where we are going to be talking about how to visualize these and how to create frequency tables.0216

In the files that I provided for you, I put in little tabs already.0219

One of the sheets has all of our data in it and one of the sheets talks about the variables.0225

Here we have a whole bunch of different variable names like the case ID number, the tagged photos, 0235

how many photos they are tagged in, mobile uploads, how many mobile photos uploaded, relationship status, birth month, birth year, gender.0239

These are a whole bunch of different variables that are already in this data set.0249

I also have a column that tells you what kind of measure it is.0254

Is it a nominal measure where it is just a number but it really stands for a name?0259

Relationship status is one of those where there is a number there, like 1, 2, 3 or 4, but it does not mean 0264

that the relationship status is literally like the number 1. It actually means, if you scroll over: if they have a 0, their relationship status is blank. 0271

If they have a 1, it means that they are single. 0283

If they have a 2, that means they are in a relationship. 0286

If it is a 3, they are engaged. 0289

If it is a 4, they are married, and if it is a 5, it is complicated. 0291

And a 6 means other, right? 0295

That is an example of what we call a nominal type of measure.0297

Just so you can see all of these things at the same time: if you look down here, there are these two little blue rectangles. 0302

If you drag that over then you could sort of keep this column just static and locked while you move these columns.0311

We can also see that birth month is what we call an interval, it can also be seen as ordinal.0324

It is not quite interval because it is technically like 30 or 31 days, it is not exactly the same interval but you could sometimes call it interval.0332

Each of the numbers represent one of the months.0344

Birth year is also interval, there is an interval of exactly one year.0350

Gender is obviously nominal because even though there is a 1 or 2 it does not mean that their gender is 1 or 2.0354

It means that if they have a 1 they are male.0362

If they have a 2, they are female.0364

Something like friends is really easy to understand, though, because friends is a ratio measure. 0366

It is the count of how many friends they have, so that is a continuous type of variable, and if they have a 0, it means they have no friends. 0372

That is very rare on www.facebook.com but it could happen.0382

I’m going to move this locked piece over.0387

On the next tab, you can see it says birth month on it. 0391

So far I have created a little set up so that we could begin our frequency table.0397

A frequency table is just a count of how many people are born in January.0402

How many people are born in February and so on and so forth. 0408

Now if we had to do that by hand, it would be hard. 0412

We would have to go to our data; click on data. 0414

Go to birth month, and we would have to count up how many people have a 1: 1, 2, and so on. 0418

But this is a very error prone process so we are going to use Excel to help us do that really efficiently.0426

First, let us go to our first example.0436

We have here a data set with data from 100 www.facebook.com friends.0440

Are more of these friends born in a particular month, or is the number of births fairly uniform across the year? 0444

Well is there reason to believe that one month is more popular for having babies than another month?0452

We are not sure, but it is hard to literally see the answer to this question 0458

by looking at the data, because the data just look like this giant list. 0464

That is why we are going to create frequency tables.0470

In order to create frequency tables we can start off with the formula.0474

In order to do a formula remember we always start off with the equal sign (=) to tell Excel “hey I’m doing a formula here”.0479

In order to count how many ones we have we could use the count formula.0486

It is a formula that is already prewritten in Excel.0493

Excel will just do it for us.0496

If we just stopped at the word count, it would just count how many things you have.0498

It would not count how many ones you have, right.0504

We want to use the formula COUNTIF; that is the function that we want to use. 0508

What is handy about Excel is that once you type in something then it will tell you what inputs you need.0514

Here it says you need the range.0521

The range of cells that you want Excel to look at as well as the criteria.0523

Here I’m going to tell Excel we will look over at my data.0529

I’m going to click on data and click from this one all the way down to the very very last row. 0535

And if I go back to birth month, then it should say data from I2 all the way to I101, but it has it in there twice, so I am going to delete this part. 0547

That is the data that I wanted to look at.0567

This little colon right here is telling you the range. 0570

It says go from I2 all the way to I101.0574

Next is the criteria I want, and before I put my criteria in, Excel reminds me that I need a little comma in between. 0579

I am going to put in a little comma. 0589

What is my criteria? I want to count it if it is a 1. 0591

So I am going to say: count it if it is equal to whatever is in this cell. 0598

Excel will automatically put in that this is part of the birth month sheet.0602

It actually does not need this one either but it will put it in automatically for you.0611

I’m going to delete that one just so you could see but you could have it there as well.0616

It does not matter.0620

Let me finish my little function and let us look at what it says.0623

It says: count it if the data in this range is equal to whatever is in A2, this one. 0626

Let me hit enter and it should say 7.0636

7 people out of my 100 www.facebook.com friends are born in the month of January.0639

The great thing about Excel is that references are relative. 0645

If I copy and paste this cell one cell down, it will take everything in my formula and shift it one cell down, right? 0649

Let me look at this: do I want to bring everything one row down? 0665

That means my data would go from I3 to I102.0671

That is not what I want.0677

I want the data part to stay the same, but I want this part to move down. 0678

So that then it will say count if this data is equal to 2.0684

Here is what I am going to do to tell Excel to keep this part the same. 0691

I am going to put in a dollar sign ($) right in front of the I and right in front of the 2. 0695

This says: freeze the column and freeze the row. 0702

I’m going to put that also in front of this one, as well as that one.0706

That means this data set will never move but this A2 will move.0712

Notice that doing that does not change anything from my first row but I’m going to take this and copy it.0718

I am just hitting command-C if you are on a Mac, or control-C if you are on a PC, and then pasting it one cell underneath. 0724

Let us double click on this to see what it says.0736

It says: count if, and my data range stays exactly the same, from I2 to I101. 0739

That is exactly what I wanted to do.0745

Notice that now my criteria has changed.0747

My criteria has moved one row down, because I copied and pasted my formula one row down. 0750

Excel is relative. 0757

It will move everything one row down.0759

Let us try it with the next one.0762

I’m just copying and pasting this one, one row down.0764

Let us double click on it to see what it says.0769

It says count if.0771

Data stays exactly the same from row 2 to 101 but now it is comparing it to whatever is in A4 which is March.0774

The nice thing about Excel is that if you look right at the corner here, there is this little box in the lower right hand corner.0785

If you put your mouse over that it will turn into a little cross.0794

If I drag that all the way down, it will copy and paste my formula again and again all the way down.0800

We can just check one of these down here: once again, my data range has stayed the same, because I put those dollar signs ($) in there. 0807

My criteria has moved down to A10 now.0816

I have my frequency table now.0820

Frequency tables are nice because they just give you the raw numbers: in the month of January there are 7 people who have birthdays. 0824

In the month of July there are 10 people who have birthdays then.0833

We could look at our data.0837
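If you happen to be working in Python instead of Excel, the same frequency table is one line with pandas; the file and column names here are hypothetical stand-ins for the downloadable data set.

import pandas as pd

data = pd.read_csv("facebook_friends.csv")   # hypothetical file name

# One line does what the column of COUNTIF formulas does:
# count how many cases fall in each birth month.
freq_table = data["birth_month"].value_counts().sort_index()
print(freq_table)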

We could stop here but I want to show you another way that we could create frequency tables.0839

I’m going to go back on my data and show you a second way.0848

The second way is less common but I still want to show it to you because we may use it once in a while.0853

We are going to use what is called pivot tables.0857

What I’m going to do is just put my cursor anywhere and open my Excel toolbar.0862

Unfortunately, you cannot see it on this screen.0870

Open my Excel toolbar.0872

There is a little tab called data.0873

Seldom used.0877

If you scroll down there should be something that says pivot table or pivot table report.0880

I’m going to click on that.0888

Once that comes up, you should have a little pivot table wizard that pops up, and it will ask: where is the data you want to analyze? 0893

It is in my Microsoft Excel database. 0903

Is this the data you want to use?0907

Yes, I want from A all the way to N and from A1 all the way to 101.0908

That is next.0924

I want to put my pivot table on a new sheet, just so I can show you.0925

I’m just going to hit finish.0930

A new sheet should pop up; it is probably called Sheet 1. 0934

I’m just going to make this a little bigger for you.0939

A little pivot table should pop up.0945

You should also have a little pivot table tool bar that also pops up.0949

Let me drag it in for you.0955

Here we go.0966

This is the little pivot table tool bar that comes up.0967

This pivot table tool bar has all of my variables in it.0970

I could drag these variables into this pivot table down here.0975

It actually shows why it is called a pivot table.0979

I assume it is because you could move these variables from one corner to another and that is where we get the pivot.0982

What we want is a bunch of months on this side and then I want it to tell me how many people are born in that month.0990

I’m going to look for birth month and put it in my row fields because each row is going to be a birth month.1000

I’m going to take that birth month and drag it into my data as well.1006

What it does is sum up those birth month values. 1012

For January it sums up 1 seven times, but for month 2, summing is not what I want. 1018

Instead, I am going to tell my pivot table to count how many there are, not sum them up. 1026

Go to pivot table and go to field settings and I will hit count instead of sum.1032

Then hit Ok.1039

When that happens, you can see we get basically the same numbers that we got when we used the formula. 1040

In the month of January we have 7.1045

In the month of July we have 10.1048

This is another way that you could look for frequency tables.1050
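The pandas analogue of this pivot table, for what it is worth, is a groupby-style count; again the file and column names are hypothetical.

import pandas as pd

data = pd.read_csv("facebook_friends.csv")   # hypothetical file name

# Group the cases by birth month and count (not sum) the rows in each group,
# which is the same move as switching the pivot table from Sum to Count.
counts = data.groupby("birth_month")["CID"].count()
print(counts)   # same numbers as the COUNTIF frequency table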

Notice that this one is pretty fast.1056

Pivot tables do require a little bit of work up front; there is a bit of a learning curve. 1059

Once you understand them, they are really handy. 1066

We may be using them again in the future. 1068

If you do not feel comfortable with them, feel free to also use the formulas.1072

I will be using the formulas for the rest of this lesson.1076

Let us go back to my birth month sheet. 1079

My birth month frequency table, created just through Excel formulas by themselves. 1083

I have this nice frequency table, but it would be nice if I could visualize it. 1089

Here I have to read each row, and although for 12 months it is not so bad, there might be times when this is less helpful to us. 1097

What I’m going to do is highlight the data that I want to visualize and then hit chart.1105

It should be one of the tabs up here, or you can get to it through one of your Excel menus. 1115

I am going to say give it to me in columns, though you could use bars as well. 1122

In Excel, a bar chart just means the bars are turned on their side. 1133

I’m going to use columns for now.1135

I will just pick the first one.1138

It seems the simplest.1140

I’m just going to delete that legend, it is redundant.1144

Here is my frequency chart, and we can literally see our data.1149

It also tells me what each of these bars stands for.1157

It stands for 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.1162

What Excel will do is automatically seed your X axis with numbers starting from 1 and counting up.1166

With months that default numbering happens to work out, but it is the same thing Excel always does.1174

In another example we will need to put in our own X axis.1179

Notice that here these are not means, they are not averages, these are frequencies.1185

This means that 7 people were born in January, 10 people were born in July, and 7 people were born in December.1191

And so that is what our birth month frequency visualization looks like.1205

This is our frequency distribution for birth month.1210

Let me minimize this.1215

So, is one month particularly popular for our friends to be born in?1217

It does not seem to be the case; the months all tend to have something like 7 – 10 people per month.1225

It seems that the numbers are fairly uniform.1233

Let us go into our second example.1241

Here is another example: let us take our same data, the data from 100 www.facebook.com friends,1243

and we are going to look at the age distribution in this sample.1249

Here is my Excel data; I'm just going to click on the data sheet, and when we go and look for age we can see a whole bunch of ages.1255

It seems like there are a lot of people in their 20s.1268

A few people in their late teens but here we see some people who are 0 years old.1274

In this data set, if they have 0 it means that they do not list their year of birth or they do not list their age.1281

Maybe they are embarrassed, maybe they are too young.1289

I do not know.1291

We do not learn a lot by just scrolling up and down on this data.1296

That is why it would be nice if we could look at a frequency table or look at a distribution visually.1300

I'm going to click on my age sheet, and here I have already set it up so that we can1306

do our frequency table really easily, from the lowest age in our sample, which is 17 (I have ignored the 0s, obviously),1314

to the oldest age in our sample, which is 38, with all the ages in between.1323

Let us go ahead and put in our formula to find out how many people in our sample are 17 years old.1329

To start a formula we start with the equal sign (=).1335

We use count if because we do not want to count everybody, we just want to count the people who are 17.1338

Let us tell Excel where it should find our data, what is the range of data.1346

I’m going to click on data and click from this cell all the way down to row 101.1352

I know I need a comma after that.1364

I’m going to delete that part.1371

Here is our data range, and I want to count a person if he or she is 17.1373

My inputs are there. Remember, we want this data range to stay the same all the time.1385

We do not want it to move when we copy the formula, and Excel will move it if it has the chance to.1389

I’m going to put a dollar sign ($) in front of the L and the 2, that is, in front of the column indicator1394

and the row indicator, to tell Excel to lock this data range in place:1400

always use this data, do not change it.1404
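For reference, here is a sketch of what that locked formula looks like (assuming, as in this lesson, that the ages live in L2:L101; the criterion cell A2, holding the age 17, is a hypothetical placement):

=COUNTIF($L$2:$L$101, A2)

The dollar signs keep the range fixed when the formula is dragged down, while the relative reference A2 shifts to pick up 18, 19, and so on.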

Once I have that formula, I'm just going to drag it all the way down so that it counts the frequencies for the 18, 19, and 20 year olds.1409

Let us double-check. Let us look at the 21 year olds.1424

In the COUNTIF, our data range has stayed the same because we have locked it in with our dollar signs ($).1426

Now it is saying: I will count these people if they are 21 years old; that is our criterion.1433

It looks like our formula has copied and pasted quite well.1438

Notice that for some of these ages, the frequency is 0.1443

There are 0 people who are 26 years old in our sample.1449

Now why do I want to keep that 26 in there?1454

If we skipped 26 and 28, 29, 30, 31, 32, and we looked and saw that there is one 27 year old and one 33 year old,1457

we might mistakenly assume that from 27 to 33 there is an equal chance of having1471

at least one person from our sample somewhere in that range.1478

You could see that is actually not true.1483

In between there, there is like a big desert of nobody and we want our distribution to reflect that.1486

Age is a continuous variable and so we do not want to skip any ages.1494

We want to show how the distribution looks as we look at age continuously.1500

This is nice because we can already see that the ages are clumped or clustered around maybe age 20 – 22, the early 20s.1507

It would be nice if we could really look at this.1518

One thing you might want to do is select both age and frequency.1520

Go to charts and we are going to do an X, Y scatter.1530

For those of you who have a version of Microsoft Excel later than 2008, you can go directly to column,1538

but here we are going to start with 2008 Excel.1549

We are going to need to do a little fix.1555

First I’m going to click on a scatter.1557

A scatter is nice because it shows you both the age,1560

this is age 17, and the frequency.1566

Once we have that then I'm going to go to column and then it will show me 17 through 38.1572

If I had gone directly to column, here is what will happen.1584

If I did not go through scatter first, here is what will happen.1589

Let us say I just selected the frequency and went directly to column: it will not give me the proper ages on my X axis,1595

it will only give me Excel’s default setting for the X axis, which just labels the bars from 1 all the way up to 22,1604

however many there are; that is not what we want.1614

Instead we would rather have Excel label the correct ages for us.1618

Just so that we will know that this is a frequency distribution of ages later.1625

We should go and label our horizontal X axis; we can label it age.1632

That way we will know it is a frequency table, but a frequency table of ages; that is what the 17 stands for.1641

So what is the age distribution in the sample? It is largely young.1654

They are mostly on the young side with a few people sort of in their 30’s.1658

Example 3, again from our same www.facebook.com data, what is the height distribution in this data?1666

What do their heights look like?1673

Let us see.1675

If we click on data and we look at their heights, their heights are listed in inches.1679

Remember that 5 × 12 is 60, so 60 inches is 5 feet tall, and then 68 is 5’8”.1685

It is a quick way to think of it.1696

72 is 6 feet tall; that person is pretty tall.1698

Once again, if we just look at these row by row, it is just a bunch of numbers.1703

We do not need that, we would rather have a nice frequency table.1709

Let us go to height.1713

I have already seeded it for you with the minimum height in our data set as well as the maximum height in our data set.1716

The minimum height happens to be just a little bit shy of 5 feet: 4’10”.1724

This one is a little bit more than 6 feet tall, 6’3.1734

Let us put in our frequency function.1740

Count if and let us go ahead and select the data that we want to use.1746

Now that we know we basically need to lock it in place, let us do that right here.1759

Let us lock it in place.1766

We already locked our data in and what is our criteria?1774

I want you to count it if they are 58 inches tall.1779

It seems that there is only one person in our data set of 100 that has that height.1787

I’m just going to copy and paste that all the way down.1792

Once again I'm just going to spot check: for 69 inches tall, the COUNTIF uses the correct data range.1796

It is locked in and this is the correct criteria that I wanted to use for that row.1806

Good.1811

When we look at this, it seems that there is not just one cluster.1814

It seems like there is this sort of giant spread out cluster.1818

It would be nice if we could look at this visually.1825

Let us go ahead and select both columns.1829

Go to chart and go ahead and select XY scatter.1833

This is going to give us both, it is going to use the height as the x coordinate and the frequency as the y coordinate.1839

Here we see that all our frequencies are up here because all of our heights are from 58 to 75 inches.1850

Let us change that into a column chart.1862

Here is what our distribution looks like.1870

Just in case we come back to this later it will be nice to know what these numbers down here represent.1874

I'm going to close that and go to my formatting palette.1879

I’m going to tell my horizontal axis that it should be labeled height in inches.1884

That is what our distribution of heights looks like.1902

It looks like these over here, this one seems pretty popular and these seem sort of popular.1907

These are less likely and this one a little bit less likely.1915

This is sort of what our shape looks like, and it is really easy to see in a visualization.1921

It is harder to see when we just look at the list of numbers.1928

Let us move on to our next example.1934

Example 4: that was the height distribution of everybody in our 100 person www.facebook.com sample.1940

But it is a mix of males and females.1948

What if we just wanted the height distribution of males?1951

After all males tend to be taller than females.1954

Their distributions might look different.1956

Let us look at the height distribution only of males.1958

We could also look at only the height distribution of females.1962

Feel free to do that if you want.1965

Here I'm going to use my height by gender sheet, and there is a male frequency column and a female frequency column.1970

Once again here are my heights, but we will have to figure out in our data set which rows belong to males and which rows belong to females.1982

Let us go back to our data set.1993

Here is my column for gender, my variable of gender.1998

Some people are gender number 2 and some people are gender number 1.2003

If we look at our variables we could see that gender has been dummy coded because it is a nominal measure.2008

They get 0 if gender is blank or unavailable.2019

They got 1 if their gender was male and 2 if their gender was female.2024

Here is what we will do: we will take all of our data and sort it by gender2030

so that all the 1s are clumped together and all the 2s are clumped together.2035

I'm going to use sort.2041

Sorry about that.2054

There we go.2059

I’m going to use gender and I’m just going to sort it by clicking in this column.2060

I just want to make sure that these guys all move with each other.2070

Now it is sorted so that all of my data for males is up on top and then all of my data for females is at the bottom.2077

Just to keep it straight for myself, I’m going to just color all the heights of males, all the values for height of males,2088

I'm going to color that with the blue font color.2098

Just to help myself keep it straight I’m going to color all the females height values with the sort of pinkish font color.2106

What does my distribution of only males look like?2119

We need to start off with the frequency table again.2123

Let us go to height by gender and here I will put in count if.2126

And let us put in my range.2136

Now my range is only going to be those that I have already colored blue,2138

because I only want my range to be those that are already identified as males.2143

Here I’m going to select all these blue guys and put a comma.2150

And then tell it: if a male is 58 inches tall, then I definitely want you to count him.2164

It turns out there are 0 males that are that tall or that short for that matter.2178

We want to lock that data set in place because we know that this is not going to need to move for this column at least.2184

I’m going to go ahead and copy and paste that all the way down, and we see that2195

for the males the heights are sort of clustered up here rather than down here.2200

I wonder if that is the same for females.2208

Even though our question was really about males, why don't we do females too, just to see.2210

I’m going to start with my count if.2217

The range for females needs to be all the data that has been already identified as females.2221

Here are these pink women and I’m going to go ahead and put in a comma because I know I will need one.2227

Go back here and I will say check if the female is that height.2235

Once again I want to lock in my data.2243

I do not want that to move when I copy and paste.2248

And then it turns out that our one person who was 58 inches tall happened to be female, and I’m going to drag that all the way down.2253

We see something different in females than we saw in males.2263

Females tend to be clustered around here, with the most frequent height being about 64 inches.2267

For males, the heights are sort of clustered up here with the most frequent height being 69 inches.2275
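As a sketch of how the formula changes (the exact cells here are hypothetical, since they depend on where the sorted male rows ended up; suppose the blue male heights sit in N2:N48 and the current height value is in A2):

=COUNTIF($N$2:$N$48, A2)

The only difference from before is that the locked range covers just the rows identified as males rather than all 100 rows; the female column does the same thing with the pink range.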

Let us look at this now in a visualization.2283

I’m just going to look at the heights of males for now.2288

Hit chart and go to XY scatter because I want to know both the height and the frequency of that height.2293

We see that males are clustered up here.2305

Let me change that into a column and what do we see?2309

We see that it is like a pile.2318

The males are sort of piled up around 68 – 70 and it falls off closer to 5 feet tall.2321

There are not as many people who are way taller than 6 feet.2330

That is the chart for males.2337

Feel free to go ahead and do the chart for females.2340

That is the end of our examples today.2346

Thanks for using www.educator.com.2348

Hi and welcome back to www.educator.com.0000

We are going to be talking about frequency distributions again but now we are going to be going a little more into detail about their features.0003

In the last lesson we covered how to look at the data in Excel.0013

There is a checkmark on top of that one and we talked about how to go from data to frequency tables using our count if function. 0017

From frequency table to visualization.0026

We are going to take another look at those same examples that we looked 0029

at before except now we are going to be talking about the features of these distributions.0033

In particular we are going to be looking at their shape.0037

There are a couple of shapes you should know after this.0040

One is uniform distributions.0043

Another one is going to be called unimodal.0047

Yet another is called bimodal.0054

Especially, we are going to be looking at one called normal.0059

We are also going to be talking about center.0065

We are not going to be talking about how to calculate the center of the distribution.0068

We are going to be talking about how to think about the center conceptually in three different ways, mean, median and mode.0072

We are not going to talk about how to calculate it yet.0080

We are also going to be talking a little bit about spread.0084

How spread out is this distribution.0086

Finally we will also mention outliers, gaps and clusters whenever they are relevant.0090

Recall example 1: here we looked at a data set of 100 www.facebook.com friends0098

and we looked at whether more of these friends are born in a particular month or another.0103

Note here that it really seems to be that no particular month is super popular.0108

This is what we call the uniform distribution.0113

If you sort of squint and blur your vision a little bit, it is almost like there is a flat line here.0116

Everybody is hovering close to that line.0125

No one month is more frequent in births than any of the other months by a lot.0130

Some of these months are a little more frequent but only by a little bit.0136

You could see there is relatively little change from month to month here.0143

Other uniform distributions also look like this sort of rectangle or flat shape 0147

and these distributions might be anything from deaths occurring on days of the week.0155

Is there any reason to believe that one particular day is more favorable to die on than the other?0160

Or in rolls of a six-sided die, is there a particular reason to believe that one side might come up more frequently than another?0165

Not if it is a fair die.0174

Remember, this is now example 2; in example 2 we looked at the same data set again, at the age distribution in the sample.0181

Here we do not have a uniform distribution.0191

No matter how much you squint your eyes you are not going to see sort of a flat shape.0193

You will see a peak right here; this peak is often called the mode, the most frequent value.0197

This peak makes this a unimodal distribution.0208

I’m not going to call it example 2 anymore, I’m going to call it a unimodal distribution.0213

We will add on to that.0220

Not only that but this shape is what we call skewed.0222

If I decide to just draw a light little sketch over this guy, we see that it has this long thing we call a tail.0226

This tail goes out towards the right side, the larger values; because it is skewed and the tail is to the right, we call it skewed right.0242

It is not only unimodal but it is also skewed right.0256

You often have a skewed distribution when you have some sort of minimum or maximum value that these values are all bumping up against.0263

On www.facebook.com I think you have to be 13 years old to sign up, and maybe a lot of 14 and 15 year olds0273

have parents who are not letting them sign up.0279

At the bottom end there is sort of a wall, like an imaginary wall there.0282

The most popular ages, at least in our sample, seem to be in the 20s, and some older people use it too.0290

There is no limit on that.0296

You could be 100 years old and still use www.facebook.com.0297

Since there is no limit on that, that tail can go on for a really long time.0300

These outliers out here, you could think of them as oddballs but we call them outliers.0306

Tails are often made up of outliers.0320

Note also that because this is skewed right, if we drew a line of symmetry from the mode and 0324

we imagine folding this distribution on itself, we would not have two sides that match up.0333

We call this asymmetric as well.0342

We learned a lot here, it is unimodal, it is skewed and it is asymmetric.0346

Here we will learn yet another term, we see there are these gaps.0356

These are called gaps, nice and easy.0365

If we had a couple of people clustered in a group we call that a cluster.0369

A lot of these terms are pretty normal words that you use in everyday life.0377

Let us move on to example 3.0391

In example 3, we are interested in what the height distribution was in this sample.0393

Compare this distribution to our previous skewed distribution.0401

Is it skewed to the right or skewed to the left?0408

Is there some sort of tail here?0411

Not really.0414

There is no real tail that I can see but we do see that there are a couple of places that are popular modes, most frequent values.0416

These are 64 and 69, these seem to be the popular peaks.0428

Because we have one mode here and another mode here this is no longer a unimodal distribution.0436

This is what we call a bimodal distribution.0443

Instead of calling it example 3 I’m going to call it bimodal.0447

Is it symmetric? We could see it as almost having 2 bumps like that.0453

It is sort of symmetric but not perfectly symmetric.0464

There is no tail, and there are not very many gaps.0469

There is maybe a little bit of a gap here but not very much.0476

This is what we call a bimodal distribution.0480

Let us think about this, height distributions.0483

Well our www.facebook.com friends are both males and females.0488

and since males tend to be taller than females on average it might be that there is a cluster of males up here 0492

and a cluster of females down here that we cannot see right now.0499

Let us look at these two distributions, males and females separately.0504

Here is just the distribution of male heights from our sample.0511

Notice that here it is not really asymmetric: when you look at this mode, there is our mode right here, and you draw a line of symmetry.0517

If you imagine folding it on itself, you will get a pretty even looking hill right there,0531

a pretty even looking hill with roughly similar numbers of people on this side as on this side.0547

So instead of example 4a, this is what we would call a roughly symmetric distribution.0558

It is also what we call unimodal because we only have one mode right here.0568

What else do we notice about this?0582

We do not really see a tail, and furthermore this distribution seems to have a lot of people piled up around 69 inches,0584

with a lot more people close to 69 and fewer people farther away from 69, like at 75 or around 64.0596

This is what we call a normal distribution.0610

You could think of a normal distribution like a pile.0617

By definition, a normal distribution is both unimodal and symmetric.0620

In a normal distribution, typically the mode and the mean, or the average, are going to be the same.0632

To think about the word average you might want to think of it like this in terms of distributions.0648

Imagine cutting out this distributions, like out of cardboard and then trying to balance it on your finger.0653

Where the distribution would balance, that point, that is the mean.0662

Although we will learn to calculate this later, that is the image I want you to think of when you think of the mean.0668
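For reference (the lesson saves the calculation for later, so this is just the standard definition): the mean that this balance point represents is the sum of all the values divided by how many values there are; with 100 friends, mean height = (sum of the 100 heights) / 100.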

If we draw a smooth line around this distribution, then on either side of the mode,0675

at about 60% of the height of this peak, you will have what is called the point of inflection.0692

Here is what so important about the point of inflection.0709

Although you cannot see it very well from my picture, I will exaggerate it.0713

The point of inflection is where the distribution goes from being concave to being convex.0716

That is about right, and this point of inflection is going to be important later, because this distance right here,0726

from the center out to the point of inflection, is going to be called the standard deviation.0736

Later we will learn exactly how to calculate that but that point of inflection and the standard deviation 0752

are going to be really critical to our understanding of other distributions as well.0758
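As a side note (this is standard theory, not something worked out in the lesson): for a normal curve with mean μ and standard deviation σ, the two points of inflection sit exactly at μ − σ and μ + σ, and at those points the curve's height is e^(−1/2) ≈ 61% of the peak height, which is where the "about 60% of the height" rule of thumb comes from.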

Here we see both males and females heights plotted on this frequency distribution.0765

Just so you could see, here is sort of our female distribution and here is our male distribution.0777

Here you could see there is roughly a normal distribution for the females as well as the males.0792

We are going to say two normal distributions.0799

They are both unimodal and they both are roughly symmetric on both sides.0806

There is no tail.0814

No big gaps.0816

There is a big cluster in the middle and that is about it.0817

Here, what we thought before was a bimodal distribution, we now see is actually two normal, unimodal distributions instead.0823

Let us summarize what we have learned so far. 0839

We have learned four different shapes, uniform, skewed to the right or to the left, bimodal and normal.0843

We have also learned asymmetric and symmetric.0850

Here I’m asking: is this one symmetric or asymmetric?0853

The uniform one, yes it is largely symmetric because rectangles are symmetric.0858

Skewed, are they symmetric?0863

No, because either the right tail is long or the left tail is long.0866

Bimodal, are they symmetric?0878

This one is sort of a sometimes.0881

There can be times when these are symmetric.0883

For instance if you have two that look like that, it is roughly symmetric but you may also have bimodal distributions that look like this.0886

Then that one does not look as symmetric.0897

Normal distributions, yes they are symmetric, always.0901

I will just draw this here just so that you know.0907

Let us talk about the centers.0910

Does it have a clear mode?0913

Here it does not have a mode, there is not one most frequent value.0917

In fact all the values are roughly similarly frequent.0923

We will say no, it does not have a clear mode.0926

Typically the skewed distributions are unimodal.0929

Yes, unimodal.0933

What about bimodal distributions?0940

Do they have a mode?0942

Yes, they are overflowing with modes.0944

They have two modes in fact sometimes more.0944

You could have trimodal right? Yes.0949

What about normal distributions.0954

Well of course it has a mode because it is also unimodal.0956

Let us talk about spread.0966

What does the spread look like here?0968

The spread is roughly even across the whole range of values.0970

Does it use the point of inflection? No.0976

What about in a skewed distribution?0980

Do we use point of inflection there?0982

In a skewed distribution the point of inflection is weird because the point of inflection is going to cut it up 0985

at different places depending on whether you look at the right side of the mode or the left side of the mode.0990

Point of inflection is not quite as useful here.0995

In a bimodal distribution sometimes you can use the point of inflection but it gets complicated.1000

We will write in “it is complicated.”1005

It is only for the normal distribution that the point of inflection comes in really handy.1014

So there we will write yes.1019

The distance from the mode, or the center, to that point of inflection is called the standard deviation.1021

Let us go on to some examples that you might frequently see in text books, AP statistics, as well as a lot of general reasoning questions.1042

These are what I like to call sketch problems.1056

They will give you some sort of data set that you only know a little bit about and they ask you what kind of distribution do you think it might have.1059

We can answer these questions now.1069

Here is sketch problem number 1.1071

What if you are asked to imagine the age of each person who got his or her first drivers license in your state last year?1074

That is going to be a distribution.1081

It is a whole bunch of numbers, whole bunch of different ages.1083

Let us think about this.1088

On the X axis we will probably put age.1091

Here we are going to put frequency; actually, I will write that in.1094

Here is the Y axis.1108

Let us think about this.1110

Is there some sort of minimum or maximum age at which you can get your drivers license?1112

Yes, probably 16 in most states.1116

We will put 16 as the minimum age and probably a lot of people get their drivers license sort of early on, from 16 to 20.1122

There are probably very few people getting their first driver's license by the time they are 25 or 30, and at 40, even fewer people.1131

That is already starting to sound like maybe somewhat of a skewed distribution.1143

Probably lots of people in their early 20’s, maybe late teens, getting their drivers license 1149

and very few outliers were getting their first drivers license when they are 40 or 50.1158

Even though you might not know very much about people getting their first drivers license you can already tell the shape of this distribution.1168

It is skewed but not only skewed but the tail is to the right.1178

We call that skewed right, it is probably unimodal or there is probably some cluster up here.1182

It is probably asymmetric because it is skewed.1191

Next example, here is sketch problem number 2.1200

Let us think about the life expectancy of females in Africa and Europe.1205

When we think about life expectancy that is considering how long are females in Africa and Europe going to live.1211

Age or years should probably be on the X axis.1217

On the Y axis once again we are going to be looking at frequency.1224

I will just say freq.1228

Let us consider the life expectancy in Africa and Europe.1232

Africa has a lot of diseases and malnutrition and other factors that are going to affect life expectancy of females.1236

Also Europe on the other side of the spectrum is going to have a lot fewer of those same issues.1245

The life expectancy of females in Africa might be shorter than the life expectancy of those in Europe.1255

We might see something like a bimodal distribution that is actually caused by two unimodal distributions.1261

Let us put Africa in red and European females maybe in blue.1269

Maybe most European females die when they are older, like 70.1281

Maybe in Africa the life expectancy is less, maybe 50.1288

Here we see two unimodal distributions, but the problem did not ask us to plot these separately.1298

When we combine these, we see a bimodal distribution.1305

Let us go into the next problem.1319

Sketch problem number 3 says well what about the distribution of the last two digits of the telephone numbers in the town or city where you live?1323

Do we have any reason to believe that some two-digit endings are going to be more popular than others?1332

Let us think the last two digits of the telephone numbers.1341

If we put that on the X axis, we can basically go from 00 all the way up to 99.1345

That is our range of possibility.1366

Let us see what might the frequency be.1371

We do not really have a reason to believe that 00 is more or less popular than 99.1375

We do not really have a reason to think 99 is more or less popular than 62 or 47 or 35.1381

We might be thinking about a roughly uniform distribution where each of these is roughly equally popular.1390

You can continue that on, so this is probably one of those uniform distributions where no one number is way more frequent than another.1405

Let us move on to sketch problem number 4.1425

What about the length of time students used to complete a final exam within a 50 minute class period?1428

Let us put minutes on our X axis and the frequency over here on the Y axis.1434

Now, since it is a 50 minute limit, 50 is going to be the max value; people should not be allowed to use 51 or 52 minutes.1448

The numbers are probably bunched up against that wall.1460

Remember, skewed distributions usually happen when there is some sort of wall, an imaginary wall in this case.1466

Probably most students might take a little less time, a little more time.1473

Maybe somewhere close to 50 and maybe some students will take all the way up to 50 minutes.1480

Maybe the students will be clustered around there and probably very, very few students will finish it in like 10 minutes or 20 minutes.1488

Maybe it will look something like this.1499

Fewer students are finishing it in 10 minutes but maybe there is one fast guy who does.1513

Maybe just a few more finishing it in 20 minutes but most of the students finishing around 40 or 50 minutes.1518

That is the last example problem, thanks for using www.educator.com.1525

Hi and welcome back to www.educator.com.0000

We are going to be talking about dot plots, and histograms in Excel today.0003

First we are going to talk about going from data to dot plots.0009

Remember before we always have to go from data to frequency tables and then to some visualization.0013

Dot plots are nice because they can let you go straight from data directly to the visualization.0019

We are going to talk about going from data to histograms.0024

Histograms are going to be really helpful to us especially because a lot of times we are going to have variables that have many values.0028

We are going to talk about ungrouped values, which we have looked at before, and grouped values.0039

Finally we are just going to talk a little bit about plotting frequencies versus relative frequency.0046

Relative frequency is just a fancy way of saying it is frequency but divided by how many cases you have.0052

It is really like percentage.0060
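As a quick sketch (reusing the hypothetical locked range from earlier, with the data in L2:L101), a relative frequency is just the same COUNTIF divided by the number of cases:

=COUNTIF($L$2:$L$101, A2)/100

With 100 friends, dividing by 100 turns each frequency into a proportion; dividing by COUNT($L$2:$L$101) instead works for any sample size.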

Previously we always had to go from data, stop over at making a frequency table, and then go to the visualization.0065

But now with dot plots we cut out the middleman and we can go directly from data to the visualization.0073

That is a really handy thing there.0079

If you look in your Excel file the data is going to be the same as the data we have been working with, the 100 www.facebook.com friends.0085

Here is what we are going to do: before, we created a nice little graph0094

using the Excel tools to create the visualization, but now we are going to use dot plots.0100

Excel will not create dot plots for you directly.0108

Instead we have to sort of fudge it, and the fudge actually comes in handy sometimes.0111

Let us go to birth month.0119

We already know that birth month should have a uniform distribution.0121

We already looked at this data before.0125

What we are going to do is just look at how to transform it directly 0127

from data into a visualization without having to use the Excel graphs or chart.0131

If you go to your birth month sheet here I have just put up the months, 1 through 12.0140

It just looks sort of like a frequency table but if you watch carefully we are going to transform it.0148

Let us go ahead and put in our regular formula for how to find frequency.0156

That is equal sign (=) because we are starting off with a function.0163

Count if because we wanted to count if that person was born in the month of January.0167

Let us put in our data.0177

If we click on data and we scroll down to months, here is birth month.0180

I'm going to select all of these rows.0188

So far it seems like we are just making a frequency table.0192

I’m going to put in a comma because I know I’m going to need that.0197

Let us go back to birth month.0201

I want you to count it if the birth month is January.0203

If we just hit enter here, that would mean we are just counting how many people are born in January.0210

We are going to do something a little bit different.0217

I want Excel to visualize for me how many people there are.0219

Not give me a number, but actually show me a picture.0224

Here is what we are going to use.0229

We are going to use the repetition function, and that is rept; you do not have to put it in capitals.0230

I just wanted to distinguish it from the count if. Put in a parenthesis, because that is how you are going to put in the inputs.0238

Here Excel reminds us that we need text, whatever text you want to repeat over and over again and the number of times.0247

The text, you can pick your favorite text.0256

You just have to make sure that it is in quotation marks.0259

I'm going to put in an at symbol, that is my favorite one.0263

I’m going to close the quotation marks and put in a comma.0271

The beautiful thing about count if is that it is going to return to me a number.0274

The output is going to be a number.0279

If I just leave this here it will just output to me 7.0282

This function will actually read repeat this at symbol 7 times.0287

At the end of this I’m going to put a close parentheses.0296

So that my parentheses match up.0302

And then I’m going to hit enter.0305

Great.0308

Let me just make these rows a little bit larger so you can see everything.0311

Here instead of having the number 7, I have 7 little symbols.0317

And you do not have to use the at sign (@) if you do not want to.0323

You could use a star, an asterisk.0327

You could use anything you want.0332

You could use an o, anything to help you see that this many people were born in the month of January.0333

This is a direct way of going into the visualization.0347

We can actually copy and paste this just like we did before.0351

All we have to do is make sure that our data is locked with our dollar signs.0357

I’m going to hit enter and all I’m going to do is take that, drag it all the way down and we have a visualization right there.0367

We do not have to make Excel do any extra work with the charts or anything like that.0377

That is the handy thing about a dot plot.0382
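Putting it all together, here is a sketch of the whole dot plot formula (assuming, hypothetically, that the birth months live in M2:M101 and the month number for the current row is in A2; your column letters may differ):

=REPT("@", COUNTIF($M$2:$M$101, A2))

COUNTIF returns the frequency for that month, and REPT prints the @ symbol that many times, so each row becomes a little bar of symbols with no chart needed.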

We are making dot plots in Excel, but a lot of times you may be asked to make dot plots with paper and pen.0385

No actual statistician does that anymore, but in a statistics class you may be asked to do that.0392

The nice thing about dot plots is that you do not have to put the data in order or anything.0399

You could just go and just sort of go down the line and put a dot where 5 is, 0404

put a dot where 3 is, put a dot where 6 is, put a dot where 9 is.0412

You can see that it is a really easy way to visualize the distribution and you do not have to do anything to your data before hand.0416

But once again, doing it by hand is pretty tedious, and people do not really do it that way.0426

At least not when they are doing real statistics.0431

Alright, that is dot plots but let us talk about the pros and cons.0435

Dot plots: the pros and cons.0445

The pro is that it is nice and quick.0447

It is quick and dirty, right.0451

You could go directly from data to dot plots, no middleman, no frequency tables.0453

You do not have to know anything even about statistics and you could do it.0458

The con is that you can only do this with small data sets.0461

It is not useful with giant data sets.