
Correlation: r vs. r-squared

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Roadmap 0:07
    • Roadmap
  • R-squared 0:44
    • What is the Meaning of It? Why Squared?
  • Parsing Sum of Squares (Parsing Variability) 2:25
    • SST = SSR + SSE
  • What is SST and SSE? 7:46
    • What is SST and SSE?
  • r-squared 18:33
    • Coefficient of Determination
  • If the Correlation is Strong… 20:25
    • If the Correlation is Strong…
  • If the Correlation is Weak… 22:36
    • If the Correlation is Weak…
  • Example 1: Find r-squared for this Set of Data 23:56
  • Example 2: What Does it Mean that the Simple Linear Regression is a 'Model' of Variance? 33:54
  • Example 3: Why Does r-squared Only Range from 0 to 1? 37:29
  • Example 4: Find the r-squared for This Set of Data 39:55

Transcription: Correlation: r vs. r-squared

Hi, and welcome. We are going to talk about the difference between r and r².

First I am going to introduce the quantity r² and why we need to understand it. Why can we not just square r and say that is r²? We want to know what the meaning of r² is. In order to get to the meaning of r², we have to understand that the sum of squared differences actually splits apart in different ways, so we are going to learn how to parse the different parts of the sum of squared differences. Then we are going to talk about what r² means for a very strong correlation, and what r² might be for a very weak correlation.

One reason you will practically need to understand r² is that when you run a regression on the computer, in SPSS, Stata, or any of these statistics packages, they will often give you r² as one of the outputs, and you might look at it and wonder: why are we given r²? We want to know what the meaning of it is. Why r²? Why not just r? Often if you just find the correlation you will get r, but if you run the regression you will get r². So what is the deal?

r² really is just r squared, but there is a meaning behind it. I want to stop and say that it is like the difference between feet and feet². They mean different things. It is not just that you square the number and that is that; it is also about the actual unit. You have to understand what the unit is, because feet is a measurement of length, but square feet gives you area. Those are different things. They are obviously related to each other, but they are very different ideas. Because of that, you need not only to know how to calculate r², but also to know the meaning of r².

Again, in order to understand the meaning of r², we will need to parse the sum of squares. Remember, the sum of squares we have been talking about takes something like x or y, the difference between x and x̄ or between y and ȳ, squares all of those differences, and then adds them up: the sum of squares.

When we say sum of squares, you might hear that this is about variability. The sum of squares describes variability, because you are always taking the deviation between your data and the mean; sum of squares is an idea that is highly associated with variability. Another way of thinking about parsing the sum of squares is parsing variability, because variability comes from a variety of sources. Here we are going to talk about a couple of those sources and how to figure out that this part of the variability comes from one source and that part comes from another. When you put it all together, you have the total variability.

Total variability is going to be indicated by SST, the sum of squares total. This idea is all the variability in the system: all of it. We are going to take that and parse it, splitting it into two pieces that together make up the total, but which come from two different places. One source of variability is the relationship between X and Y, and that can be explained by the regression line; this is the sum of squares from the regression, SSR. The other piece is the leftover sum of squares: there is going to be some variability left over that is not explained by the regression line, and that is the sum of squares error, SSE.

When we say error, we do not necessarily mean that we made a mistake. It is not that we made a mistake. Error often just means variability that is unexplained: we do not know where it came from. We do not know if it is because there was some measurement error, or because there is just noise in the system, or because there is another variable that is causing this variation. Sum of squares error just means variability that we cannot explain. Statistics uses the word error, but it does not mean we made a mistake; it means variability that we do not know the source of. There is no explanation for it.

To break this down: the sum of squares total is usually what we get from looking at the difference between y and just the mean. That is the classic sum of squares, because the mean should give us some information about where y is; it is as if every single point were predicted to be at the mean. That is error, but it is the total error. Some of that error, some of that variation away from the mean, can be accounted for by the regression: here the values are farther and farther up from the mean, bigger than the mean, and here the values are smaller than the mean. The residual, which we have already looked at before, is the rest; you can also think of it as residual error, the rest of the variation that is not accounted for by that nice regression line we found.

We can think of the regression piece as the explained variability. What explains the variability? The regression line; it says the data vary systematically, like this. The residual is what we call unexplained variability; it comes from real error, or just variability in the system caused by some other variable. When you put the explained variability and the unexplained variability together, you get the total variability.

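In symbols, the decomposition we just walked through is:

    SST = SSR + SSE
    (total variability) = (explained by the regression) + (unexplained, leftover error)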
Let us break down specifically and mathematically what the sum of squares total, the sum of squares error, and the sum of squares regression are. I will give you a picture of what these things are.

First, the sum of squares total. One thing we probably want to do is get a rough idea of where the mean is. Let us say the mean is something like this; I am going to call that ȳ, because it is roughly the mean of y: closer to these points, with these points further down pulling it down a bit. I want to know the sum of squares total, the total variability that you see here. Because we are squaring all of these differences, we are not just interested in the distance itself; we are interested in the area of little squares. It is not only the distance down: imagine that distance squared, this area. That is the squared variation for one point. Imagine doing that with all of these points. You create these squares, some big, some little, and you add up all of those different areas. That is the sum of squares total: the total variation in our data away from the mean. Would it not be nice if all our data looked just like the mean? Then I could predict this data, but this data has more variation.

Let me move over to the sum of squares error, because that is one we actually know. In order to find the sum of squared error I need the regression line, so I am just going to draw a regression line like this; it might not be perfect, but something like that. Remember how we found a residual? A residual is just the difference between my y and my predicted ŷ. These are my ŷ values, and I want the difference between y and ŷ, but we are squaring that difference: instead of just drawing a line, we draw a square and imagine getting that area. That is the squared residual, or error, for one point. We are going to do that with all of the points: find that area, that area, that area, add up all of those areas, and we get the sum of squared error, the variation away from the regression line. That is our unexplained variation, while the sum of squares total is our total variation.

Now what is the remaining part? It is the variability that is already accounted for by the regression line: the difference between the predicted ŷ and ȳ. Here is the idea. If we just have ȳ, we do not have a lot of predictive power; we are just saying y is, on average, the average. We only have one guess: the average. If we have the regression line, we have a more refined guess: if I know what x is, I can tell you more closely what y might be. Let me redraw my regression line and pretend it is a nice regression. Here is my ŷ, and here is my ȳ. What I want to know is: how much of the variability is accounted for simply by having this line? Having this line gives us more predictive power; how much predictive power is that? For this point, this is now my difference, and I am going to square that difference. Here is another point, and here is its difference; this difference is nearly nothing. Here is another difference, right here. Let me give one more example: right here, for this point, this would be the difference. Looking at all of these, you can think of them as the squared spaces in between my regression line and my mean line. That tells me how much of the variance in the data is accounted for by the regression line. That is roughly the idea.

Let us think about actual formulas, and to help us out I have a more nicely drawn version than my crappy dots, where you can see the squared differences between my actual data points and my mean. Here are my squared differences. Here is that same data, the same data from before, except now we are looking at differences from the regression line, not the mean line. And here we are looking at differences between the mean line and the regression line.

Let us write these things down in terms of formulas. In order to find the sum of squares total, think about what it is as an idea. We want a sum of squares, so I know each of these is going to be a sum of something squared; I can already write that down. We already know it is going to be the total variability: for every y, take the difference between that y and the mean, square it, and get that area; get all of these areas and add them up. The piece inside is just y − ȳ. If we want to fill this out, it means: for every single point we have, get y − ȳ, square it, and add them all up. That is the idea; that is the sum of squares total.

Now let us go over to the sum of squares error. I sometimes also call it the sum of squares residual, because it is built on the idea of the residual. Remember, the residual was y − ŷ, so we are squaring the difference between y and ŷ. That is really easy: y − ŷ. If you want to fill it out, you could obviously put in the i subscripts as well, just so you know you have to do it for every single point.

Then there is the sum of squares for the regression. I know it is confusing that both "sum of squares regression" and "sum of squares residual" start with r; this one is the sum of squares regression. I want to think of this one as the good guy: you want to be able to predict y from x, and this one helps you, because it sucks up some of the variance. The error term is the leftover that we do not know how to do anything about. When we talk about the regression piece, we are talking about the difference between ŷ and ȳ; that is ŷ − ȳ, squared and summed, and you could obviously do that for each point.

There you have it, the formulas; but if you understand the ideas, you can always interpret them by asking: what is this a picture of? The total is a picture of the differences between the data points and ȳ; the error is a picture of the differences between the data points and ŷ. If it gets confusing which one uses ŷ, just go back to the picture and ask yourself whether you are looking at the total variance, or the variance left over after we have the regression line.

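Side by side, the three formulas, plus a minimal Python sketch that computes them on a small made-up dataset (the numbers are placeholders, purely for illustration). For a least-squares line, the decomposition SST = SSR + SSE holds exactly, up to rounding:

    SST = Σ (yᵢ − ȳ)²    squares between the data points and the mean line
    SSE = Σ (yᵢ − ŷᵢ)²   squares between the data points and the regression line
    SSR = Σ (ŷᵢ − ȳ)²    squares between the regression line and the mean line

    import numpy as np

    # Made-up data, purely for illustration.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

    # Least-squares regression line: y_hat = b0 + b1 * x
    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x
    y_bar = y.mean()

    sst = ((y - y_bar) ** 2).sum()      # total variation
    sse = ((y - y_hat) ** 2).sum()      # unexplained (residual) variation
    ssr = ((y_hat - y_bar) ** 2).sum()  # variation explained by the line

    print(sst, ssr + sse)  # the two should match (up to rounding)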
Okay, now that you know what SST, SSR, and SSE are, we can talk about r², because you need those components. r² is often called the coefficient of determination; not the "coefficient of correlation squared," but the coefficient of determination. One of the reasons r² is important is that it has an interpretation: it is the proportion of the total variance of y explained by the simple regression model. Remember, variance is the standard deviation squared, which is why we are working with sums of squares.

Here is the idea. Here is all this variance, and we do not know where it comes from; I do not know why the points are all varying. Then we have the regression line, and the regression line explains where some of the variation away from the mean comes from: it comes from the relationship with x. If the regression line is doing a good job, then a lot of the total variance is explained by the regression line, by that predicted y. If the line is not doing a very good job, then it does not explain much of the variation; there is extra variation above and beyond it, and r² will be very low, because only a small portion of the variance is accounted for.

Given that, let us talk about what r² looks like for a strong correlation and for a weak one. If the correlation is very strong, think about it this way: whatever your sum of squares total is, that is all the variance, and the regression is going to account for a lot of it. Say the total is 100% of the variance and the regression accounts for 85%; then the error piece would be the small remainder, 15%. That is how this works: the two pieces added together give you the total. So if the correlation is very strong, SSE should be small and SSR should be large, and if SSE is small, then the proportion of error over the total is a small number.

Here is the formula for r²: r² = 1 minus that proportion of error over the total, that is, 1 − SSE/SST. SSE/SST is the unaccounted-for, leftover error divided by the total variation: the unexplained variation over the total variation. For a strong correlation that ratio should be very, very small, and 1 minus a very small number is a number very close to 1. That is a strong r², because the maximum r² can be is 1. If r² is large, meaning close to 1, then much of the variation is accounted for by the regression line; the regression line did a great job of explaining the variation. If we are near the regression line, I can predict y for you given x. It is doing a good job.

On the other hand, suppose the correlation is weak; the correlation tells us how line-y the data are. Even though we have a line, it does not explain much of the variation; there is a lot of leftover variation, so SSR is low compared to SST. If the total is 100% and the line is not doing a very good job of explaining variation, explaining only 15% of it, then we have 85% of the variation left over. If we put the sum of squared error over the total, that ratio will be large: a large proportion of the total variance is still unaccounted for, unexplained. Then 1 minus a larger number, one closer to 1, gives a very small r². If r² is small, it means that not a lot of the variation was accounted for by the regression line; the regression line did not do a very good job of explaining the variation in our data.

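Plugging the percentages from this discussion into the formula makes the contrast concrete:

    Strong correlation: SSE/SST = 15/100, so r² = 1 − .15 = .85
    Weak correlation:   SSE/SST = 85/100, so r² = 1 − .85 = .15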
Let us do some examples. We have worked with this data before; for the example data above, the regression line and the correlation have already been found and are given to us. Looking at the plot, it has a negative slope, and there is more rise than run, because the cost goes up really fast: for every one unit you go over, you go up quite a bit. It makes sense that the correlation is negative and strong; it is −.869, which is pretty strong, very line-y, but with a negative slope. That only gets us so far, though: it is the correlation coefficient, not the coefficient of determination, r².

The problem asks: find r² for this set of data by looking at r² = 1 − (sum of squared error ÷ sum of squares total), and once we find it, examine whether it is also r × r.

If you download the examples provided for you below and go to Example 1, here is our data, and I have provided the graph for you so you can see it; I am just going to move it over to the side, because we are not going to need it. We are going to need to calculate some things in order to find the sum of squared error and the sum of squares total. One thing I like to do is remind myself: if I looked at the sum of squared error and double-clicked on it, what would I see inside? We know that the sum of squared error takes whatever regression line we have and measures the squared distance away from it: the sum of (y − ŷ)². So I know I am going to need ŷ. What else are we going to need? The sum of squares total takes whatever my mean is and measures the squared difference between my data and that mean: the sum of (y − ȳ)². The mean I can find easily, so I should try to find ŷ as well. ŷ will be easy to find, because we have the regression line; we can just plug in a whole bunch of x values and get a ŷ for each x.

Why don't we start there? Let us find the predicted values. I am going to treat cost per unit as my y, because it was on my y-axis, so I will talk about predicted cost per unit, predicted CPU. In order to find it, I put in my regression formula: 795.207 minus 21.514 times x, and Excel will automatically do the order of operations for you; multiplication comes before subtraction. I just click on the x cell, and whatever x is, this finds me the predicted y value. Once I have that, I drag it down to find all of my predicted CPU values. It might actually be helpful to find the sums and averages of all of these as well; I am going to color those red so I know they are not part of my data. I probably do not need the sum, but I do need the averages. Now we have our predicted CPU (cost per unit): that is my ŷ. I have also found my ȳ, my average cost per unit.

Let us find the squared error terms and the squared deviations. I am going to write them down for myself as (y − ŷ)² and (y − ȳ)²; we could also write (CPU − predicted CPU)² and (CPU − average CPU)², but I am writing y just to save space. Let me compute y minus the predicted y, all of that squared, and also y minus ȳ, with parentheses, all of that squared. Now, ȳ is never going to change, so I am just going to lock that cell down. Once I have that, I can copy and paste these two cells all the way down, and then I can find the sum of the squared residuals as well as the sum of the squared deviations: the sum of all of these, and the sum of these.

I now have almost everything I need in order to find r²: the sum here, and the sum here. r² is going to be 1 minus the sum of squared error divided by the sum of squares total, that ratio. Let us first just eyeball the values we have: the SSE is smaller than the SST, roughly 1/6 of it. That is pretty good, so r² should be about 5/6, closer to 1 than to 0. We get .76, a pretty good r². Notice that r² is positive even though our slope is negative, because r² does not actually say anything about slope; it is just the proportion of variance accounted for by the regression line. Here 76% of the total variance, the majority, is accounted for by that regression line, and that is good.

Now let us try r × r; we already know what r is. Let us see if squaring it gives us .76. Squaring −.869, we get something very close; the given r was probably rounded, and because of that it does not give us perfectly precise numbers. We do not have that precision, but it is pretty close: still 76%. If you took the actual r that you computed and squared it, you would get r² exactly. So we found r² for this set of data and examined whether it is r × r, and it indeed is.

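Here is the same pipeline sketched in Python instead of Excel. The data arrays below are placeholders (the actual cost-per-unit dataset comes with the downloadable examples), so the printed values will not be the lecture's .76; what matters is that the steps mirror the spreadsheet, and that 1 − SSE/SST equals the square of the computed r.

    import numpy as np

    # Placeholder data -- the real dataset ships with the downloadable examples.
    x = np.array([10.0, 15.0, 20.0, 25.0, 30.0])         # predictor (e.g., units)
    y = np.array([600.0, 480.0, 390.0, 300.0, 180.0])    # cost per unit

    # Fit the regression line (the lecture's line was y_hat = 795.207 - 21.514x).
    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x                # predicted CPU

    sse = ((y - y_hat) ** 2).sum()     # sum of squared error
    sst = ((y - y.mean()) ** 2).sum()  # sum of squares total

    r_squared = 1 - sse / sst
    r = np.corrcoef(x, y)[0, 1]
    print(r_squared, r * r)            # these agree exactly, unlike a rounded r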
Example 2. The conceptual explanation of r² is that it is the proportion of the total variance of y explained by the simple regression model. By a simple regression model we just mean you only have the form ŷ = b₀ + b₁x. It can only be a line; it cannot be curved. That is what we mean by a simple linear regression. So what does it mean that the simple linear regression is a "model" of the variance?

Let us think about this idea. Here we have our data set; I am just going to draw some points here. These points do not exactly fall on a line. The line that we made up, the regression line, is really a model. It is not actual data; it is a theoretical model that we created from the data. By model we mean just like a model airplane or a model house: it is not the real thing. It is like a shining example. But not only is it an example, it is idealized: it is the perfect version of the world. If the world were perfect and there were no error, that is what the data would look like.

When we say the regression models the variance: there is always variance, and where does it come from? When we create a model, we have a little theory of where that variance comes from, and in our model here, this is our theory that explains the variance. Our theory is that there is a relationship between x and y; it is a very small explanation, but it says that this relationship between x and y is where the variation comes from. That is what we mean by the regression line being a model of the variance.

Now, the idea behind r² is: how good is this theory? How good is this model? Does it explain a lot of the total variation, or is it a theory that does not really help us out much? If we have a big r², fairly large, it means our theory is pretty good: it explains, or accounts for, a lot of the total variance. If our r² is very small, it means our theory was not that great; we had a theory, here is a model, but it is not that good, and it only explains a little bit of the variance.

Example 3: why does r² only range from 0 to 1? It might be helpful to start with what r² is: 1 minus the sum of squared error over the total sum of squares, the total variance. Now, can SSE ever be greater than SST? No, it cannot, because SST by definition equals the sum of squares regression plus the sum of squared error, so SSE by definition has to be less than or equal to SST. And none of these quantities can be negative, because they are sums of squares. Since they have to be nonnegative numbers, if you add two nonnegative numbers together to get a sum, that sum has to be greater than or equal to each of them: either SST is greater than each of SSR and SSE, or it equals one of them, because you could have, say, SSR at 0 and SSE at 100%. So can SSE be bigger than SST? No, it cannot be. That means the proportion SSE/SST has to range between 0 and 1: SSE has to be smaller than SST, or they could be equal. There is just no way for that ratio to be bigger than 1. And because the ratio only ranges from 0 to 1, 1 minus something that ranges from 0 to 1 can itself only range from 0 to 1. Because of that, r² can only range from 0 to 1.

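The whole argument in one chain:

    SST = SSR + SSE, with SSR ≥ 0 and SSE ≥ 0
    ⇒ 0 ≤ SSE ≤ SST
    ⇒ 0 ≤ SSE/SST ≤ 1
    ⇒ 0 ≤ r² = 1 − SSE/SST ≤ 1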
Example 4, and this one is going to be a doozy: find r² for this set of data, and examine whether it is also r × r. Let us think about what we are going to do. To find r × r: r is the correlation coefficient, and that is the average product of z-scores, the sum of the products z_x × z_y divided by n − 1. We are going to find that. We also have to find r², and r² is 1 minus the sum of squared error over the sum of squares total. In order to find that, we need ŷ, and in order to find ŷ we need the regression line. To find the regression line, one thing we can do is use the correlation coefficient, once we find it, in order to find b₁; obviously we could also find b₁ in other ways, but this one is a shortcut. And once we find b₁, we can find the intercept: b₀ = ȳ − b₁ × x̄.

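Written out, the plan uses these formulas (all of them from this lesson or earlier ones):

    r  = Σ (z_x · z_y) / (n − 1)    the average product of z-scores
    b₁ = r · (s_y / s_x)            slope from r and the two standard deviations
    b₀ = ȳ − b₁ · x̄                intercept, via the point of averages
    r² = 1 − SSE / SST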
We will have a whole bunch of data to work with, so let us get started. If you go to your examples and then Example 4, here is our data; I am just going to move the graph over to the side, because we are not going to be needing it for a while. We can already see that it is probably going to be a positive correlation, if anything. Let us just start by finding the correlation coefficient, because it is pretty easy to find, and once we have it we can find the other things. To get started on that, it often helps to have the sum, the average, and the standard deviation, so I am going to make these bold and red so we know they are different from the data. I will find the sum for these columns; we do not really need the sum, but I figured it anyway, since it is not too hard. There is the average, and let us get the standard deviation, because we are going to need it for the z-scores anyway.

Now let us find the z-scores for TV watching and also the z-scores for junk food. It makes sense that there would be a positive correlation: the more TV watched per week, perhaps the more junk food calories consumed. Is the correlation strong? I do not know yet. In order to find a z-score, we take the TV-watching data, subtract the mean, and we want that distance not in terms of raw distance but in terms of standard deviations: how many standard deviations away? So we divide the whole thing by the standard deviation. Here I am going to lock down the row, so it always uses the same mean and standard deviation. Once I have that, I can just drag it all the way down, and across as well. We forgot to fill these in for junk food calories, so let us double-click on one of those cells and test it out. Let us see: it gives me the junk food calories minus the average, divided by the standard deviation.

Let us just eyeball this data for a second. We see that roughly half of the z-scores are negative and roughly half are positive, and here too roughly half are negative and roughly half are positive, so we know we did a good job of finding z-scores. In order to find the average product, we need to find the products z(TV) × z(junk food): this times this. Once we have all of those, we can sum them and find the average: the sum divided by the count of data points minus 1. We have found the average product, and that is r, just regular r. That r is .58, so it is not super-duper weak, but it is not really strong either. I am just labeling it so I know where it came from.

Once we have r, we can find b₁, b sub 1. b₁ is r times the ratio between the standard deviation of y and the standard deviation of x, which we have right over here: the standard deviation of y divided by the standard deviation of x. We get b₁ = 10.75, and once we have b₁, we can find b₀. Remember, the point of averages is on the regression line, and so is any pair of x and predicted y, so you could substitute any of those points; you cannot substitute the raw data points, because they do not sit on the line. Using the point of averages, we get b₀ = ȳ − b₁ × x̄, and here the intercept, b sub naught or b₀, is 186.

Now that we have b₁ and b₀, we can find the predicted y. Let us go back up. To help us out, I am going to color these cells so that we know this section was all about finding the correlation coefficient; we found the correlation coefficient. Now what we want to do is find r². In order to find r², let us think about what we need. We need predicted y, the predicted junk food calories, which we can easily find; once we have that, we know we are going to need (y − ŷ)², which is our sum of squared error, and we are also going to need (y − ȳ)², which is our total error.

Let us start with predicted y. Predicted y is always going to be b₀ plus the slope times x, which here is TV watching. We lock down b₀ and the slope b₁, because we do not want those to move. Once we have that, we can find (y − predicted y)², and then finally (y − average y)², with the average locked in place so it does not move. Once we have all three of those pieces, we can do the easy job of copying and pasting them all the way down. Once we do that, we can sum them up, because we are going to need the sum of the squared residuals and the sum of the squared deviations from the mean; to find each sum I can just copy and paste that formula over.

Once we have the sums, I can now find r²: I just put in 1 − SSE/SST. Let us see: I get .3377. The regression line accounts for about 34% of the variation. Now, is this r × r? Is that going to be the same thing? We have r, so we can just square it, and we get exactly 34%.

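Here is the whole Example 4 pipeline as a short Python sketch. The TV-watching and junk-food numbers below are placeholders (the real dataset is in the downloadable examples), so the outputs will not be exactly .58 and .3377, but every step mirrors the spreadsheet.

    import numpy as np

    # Placeholder data -- the real dataset ships with the downloadable examples.
    tv = np.array([2.0, 5.0, 8.0, 10.0, 14.0, 20.0])             # hours of TV per week
    junk = np.array([150.0, 280.0, 310.0, 250.0, 400.0, 380.0])  # junk food calories

    n = len(tv)
    # z-scores (ddof=1 gives the sample standard deviation, as in the lecture)
    z_tv = (tv - tv.mean()) / tv.std(ddof=1)
    z_junk = (junk - junk.mean()) / junk.std(ddof=1)

    # r = average product of z-scores
    r = (z_tv * z_junk).sum() / (n - 1)

    # slope and intercept from r and the point of averages
    b1 = r * junk.std(ddof=1) / tv.std(ddof=1)
    b0 = junk.mean() - b1 * tv.mean()

    # predicted y, then SSE and SST, then r-squared
    junk_hat = b0 + b1 * tv
    sse = ((junk - junk_hat) ** 2).sum()
    sst = ((junk - junk.mean()) ** 2).sum()
    r_squared = 1 - sse / sst

    print(r, r * r, r_squared)  # r * r and r_squared should match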
If we get a question like this, Excel can help.

Thanks for watching.