Regression analysis in R

The t-test and ANOVA are two parametric statistical techniques used to test hypotheses. When the population means of only two groups are to be compared, the t-test is used; when the means of more than two groups are to be compared, ANOVA is used.
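
As a quick, hedged illustration with the sal data frame loaded later in this post (it has a two-level gender factor and a four-level educ factor), the two calls would look like this; output is omitted here.

> # two groups: compare mean Salary between female and male with a t-test
> t.test(Salary ~ gender, data = sal)
> # more than two groups: compare mean Salary across education levels with ANOVA
> summary(aov(Salary ~ educ, data = sal))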

Here are my previous posts about ANOVA

Both the t-test and ANOVA belong to the General Linear Model (GLM) family.

So we can compare groups with ANOVA, but ANOVA allows only categorical IVs. Regression allows both categorical and continuous IVs, and we can use multiple IVs to explain a single DV. The DV should be continuous.

We need to check correlation before getting into regression. If there is no correlation, or only a poor one, let's not think about regression. I have written about correlation in the following posts.

While correlation shows us the degree and direction of a relationship (+ve or -ve), regression takes us a step further towards causation: we estimate the DV based on changes to the IV.
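
As a rough sketch of that check, using the numeric Pre..Exp and Salary columns of the sal data set loaded below (output omitted):

> cor(sal$Pre..Exp, sal$Salary)        # Pearson correlation coefficient
> cor.test(sal$Pre..Exp, sal$Salary)   # the same, with a significance test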

Let’s take the same salary data used in my previous examples.

> setwd("d:/gandhari/videos/Advanced Business Analytics/")
> sal <-read.csv("sal.csv")
> head(sal)
  id gender      educ Designation Level Salary Loan.deduction Last.drawn.salary Pre..Exp Ratings.by.interviewer
1  1 female        UG Jr Engineer   JLM  10000        5901.74           4098.26        3                      4
2  2   male DOCTORATE    Chairman   TLM 100000        4247.31          95752.69       20                      4
3  3   male   DIPLOMA       Jr HR   JLM   6000        3895.76           2104.24        1                      3
4  4   male        PG    Engineer   MLM  15000        9108.36           5891.64        7                      2
5  5 female        PG Sr Engineer   MLM  25000        4269.39          20730.61       12                      4
6  6   male   DIPLOMA Jr Engineer   JLM   6000        4137.31           1862.69        1                      1
> dim(sal)
[1] 50 10
> str(sal)
'data.frame':	50 obs. of  10 variables:
 $ id                    : int  1 2 3 4 5 6 7 8 9 10 ...
 $ gender                : Factor w/ 2 levels "female","male": 1 2 2 2 1 2 2 1 1 1 ...
 $ educ                  : Factor w/ 4 levels "DIPLOMA","DOCTORATE",..: 4 2 1 3 3 1 1 3 3 3 ...
 $ Designation           : Factor w/ 6 levels "Chairman","Engineer",..: 4 1 5 2 6 4 3 2 2 2 ...
 $ Level                 : Factor w/ 3 levels "JLM","MLM","TLM": 1 3 1 2 2 1 1 2 2 2 ...
 $ Salary                : int  10000 100000 6000 15000 25000 6000 8000 13000 14000 16000 ...
 $ Loan.deduction        : num  5902 4247 3896 9108 4269 ...
 $ Last.drawn.salary     : num  4098 95753 2104 5892 20731 ...
 $ Pre..Exp              : int  3 20 1 7 12 1 2 7 7 8 ...
 $ Ratings.by.interviewer: int  4 4 3 2 4 1 4 3 2 4 ...
 > tail (sal, n=10)
   id gender      educ  Designation Level Salary Loan.deduction Last.drawn.salary Pre..Exp Ratings.by.interviewer
41 41   male        PG     Engineer   MLM  15000        1741.33          13258.67        7                      2
42 42 female        PG  Sr Engineer   MLM  25000        2934.33          22065.67       12                      4
43 43   male   DIPLOMA  Jr Engineer   JLM   6000        2803.03           3196.97        1                      1
44 44   male   DIPLOMA Jr Associate   JLM   8000        5480.77           2519.23        2                      4
45 45 female        PG     Engineer   MLM  13000        1317.26          11682.74        7                      3
46 46 female        PG     Engineer   MLM  16000        9927.11           6072.89        8                      4
47 47 female        UG  Jr Engineer   JLM  10000        2507.66           7492.34        3                      4
48 48   male DOCTORATE     Chairman   TLM 100000        9684.88          90315.12       20                      4
49 49   male   DIPLOMA        Jr HR   JLM   6000        2717.26           3282.74        1                      3
50 50   male        PG     Engineer   MLM  15000        4512.12          10487.88        7                      2

Hope the preview of the data set I’ve given above makes sense.
To predict, we should have two types of data – training data and testing data.

> salarytrain <-sal[1:35,]
> salarytest <- sal[36:50,]
> dim (salarytrain)
[1] 35 10
> dim (salarytest)
[1] 15 10
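
Here I simply took the first 35 rows for training and the last 15 for testing. A more common approach is a random split; a minimal sketch with base R only (the rest of this post sticks with the sequential split above):

> set.seed(123)                     # make the random split reproducible
> idx <- sample(nrow(sal), 35)      # pick 35 random row numbers for training
> salarytrain <- sal[idx, ]
> salarytest  <- sal[-idx, ]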

Let’s run the regression now.

> salreg <- lm(Salary~educ, data=salarytrain)
> summary(salreg)

Call:
lm(formula = Salary ~ educ, data = salarytrain)

Residuals:
    Min      1Q  Median      3Q     Max
-4333.3 -2333.3  -727.3   636.4  7666.7 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)       6727       1128   5.962 1.37e-06 ***
educDOCTORATE    93273       2438  38.264  < 2e-16 ***
educPG           10606       1432   7.405 2.44e-08 ***
educUG            3273       2438   1.343    0.189
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3742 on 31 degrees of freedom
Multiple R-squared:  0.9803,	Adjusted R-squared:  0.9783
F-statistic: 513.1 on 3 and 31 DF,  p-value: < 2.2e-16

The general form of the prediction equation is given below.

  • Y = a + b1 * X1 + e
  • Y is the DV
  • X1 is the IV
  • a is the intercept, baseline or constant
  • b1 is the coefficient (slope) of X1
  • e is the error term

Let's substitute the values. Since educ is a factor, R creates one 0/1 dummy variable per education level, with DIPLOMA as the baseline:
Predicted Salary = 6727 + 93273 × educDOCTORATE + 10606 × educPG + 3273 × educUG + error

The R-squared value is 0.9803, i.e. 98.03%. This indicates a very strong relationship: about 98% of the variance in salary is explained by education, and the remaining 2% is unexplained variance.

The intercept 6727 is the baseline: the expected salary for the reference education level (DIPLOMA).

Each education coefficient is the difference from that baseline: a UG holder is predicted to earn 6727 + 3273, a PG holder 6727 + 10606, and a DOCTORATE holder 6727 + 93273.
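
To see this numerically, here is a small sketch that reproduces a prediction by hand from the fitted coefficients and checks it against predict() (output omitted):

> coef(salreg)                                 # intercept plus one dummy coefficient per education level
> # predicted salary for a PG holder = DIPLOMA baseline + PG difference
> unname(coef(salreg)["(Intercept)"] + coef(salreg)["educPG"])
> # the same figure straight from the model
> predict(salreg, data.frame(educ = "PG"))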

We have considered only education in this example. A plus point of regression is that we can use more than one IV. In this case, I want to consider years of experience in addition to education. Then my command goes as below.

> salexp <- lm(Salary~educ + Pre..Exp, data=salarytrain)
> summary(salexp)

Call:
lm(formula = Salary ~ educ + Pre..Exp, data = salarytrain)

Residuals:
    Min      1Q  Median      3Q     Max
-886.44 -102.30   34.25   78.50 1113.56 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)     3705.6      162.5   22.80  < 2e-16 ***
educDOCTORATE  51977.1     1019.4   50.99  < 2e-16 ***
educPG         -5330.2      417.5  -12.77 1.17e-13 ***
educUG          -353.2      327.2   -1.08    0.289
Pre..Exp        2215.9       52.0   42.61  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 485 on 30 degrees of freedom
Multiple R-squared:  0.9997,	Adjusted R-squared:  0.9996
F-statistic: 2.336e+04 on 4 and 30 DF,  p-value: < 2.2e-16

This time, education and experience together explain 99.97% of the variance in salary. If you look at the estimates, UG on its own no longer makes a significant difference, but previous experience and DOCTORATE contribute strongly. If the R-squared score is low, the relationship is weak; do not use the model for prediction in that case, or search for the right IVs.
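
To check more formally whether adding Pre..Exp improves on the education-only model, rather than comparing R-squared values by eye, the two nested models can be compared with an F-test; a short sketch (output omitted):

> anova(salreg, salexp)      # F-test: does adding Pre..Exp significantly reduce residual error?
> AIC(salreg, salexp)        # lower AIC indicates the better fit/complexity trade-off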

Let's predict the salaries for the test set now, using the education-only model salreg.

> salpred <- predict(salreg, salarytest)
> salpred
        36         37         38         39         40         41         42         43         44         45         46
 17333.333  17333.333  10000.000 100000.000   6727.273  17333.333  17333.333   6727.273   6727.273  17333.333  17333.333
        47         48         49         50
 10000.000 100000.000   6727.273  17333.333

So this is the prediction of salaries from rows 36 to 50.

Let's use cbind() for better understanding. What you see as Salary is the actual salary; what you see under salpred is the predicted salary. In some cases the prediction is close, in others it is far off. The difference between the actual salary (actual Y) and the predicted salary (predicted Y) is called the residual. The smaller the residuals, the better the prediction.

> cbind(salarytest, salpred)
   id gender      educ  Designation Level Salary Loan.deduction Last.drawn.salary Pre..Exp Ratings.by.interviewer    salpred
36 36 female        PG     Engineer   MLM  14000         716.48          13283.52        7                      2  17333.333
37 37 female        PG     Engineer   MLM  16000        6595.95           9404.05        8                      4  17333.333
38 38 female        UG  Jr Engineer   JLM  10000        5433.07           4566.93        3                      4  10000.000
39 39   male DOCTORATE     Chairman   TLM 100000        9028.68          90971.32       20                      4 100000.000
40 40   male   DIPLOMA        Jr HR   JLM   6000         794.66           5205.34        1                      3   6727.273
41 41   male        PG     Engineer   MLM  15000        1741.33          13258.67        7                      2  17333.333
42 42 female        PG  Sr Engineer   MLM  25000        2934.33          22065.67       12                      4  17333.333
43 43   male   DIPLOMA  Jr Engineer   JLM   6000        2803.03           3196.97        1                      1   6727.273
44 44   male   DIPLOMA Jr Associate   JLM   8000        5480.77           2519.23        2                      4   6727.273
45 45 female        PG     Engineer   MLM  13000        1317.26          11682.74        7                      3  17333.333
46 46 female        PG     Engineer   MLM  16000        9927.11           6072.89        8                      4  17333.333
47 47 female        UG  Jr Engineer   JLM  10000        2507.66           7492.34        3                      4  10000.000
48 48   male DOCTORATE     Chairman   TLM 100000        9684.88          90315.12       20                      4 100000.000
49 49   male   DIPLOMA        Jr HR   JLM   6000        2717.26           3282.74        1                      3   6727.273
50 50   male        PG     Engineer   MLM  15000        4512.12          10487.88        7                      2  17333.333
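
To put a single number on how close these predictions are, we can compute the residuals on the test rows and an overall error such as RMSE; a minimal sketch using the objects created above (output omitted):

> test.resid <- salarytest$Salary - salpred    # actual salary minus predicted salary
> test.resid                                   # one residual per test row
> sqrt(mean(test.resid^2))                     # root mean squared error on the test set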

See you in another interesting post.

Multiple ANOVA, Post hoc test using R

I have written about how to run the ANOVA test in my previous post, Analysis of Variance ANOVA using R. There we analyzed the salary difference between different levels of education.

For ease of (my!) understanding, I would take the same data set in this post as well. So here is the same data set.

> sal
   id gender      educ  Designation Level Salary Last.drawn.salary Pre..Exp Ratings.by.interviewer
1   1 female        UG  Jr Engineer   JLM  10000              1000        3                      4
2   2   male DOCTORATE     Chairman   TLM 100000            100000       20                      4
3   3   male   DIPLOMA        Jr HR   JLM   6000              6000        1                      3
4   4   male        PG     Engineer   MLM  15000             15000        7                      2
5   5 female        PG  Sr Engineer   MLM  25000             25000       12                      4
6   6   male   DIPLOMA  Jr Engineer   JLM   6000              8000        1                      1
7   7   male   DIPLOMA Jr Associate   JLM   8000              8000        2                      4
8   8 female        PG     Engineer   MLM  13000             13000        7                      3
9   9 female        PG     Engineer   MLM  14000             14000        7                      2
10 10 female        PG     Engineer   MLM  16000             16000        8                      4
11 11 female        UG  Jr Engineer   JLM  10000              1000        3                      4
12 12   male DOCTORATE     Chairman   TLM 100000            100000       20                      4
13 13   male   DIPLOMA        Jr HR   JLM   6000              6000        1                      3
14 14   male        PG     Engineer   MLM  15000             15000        7                      2
15 15 female        PG  Sr Engineer   MLM  25000             25000       12                      4
16 16   male   DIPLOMA  Jr Engineer   JLM   6000              8000        1                      1
17 17   male   DIPLOMA Jr Associate   JLM   8000              8000        2                      4
18 18 female        PG     Engineer   MLM  13000             13000        7                      3
19 19 female        PG     Engineer   MLM  14000             14000        7                      2
20 20 female        PG     Engineer   MLM  16000             16000        8                      4
21 21 female        PG  Sr Engineer   MLM  25000             25000       12                      4
22 22   male   DIPLOMA  Jr Engineer   JLM   6000              8000        1                      1
23 23   male   DIPLOMA Jr Associate   JLM   8000              8000        2                      4
24 24 female        PG     Engineer   MLM  13000             13000        7                      3
25 25 female        PG     Engineer   MLM  14000             14000        7                      2
26 26 female        PG     Engineer   MLM  16000             16000        8                      4
27 27 female        UG  Jr Engineer   JLM  10000              1000        3                      4
28 28   male DOCTORATE     Chairman   TLM 100000            100000       20                      4
29 29   male   DIPLOMA        Jr HR   JLM   6000              6000        1                      3
30 30   male        PG     Engineer   MLM  15000             15000        7                      2
31 31 female        PG  Sr Engineer   MLM  25000             25000       12                      4
32 32 female        PG  Sr Engineer   MLM  25000             25000       12                      4
33 33   male   DIPLOMA  Jr Engineer   JLM   6000              8000        1                      1
34 34   male   DIPLOMA Jr Associate   JLM   8000              8000        2                      4
35 35 female        PG     Engineer   MLM  13000             13000        7                      3
36 36 female        PG     Engineer   MLM  14000             14000        7                      2
37 37 female        PG     Engineer   MLM  16000             16000        8                      4
38 38 female        UG  Jr Engineer   JLM  10000              1000        3                      4
39 39   male DOCTORATE     Chairman   TLM 100000            100000       20                      4
40 40   male   DIPLOMA        Jr HR   JLM   6000              6000        1                      3
41 41   male        PG     Engineer   MLM  15000             15000        7                      2
42 42 female        PG  Sr Engineer   MLM  25000             25000       12                      4
43 43   male   DIPLOMA  Jr Engineer   JLM   6000              8000        1                      1
44 44   male   DIPLOMA Jr Associate   JLM   8000              8000        2                      4
45 45 female        PG     Engineer   MLM  13000             13000        7                      3
46 46 female        PG     Engineer   MLM  16000             16000        8                      4
47 47 female        UG  Jr Engineer   JLM  10000              1000        3                      4
48 48   male DOCTORATE     Chairman   TLM 100000            100000       20                      4
49 49   male   DIPLOMA        Jr HR   JLM   6000              6000        1                      3
50 50   male        PG     Engineer   MLM  15000             15000        7                      2

We have already executed the ANOVA test. Following is the output.

> aov1 <-aov(Salary~educ, data=sal)
> aov1
Call:
   aov(formula = Salary ~ educ, data = sal)

Terms:
                       educ   Residuals
Sum of Squares  35270186667   538293333
Deg. of Freedom           3          46

Residual standard error: 3420.823
Estimated effects may be unbalanced
> summary(aov1)
            Df    Sum Sq   Mean Sq F value Pr(>F)    
educ         3 3.527e+10 1.176e+10    1005 <2e-16 ***
Residuals   46 5.383e+08 1.170e+07                   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

 

Variance between groups

What we get above is the overall significance of the difference in the DV (salary) across the IV (education) groups.

To compare the differences between individual groups, we use a post hoc test. We shall use TukeyHSD() for this. We go for this approach only if the ANOVA is significant; if the ANOVA is not significant, there is no need for a post hoc test.

We'll see how to get the variances across the groups in this post.

> tukey <- TukeyHSD(aov1)
> tukey
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = Salary ~ educ, data = sal)

$educ
                        diff        lwr        upr     p adj
DOCTORATE-DIPLOMA  93333.333  88624.720  98041.947 0.0000000
PG-DIPLOMA         10373.333   7395.345  13351.322 0.0000000
UG-DIPLOMA          3333.333  -1375.280   8041.947 0.2477298
PG-DOCTORATE      -82960.000 -87426.983 -78493.017 0.0000000
UG-DOCTORATE      -90000.000 -95766.850 -84233.150 0.0000000
UG-PG              -7040.000 -11506.983  -2573.017 0.0006777
  • diff – mean difference between the two education levels
  • lwr – lower bound of the 95% confidence interval for that difference
  • upr – upper bound of the 95% confidence interval for that difference

If lwr and upr have the same sign (both positive or both negative), the confidence interval does not contain 0, which denotes a significant difference.

In rows like DOCTORATE-DIPLOMA, the higher degree comes first, so the difference is +ve; in rows like UG-DOCTORATE, it is -ve. If you just want to see the size of the difference, the + or – sign does not matter.

Let’s plot this in a graph.

> plot(tukey)

01 TukeyHSD Post hoc test

0 is the midpoint, so any interval near 0 does not represent a significant difference.

From the top, the first interval is the DOCTORATE-DIPLOMA comparison; you can see a large positive difference. The UG-DOCTORATE interval shows the second-largest difference, but it is negative. Anything near 0, like UG-DIPLOMA, does not have a significant difference.

ANOVA between multiple variables

We have now received a new data set from the company, which has a new column, Loan.deduction. Last.drawn.salary changes with respect to the employee's loan deductions.

> sal <- read.csv("sal.csv")
> head(sal)
  id gender      educ Designation Level Salary Loan.deduction Last.drawn.salary Pre..Exp Ratings.by.interviewer
1  1 female        UG Jr Engineer   JLM  10000        5901.74           4098.26        3                      4
2  2   male DOCTORATE    Chairman   TLM 100000        4247.31          95752.69       20                      4
3  3   male   DIPLOMA       Jr HR   JLM   6000        3895.76           2104.24        1                      3
4  4   male        PG    Engineer   MLM  15000        9108.36           5891.64        7                      2
5  5 female        PG Sr Engineer   MLM  25000        4269.39          20730.61       12                      4
6  6   male   DIPLOMA Jr Engineer   JLM   6000        4137.31           1862.69        1                      1

The company wants to see the differences among Salary (column 6), Loan.deduction (column 7) and Last.drawn.salary (column 8). We combine apply() and aov() as given below.

> aovset <- apply(sal[,6:8], 2, function(x)aov(x~educ, data = sal))
  • sal[,6:8] takes all rows of columns 6, 7 and 8
  • 2 tells apply() to work column-wise
  • aov is the function applied to each column
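
Since aovset is a list holding one fitted aov object per column, all three summaries can also be printed in one go; a one-line sketch (output omitted):

> lapply(aovset, summary)     # summary of every fitted ANOVA at once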

Following is the variance between education and Last.drawn.salary.

> summary(aovset$Last.drawn.salary)
            Df    Sum Sq   Mean Sq F value Pr(>F)    
educ         3 3.342e+10 1.114e+10   674.2 <2e-16 ***
Residuals   46 7.602e+08 1.653e+07                   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The F value is 674 with a p-value below 0.05, which means the difference is significant. The following result is more interesting.

> summary(aovset$Loan.deduction)
            Df    Sum Sq Mean Sq F value Pr(>F)
educ         3  25577616 8525872    1.14  0.343
Residuals   46 343898395 7476052

The F value for Loan.deduction is only 1.14, with a p-value of 0.343 (> 0.05). So there is no significant difference in loan deductions between the different education levels.

See you in another interesting post.

Calculating Anova with LibreOffice Calc

I have written about the manual way of calculating one-way ANOVA in One way analysis of variance. We shall use LibreOffice Calc (or MS Excel) to calculate it quickly. I'll explain with LibreOffice below.

Choose your data set.

01 data set for anova

1. Define the NULL (H0) and alternative (HA) hypothesis.

H0: there is no difference between the three loan-scheme conditions.

μ ₹10,000.00 = μ ₹15,000.00 = μ ₹20,000.00

H1: There is a difference. Not all μs are equal.

2. Define the alpha value for type I error.

α = 5% = 0.05.

Select Data>Statistics>Anova

02 anova menu in libreoffice

3. Choose your result cell

03

04 anova result cell in libreoffice

05 anova result cell in libreoffice

4. And Here is your output

06 anova result in libreoffice

Groups                Count   Sum    Mean     Variance
Column 1              7       57     8.1429   0.4762
Column 2              7       47     6.7143   0.5714
Column 3              7       21     3        0.6667

Source of Variation   SS        df   MS        F         P-value      F critical
Between Groups        98.6667   2    49.3333   86.3333   5.9563E-10   3.5546
Within Groups         10.2857   18   0.5714
Total                 Err:508   20

F = 86.33 > 3.5546 (the critical F value), hence the null hypothesis is rejected and the alternate hypothesis is accepted.

One way analysis of variance

I'll be writing about ANOVA in this post, following my previous post on Skew & kurtosis. ANOVA (analysis of variance) is a technique for statistically comparing the means of two or more populations at the same time.

Though there are several other types of ANOVA, I'd discuss one-way ANOVA in this post.

What is Hypothesis?

A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in your investigation.

Types of Hypothesis

Following are the major types of statistical hypotheses.

H0: Null hypothesis: the hypothesis that the sample observations result purely from chance.

H1: Alternate hypothesis: the hypothesis that the sample observations are influenced by some non-random cause.

When we collect air quality data for Mumbai in 2016, a null hypothesis may be: there is no change in quality between the second and third quarters of 2016. An alternate hypothesis H1 may be: the quality is poorer in the third quarter of 2016.

What is a Hypothesis testing?

Hypothesis testing is a process for supporting or rejecting a claim about the research question. By allowing an error of 5% or 1% (the α, alpha, value), the researcher can conclude that the result is likely real if chance alone could produce the same result only 5% or 1% of the time or less.

Let’s take a research question..

Is the mean salary of an IT family in Bangalore equal to ₹ 40000?

We need to write this question in terms of the null (H0) and alternate (HA) hypotheses.

The null hypothesis is, μ = ₹ 40000.

H0: μ = ₹ 40000

The alternative hypothesis is μ ≠ ₹ 40000.

HA: μ ≠ ₹ 40000

HA tells us that the mean salary may be either less than 40000 or greater than 40000. We call this a two-tailed test.
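
As a side note, this two-tailed test could be run in R with t.test(); a hypothetical sketch, assuming salaries is a numeric vector of sampled IT salaries in Bangalore (the vector is not part of this post's data):

> # H0: mu = 40000; HA: mu != 40000 (t.test is two-tailed by default)
> t.test(salaries, mu = 40000)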

Type I, Type II errors

A Type I error is rejecting a null hypothesis that is actually true. We allow a certain probability of making this error, represented by α (alpha); generally it is 5% (0.05) or 1% (0.01).

A Type II error is failing to reject a null hypothesis that is actually false. It is generally good not to ignore Type II errors either.

Calculating ANOVA – manual approach

Let's take a simple data set. A bank gives different loan amounts to entrepreneurs in three different investment slabs: ₹10,000, ₹15,000 and ₹20,000 respectively.

Following are the returns obtained by each slab.

₹10,000.00 ₹15,000.00 ₹20,000.00
9 7 4
8 6 3
7 6 2
8 7 3
8 8 4
9 7 3
8 6 2

1. Define the NULL (H0) and alternative (HA) hypothesis.

H0: there is no difference between the three loan-scheme conditions.

μ ₹10,000.00 = μ ₹15,000.00 = μ ₹20,000.00

H1: There is a difference. Not all μs are equal.

2. Define the alpha value for type I error.

α = 5% = 0.05.

3. Determine the degree of freedom (DF)

Number of samples N = 21

Number of groups/columns/levels/conditions a = 3

Number of rows n = 7

Degrees of freedom between groups (columns): dfbetween = a – 1 = 3 – 1 = 2.

Degrees of freedom within groups: dfwithin = N – a = 21 – 3 = 18.

Degrees of freedom for all the data (total): dftotal = N – 1 = 21 – 1 = 20.

4. Decision rules

To look up the critical value, two df values need to be used.

Look at the F table for the critical value using v1, v2, ie., (2, 18) (alpha = 0.05)

Next, look up the statistical table with 2 in the v1 column and 18 in the v2 row. We find 3.555. —-(0)

So the rule is, if F (calculated value) is greater than 3.555 (F table value), reject the null hypothesis or else, accept the null hypothesis.
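
If R is handy, the same critical value can be obtained with qf() instead of the printed F table; a one-line sketch:

> qf(0.95, df1 = 2, df2 = 18)   # upper 5% point of F(2, 18), roughly 3.555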

5. Calculate the statistics

₹10,000.00            ₹15,000.00            ₹20,000.00
  x      x²             i      i²             j      j²
  9      81             7      49             4      16
  8      64             6      36             3       9
  7      49             6      36             2       4
  8      64             7      49             3       9
  8      64             8      64             4      16
  9      81             7      49             3       9
  8      64             6      36             2       4
 T1 = 57               T2 = 47               T3 = 21
 Σx² = 467             Σi² = 319             Σj² = 67

Sum of all samples T = ΣTi = 57 + 47 + 21 = 125 ————- (1)

So 125 is the total sum of all samples.

ΣΣxij²= 467 + 319 + 67 = 853 ———– (2)

Using (1) and (2),

Q = ΣΣxij²-T²/N

Q = 853 – 125²/21

Q = 853 – 15625/21

Q = 853 – 744.0476

Q = 108.9524 ———————-(3)

Q1 = Σ(Ti²/ni)-T²/N

Q1 = ((57² + 47² + 21²)/7) – (125²/21)

Q1 = 98.6667 —————–(4)

Q2 = Q – Q1

Q2 = (3) – (4)

Q2 = 10.2857 —————-(5)

Anova Table

Source of variation (SV)   Sum of squares (SS)   Degrees of freedom (df)   Mean Square (MS)   Variance Ratio (F)
Between classes            Q1                    h-1                       Q1/(h-1)           MSbetween/MSwithin
Within classes             Q2                    N-h                       Q2/(N-h)
Total                      Q                     N-1

Lets substitute the values in the above table.

We already calculated the values for Q1, Q2 and Q above.

h is the number of columns, which is 3.

h = 3 —————(6)

So,

h-1 = 2 ———–(7)

N-h = 21-3 = 18 ———–(8)

N – 1 = 20 —————-(9)

MSbetween = Q1/(h-1) = 98.67/2 = 49.335 ————(10)

MSwithin = Q2/(N-h) = 10.29/18 = 0.57167 ————-(11)

Finally our F value would be,

F = MSbetween/MSwithin

F = 49.335/0.57167 = 86.2998 —————–(12)

Source of variation (SV)   Sum of squares (SS)   Degrees of freedom (df)   Mean Square (MS)   Variance Ratio (F)
Between classes            98.67                 2                         49.335             86.2998
Within classes             10.29                 18                        0.57167
Total                      108.95                20

Conclusion

Our calculated F value (variance ratio) is 86.2998

The critical value from the F table is 3.55, as per (0).

Hence calculated F value is greater than statistical value.

F(2, 18) = 86.2998 > 3.55. Hence our null hypothesis is rejected and the alternate hypothesis is accepted. There is a difference between the loan schemes; not all μs are equal.
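
As a cross-check of the manual working, the same ANOVA can be run in R by rebuilding the three return columns from the table above; a short sketch (output omitted, but F should come out around 86.3 on 2 and 18 degrees of freedom):

> returns <- c(9, 8, 7, 8, 8, 9, 8,   # ₹10,000 slab
+              7, 6, 6, 7, 8, 7, 6,   # ₹15,000 slab
+              4, 3, 2, 3, 4, 3, 2)   # ₹20,000 slab
> slab <- factor(rep(c("10000", "15000", "20000"), each = 7))
> summary(aov(returns ~ slab))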