Inferential Statistics(09)-R[05]-Multiple Regression

 


1. The Regression Coefficients–lm(response ~ predictor1 + predictor2)

Now that we know our regression model, we need to find parameter estimates for the intercept, the slope of smiling, and the slope of money. We’re going to use the lm() function to find these estimates.

To find the parameter estimates for just money and liking we used lm(liking ~ money). We can add another predictor simply by using +. For instance: lm(response ~ predictor1 + predictor2).

  • Instructions

    • In your script, use the lm() function to find the regression coefficients for our model examining how money and smiling predict how much someone likes us.
    • Make sure that smile is the first predictor, followed by money.
    • Feel free to experiment with this in your console before submitting your script!
  • Answers

    # Vector containing the amount of money you gave participants (predictor)
    money  <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Find the regression coefficients for liking, predicted by smile and money
    lm(liking ~ smile + money)
    
    # Vector containing the amount of money you gave participants (predictor)
    money  <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
    # Find the regression coefficients for liking, predicted by smile and money
    lm(liking ~ smile + money)
      
    Call:
    lm(formula = liking ~ smile + money)
      
    Coefficients:
    (Intercept)        smile        money  
         0.6162       1.4895       0.8008  
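
If you just want the coefficients as a vector for further calculation, coef() extracts them directly from the fitted model. A quick sketch using base R:

    # Named vector of the fitted coefficients
    coef(lm(liking ~ smile + money))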
    

2. Multiple R squared I–summary(lm())

The R squared represents the proportional improvement in prediction from using the regression model rather than the mean. In our case, it measures how much better we can predict how much someone will like us from how much money we give them and how much we smile at them, compared to just using the average amount that people like us.

R gives us this automatically when we use summary() on the model we found using lm(). summary() tells us a lot about the model. To extract the R squared in particular, we can use $. For example, summary(*regression model*)$r.squared returns the R squared value. Let’s try this!
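As an aside, the summary object also stores the adjusted R squared, which penalizes the R squared for each extra predictor. A quick sketch, assuming mod1 holds the summary object you create below:

    # Adjusted R squared, as also shown in the printed summary
    mod1$adj.r.squared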

  • Instructions

    • In your console, use the summary() function on our lm() model, and assign the resulting object to variable mod1.
    • In your console, use the $ to print the value of the R-squared from mod1.
  • Answers

    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Assign the summary of lm(liking ~ smile + money) to 'mod1'
    mod1 <- summary(lm(liking ~ smile + money))
    mod1
    # Print the R-squared of 'mod1'
    mod1$r.squared
    
    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
    # Assign the summary of lm(liking ~ smile + money) to 'mod1'
    mod1 <- summary(lm(liking ~ smile + money))
    mod1
      
    Call:
    lm(formula = liking ~ smile + money)
      
    Residuals:
        Min      1Q  Median      3Q     Max 
    -2.3173 -0.5423 -0.1371  0.2068  3.6332 
      
    Coefficients:
                Estimate Std. Error t value Pr(>|t|)   
    (Intercept)   0.6162     1.5449   0.399  0.70189   
    smile         1.4895     1.7123   0.870  0.41319   
    money         0.8008     0.1894   4.229  0.00389 **
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
      
    Residual standard error: 1.704 on 7 degrees of freedom
    Multiple R-squared:  0.7196,	Adjusted R-squared:  0.6395 
    F-statistic: 8.984 on 2 and 7 DF,  p-value: 0.01167
    # Print the R-squared of 'mod1'
    mod1$r.squared
    [1] 0.7196303
    

3. Multiple R squared II–lm(y ~ X1 + X2)$residuals

We know that the multiple R squared for our model is 0.7196303 - this means that a model including money and smiling reduces the squared prediction error by about 72% compared to just using the mean to predict how much someone will like us.

Let’s break down how R found this value to calculate it manually ourselves.

To find this we need to look at the difference between the mean amount that our sample liked us and the observed amount they liked us - if we square and sum these differences we have the total sum of squares. We can also look at the difference between the amount our model predicts the sample liked us and the observed amount they liked us - if we square and sum these differences we have the residual sum of squares.

R gives us the residuals from our model automatically. We can extract them from lm() using $. For example, lm(y ~ X1 + X2)$residuals. Easy!

The total sum of squares is pretty straightforward too. We find the mean of liking using the function mean() - for example, mean(y) - and subtract every value of y from it: mean(y) - y.

The values produced by R come out as a vector of numbers. To square these values we can use ^2, and to sum we put the vector inside the function sum(). For example, sum((mean(y) - y)^2). If we find these values and assign them to objects we can do anything we like with them!
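Putting the pieces together, the quantity we are about to compute by hand is:

  • \[R^2 = \frac{SS_{total} - SS_{residual}}{SS_{total}} = 1 - \frac{SS_{residual}}{SS_{total}}\]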

  • Instructions

    • Obtain the residual sum of squares and assign to object ssr. Remember to use brackets to square and sum!
    • Obtain the total sum of squares and assign to object sst. Again, remember to use brackets to square and sum!
    • Use sst and ssr to calculate the multiple R squared.
  • Answers

    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Sum and square the residuals from lm(liking ~ smile + money), assign to object 'ssr'
    ssr <- sum(lm(liking ~ smile + money)$residuals^2)
    ssr
    # Sum and square the value of mean liking - observed liking, assign to object 'sst'
    sst <- sum((mean(liking)-liking)^2)
    sst
    # Find the R squared as (sst - ssr)/sst
    (sst-ssr)/sst
    
    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
    # Sum and square the residuals from lm(liking ~ smile + money), assign to object 'ssr'
    ssr <- sum(lm(liking ~ smile + money)$residuals^2)
    ssr
    [1] 20.32007
    # Sum and square the value of mean liking - observed liking, assign to object 'sst'
    sst <- sum((mean(liking)-liking)^2)
    sst
    [1] 72.476
    # Find the R squared as (sst - ssr)/sst
    (sst-ssr)/sst
    [1] 0.7196303
    

4. Overall Tests–F statistic (rms/mse)

The F test statistic represents the ‘explained’ variance divided by the ‘error’ variance. In our experiment on how much giving people money and smiling at them predicts them liking us, R tells us that the F-statistic is 8.984.

Let’s see how R found this!

The F-statistic is calculated by dividing the regression mean square by the mean square error.

The regression mean square is calculated by:

  • \[\frac{SS_{total} - SS_{residual}}{k-1}\]
    • where k is the number of predictors plus one for the intercept.

The mean square error is calculated by

  • \[\frac{SS_{residual}}{n-k}\]
    • where n is the number of observations and k is the number of predictors plus one for the intercept.

Luckily for us we already found the total sum of squares (sst) and the residual sum of squares (ssr) when we found the R squared. Let’s rearrange these to find the F-statistic.
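Once we have our answer, we can cross-check it against what R stores for us: summary() keeps the F statistic with its degrees of freedom, and pf() turns those into the p-value R reported. A minimal sketch using only base R:

    # F statistic with its numerator and denominator degrees of freedom
    summary(lm(liking ~ smile + money))$fstatistic
    # Upper-tail p-value for F = 8.984 on 2 and 7 degrees of freedom
    pf(8.984, df1 = 2, df2 = 7, lower.tail = FALSE)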

  • Instructions

    • Assign the value of the regression mean square to object rms.
    • Assign the value of the mean square error to object mse.
  • Answers

    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Sum and square the residuals from lm(liking ~ smile + money), assign to object 'ssr'
    ssr <- sum((lm(liking~ smile + money)$residual) ^ 2)
      
    # Sum and square the value of mean liking - observed liking, assign to object 'sst'
    sst <- sum((mean(liking) - liking) ^ 2)
      
    # Find the regression mean square (k = 3: two predictors plus the intercept)
    rms <- (sst - ssr)/(3 - 1)
        
    # Find the mean square error (n = 10 observations)
    mse <- ssr/(10 - 3)
        
    # Use rms and mse to find the F statistic
    rms/mse
    
    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
    # Sum and square the residuals from lm(liking ~ smile + money), assign to object 'ssr'
    ssr <- sum((lm(liking~ smile + money)$residual) ^ 2)
    # Sum and square the value of mean liking - observed liking, assign to object 'sst'
    sst <- sum((mean(liking) - liking) ^ 2)
    # Find the regression mean square (k = 3: two predictors plus the intercept)
    rms <- (sst - ssr)/(3 - 1)
        
    # Find the mean square error (n = 10 observations)
    mse <- ssr/(10 - 3)
        
    # Use rms and mse to find the F statistic
    rms/mse
    [1] 8.983518
    

5. Individual Tests of Predictors–confint()

We saw that R gave us the parameter estimates when we entered summary(lm(liking ~ smile + money)).

We can calculate the confidence interval as:

  • \[\text{coefficient value} \pm \text{margin of error}\]
    • where

      • \[\text{margin of error} = t \times SE\]

You’ve done this manually before. Let’s try a faster way - using the function confint(). This function takes your fitted model as its first argument. Don’t be scared by this, it just means it wants the output of your model lm(liking ~ smile + money). It then produces a two-sided confidence interval, the size of which you specify using the argument level (e.g. level = 0.95 is 95%). Let’s try it out!
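For comparison, here is a minimal sketch of the manual calculation, assuming mod holds the fitted model as in the answer below. The t value comes from qt() on the model's 7 residual degrees of freedom:

    # Coefficient table: estimates in column 1, standard errors in column 2
    est <- summary(mod)$coefficients
    # 95% CI: estimate +/- t * SE
    est[, 1] - qt(0.975, df = 7) * est[, 2]  # lower bounds
    est[, 1] + qt(0.975, df = 7) * est[, 2]  # upper bounds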

  • Instructions

    • In your script, the fitted model is already assigned to mod.
    • In your script, find the confidence interval of mod by placing it between brackets inside confint().
    • The default setting of confint() is to produce the 95% confidence interval, however we are going to find the 90% confidence interval by adding the argument level = 0.9 inside the brackets.
  • Answers

    # Fitted model
    mod <- lm(liking ~ smile + money)
    confint(mod)
                     2.5 %   97.5 %
    (Intercept) -3.0370146 4.269436
    smile       -2.5594405 5.538505
    money        0.3529717 1.248529
      
    # Find confidence interval of fitted model
    confint(mod, level = 0.9)
                       5 %     95 %
    (Intercept) -2.3108177 3.543239
    smile       -1.7545761 4.733641
    money        0.4419823 1.159519
      
    confint(mod, parm = "smile", level = 0.9)
                5 %     95 %
    smile -1.754576 4.733641
    

6. Assumptions I–Linearity & Homoscedasticity

Before we can use regression analyses we need to make sure that our data fits all necessary assumptions.

The assumption of linearity says that the predictors should be linearly related to the response variable. As we cannot easily check this directly when we have more than one predictor, we are going to plot the residuals against each predictor. We would hope to see the residuals scattered evenly around zero, with no systematic pattern.

While we are looking at these residuals, we can also check the assumption of homoscedasticity. For this assumption to hold, the variation in the residuals should be roughly even at all levels of the predictor.

We can get the residuals from the summary() of our model, using $residuals. For example: summary(model)$residuals. We can then plot the residuals against the predictors using plot(), which takes the x-axis data as the first argument, and the y-axis data as the second argument.

  • Instructions

    • In your script, index the residuals from your model using $, and assign them to the object resmod.
    • In your script, plot resmod against smile and money using the function plot().
    • The first argument of plot() is the data that goes on the x-axis, and the second is the data on the y-axis. For example: plot(x_data, y_data).
  • Answers

    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Assign regression model to object "mod"
    mod <- lm(liking ~ smile + money)
      
    # Obtain the residuals from mod using $, assign to "resmod"
    resmod <- mod$residuals
      
    # Plot the 'smile' on the x-axis, with the residuals on the y-axis
    plot(smile, resmod)
      
      
    # Plot the 'money' on the x-axis, with the residuals on the y-axis
    plot(money, resmod)
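
Base R can also draw standard diagnostic plots straight from the fitted model; a small sketch (the which argument selects the plot: 1 is residuals vs fitted, useful for linearity; 3 is the scale-location plot, often used for homoscedasticity):

    # Residuals vs fitted values: look for no systematic pattern
    plot(mod, which = 1)
    # Scale-location plot: look for roughly constant spread
    plot(mod, which = 3)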
      
    

7. Assumptions II–Independent Errors & Sufficient Observations & Outliers (residuals/SD)

Let’s pretend that we obtained our data randomly from independent participants. That means we have addressed the assumption of independent errors.

Another assumption that we can assess definitively is sufficient observations. We have ten observations and two predictors - we already know that this is not enough!

One good thing about having only ten observations is that it is easy to manually check for regression outliers. We do this by looking at the standardized residuals. We can find the standardized residuals by dividing the values of the residuals by the standard deviation of the residuals. Standardized residuals greater than 3 or less than -3 are likely outliers.

  • Instructions

    • Your residuals are in your script as resmod, and you can find the standard deviation of the residuals using the function sd(), which takes the object containing your residuals between brackets.
    • In your script, add a line of code to print the standardized residuals.
    • Inspect the output and see if you can spot any standardized residuals over 3, or under -3.
  • Answers

    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Assign regression model to object "mod"
    mod <- lm(liking ~ smile + money)
      
    # Obtain the residuals from mod using $, assign to "resmod"
    resmod <- mod$residuals
    resmod
    # Print the standardized residuals
    sdres <- sd(resmod)  # named 'sdres' to avoid masking the built-in sd()
    resmod/sdres
    plot(resmod/sdres)
    
    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
    # Assign regression model to object "mod"
    mod <- lm(liking ~ smile + money)
    # Obtain the residuals from mod using $, assign to "resmod"
    resmod <- mod$residuals
    resmod
               1            2            3            4            5            6 
    -0.110680705 -0.460384427 -0.007994613 -0.868166014  3.633177032 -0.569666993 
               7            8            9           10 
    -2.317277179  0.586159403 -0.163544319  0.278377816 
    # Print the standardized residuals
    sdres <- sd(resmod)  # named 'sdres' to avoid masking the built-in sd()
    resmod/sdres
               1            2            3            4            5            6 
    -0.073659800 -0.306393286 -0.005320544 -0.577778530  2.417938104 -0.379122602 
               7            8            9           10 
    -1.542185459  0.390098567 -0.108841391  0.185264941 
    plot(resmod/sdres)
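
Dividing by the overall standard deviation is a quick approximation; base R also provides rstandard(), which computes internally studentized residuals that account for each point's leverage. A minimal sketch:

    # Leverage-adjusted standardized residuals from the fitted model
    rstandard(mod)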
    

8. Assumptions III–Residuals Normality

The last assumption we’re going to check is whether the residuals are normally distributed. We can do this by plotting the residuals in a histogram using the function hist(). If the residuals are normally distributed, the highest frequency should be around the middle value, with gradually lower frequencies as you move away from it.

Let’s try it!

  • Instructions

    • Add a line of code to your script that produces a histogram of residuals using the function hist().
    • hist() takes a vector of values as its first (and in this case, only) argument.
  • Answers

    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Assign regression model to object "mod"
    mod <- lm(liking ~ smile + money)
      
    # Obtain the residuals from mod using $, assign to "resmod"
    resmod <- mod$residuals
      
    # Plot the residuals in a histogram using hist()
    hist(resmod)
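
With only ten residuals a histogram is coarse; a normal quantile-quantile plot is another common check. A small sketch using base R:

    # Q-Q plot: points should fall near the reference line if residuals are normal
    qqnorm(resmod)
    qqline(resmod)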
    

9. Categorical Predictors I–as.factor(variable)

So far we have given participants money, smiled at them, but haven’t said anything - a bit creepy, right?

Let’s include a variable called talk. We either said something neutral (“Nice weather today”), rude (“You smell”), or polite (“You are great”). We coded these responses as 1, 2, and 3, respectively. We have to tell R that these values (1, 2, and 3) are categories and not numbers! We do this by making them into factors using the function as.factor(). For example: as.factor(variable). This tells R that the variable is categorical.

Let’s make talk into a categorical variable, and add it to our regression analysis!
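As an aside, factor() can also attach descriptive level names, which makes regression output easier to read. A small sketch (the talk_lab name and its labels are just illustrative):

    # Labelled version of 'talk': 1 = neutral, 2 = rude, 3 = polite
    talk_lab <- factor(c(1, 2, 3, 2, 3, 1, 2, 1, 3, 1),
                       labels = c("neutral", "rude", "polite"))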

  • Instructions
    • In your script, add talk to your regression model as a predictor.
    • In your regression model, use the function as.factor() on talk so that the model knows that the variable talk is a factor (category).
    • Hit “Submit” and look at the outcome!
  • Answer

    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
      
    # Vector containing what you said to participants (predictor)
    talk <- c(1, 2, 3, 2, 3, 1, 2, 1, 3, 1)
    as.factor(talk)
    # Add "talk" to your regression model as a factor predictor
    mod <- lm(liking ~ smile + money + as.factor(talk))
      
    # Obtain regression coefficients from "mod"
    summary(mod)
    
    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
    # Vector containing the amount the participants liked you (response)
    liking <- c(2.2, 2.8, 4.5, 3.1, 8.7, 5.0, 4.5, 8.8, 9.0, 9.2)
    # Vector containing what you said to participants (predictor)
    talk <- c(1, 2, 3, 2, 3, 1, 2, 1, 3, 1)
    as.factor(talk)
     [1] 1 2 3 2 3 1 2 1 3 1
    Levels: 1 2 3
    # Add "talk" to your regression model as a factor predictor
    mod <- lm(liking ~ smile + money + as.factor(talk))
    # Obtain regression coefficients from "mod"
    summary(mod)
      
    Call:
    lm(formula = liking ~ smile + money + as.factor(talk))
      
    Residuals:
          1       2       3       4       5       6       7       8       9      10 
    -0.3812  1.0200 -0.9840 -0.1779  1.7028 -1.1737 -0.8420  1.3264 -0.7188  0.2285 
      
    Coefficients:
                     Estimate Std. Error t value Pr(>|t|)   
    (Intercept)        1.9694     1.4816   1.329  0.24120   
    smile             -0.1523     1.5761  -0.097  0.92676   
    money              0.7033     0.1610   4.367  0.00724 **
    as.factor(talk)2  -1.4892     1.0999  -1.354  0.23373   # talk = 1 (neutral) is the reference level
    as.factor(talk)3   1.5572     1.1558   1.347  0.23571   
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
      
    Residual standard error: 1.378 on 5 degrees of freedom
    Multiple R-squared:  0.8689,	Adjusted R-squared:  0.7641 
    F-statistic: 8.286 on 4 and 5 DF,  p-value: 0.01973
    
  • Explained

    Enter summary(lm(liking ~ smile + money + as.factor(talk))) into your console to see the output from your regression analysis, which shows how much giving people money, smiling, and saying neutral, rude or polite statements predicts how much people like you.

    You included the neutral/rude/polite statements in your model as the values 1, 2, and 3. Behind the scenes, R compares rude statements (coded 2) with the reference category, neutral statements (coded 1), and compares polite statements (coded 3) with neutral statements. The output shows these comparisons as the coefficient estimates as.factor(talk)2 and as.factor(talk)3.
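    By default R uses the lowest level as the reference category; relevel() lets you pick a different baseline. A minimal sketch, assuming the factor version of talk:

    talk_f <- relevel(as.factor(talk), ref = "3")   # make 'polite' (coded 3) the reference
    summary(lm(liking ~ smile + money + talk_f))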

10. Regression With a Categorical Response I–glm(response ~ predictor1 + predictor2, family = binomial)

So we’ve seen what happens with a categorical predictor variable - what about a categorical response variable?

Let’s say instead of a continuous measure of liking, we have a binary measure - either someone likes us (1), or they don’t (0).

We can see whether we can predict whether someone likes us or not using logistic regression.

While previously we have used the lm() function, for logistic regression we will use the glm() function, with the added argument family = binomial. This should look something like this: glm(response variable ~ predictor variable + predictor variable, family = binomial).

  • \[glm(formula, data, family)\]
    • formula is a symbolic description of the relationship between the variables.

    • data is the dataset that supplies the values of those variables.

    • family is the R object that specifies the details of the model; for logistic regression its value is binomial.

    • family: each response distribution (from the exponential family) can be paired with various link functions that connect the mean to the linear predictor.

      • Commonly used family values:
        • binomial(link = 'logit') - the response follows a binomial distribution with a logit link, i.e. logistic regression
        • binomial(link = 'probit') - the response follows a binomial distribution with a probit link
        • poisson(link = 'log') - the response follows a Poisson distribution, i.e. Poisson regression
  • Instructions

    • In your script add a line of code that finds the logistic regression model for liking, predicted by smile and money.
    • Assign the outcome of this model to object logmod.
    • Use the summary() function to find the regression coefficients for logmod.
    • ‘Submit’ and look at the output!
  • Answers

    # Vector containing the amount the participants liked you (response)
    liking <- as.factor(c(0, 0, 0, 0, 1, 1, 0, 1, 1, 1))
      
    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
      
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
      
    # Apply the logistic regression model to our data using glm(), assign to logmod
    logmod <- glm(liking ~ smile + money, family = binomial(link = "logit"))
      
    # Find the regression coefficients for our regression model using summary()
    summary(logmod)
    # Hand-coded logistic function using the fitted coefficients:
    # P(liking = 1) = exp(b0 + b1*smile + b2*money)/(1 + exp(b0 + b1*smile + b2*money))
    logfunction <- function(smile, money){
        exp(-4.5794 - 1.4755*smile + 0.9312*money)/
            (1 + exp(-4.5794 - 1.4755*smile + 0.9312*money))
    }
    # Plot the predicted probability that each participant likes you
    plot(logfunction(smile, money))
    
      
    # Vector containing the amount the participants liked you (response)
    liking <- as.factor(c(0, 0, 0, 0, 1, 1, 0, 1, 1, 1))
    # Vector containing the amount of money you gave participants (predictor)
    money <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    # Vector containing how much you smiled (predictor)
    smile <- c(0.6, 0.7, 1.0, 0.1, 0.3, 0.1, 0.4, 0.8, 0.9, 0.2)
    # Apply the logistic regression model to our data using glm(), assign to logmod
    logmod <- glm(liking ~ smile + money, family = binomial(link = "logit"))
    # Find the regression coefficients for our regression model using summary()
    summary(logmod)
      
    Call:
    glm(formula = liking ~ smile + money, family = binomial(link = "logit"))
      
    Deviance Residuals: 
         Min        1Q    Median        3Q       Max  
    -1.77741  -0.25964   0.00357   0.53708   1.33627  
      
    Coefficients:
                Estimate Std. Error z value Pr(>|z|)
    (Intercept)  -4.5794     3.2763  -1.398    0.162
    smile        -1.4755     3.6210  -0.407    0.684
    money         0.9312     0.5934   1.569    0.117
      
    (Dispersion parameter for binomial family taken to be 1)
      
        Null deviance: 13.863  on 9  degrees of freedom
    Residual deviance:  6.943  on 7  degrees of freedom
    AIC: 12.943
      
    Number of Fisher Scoring iterations: 6
    # Hand-coded logistic function using the fitted coefficients:
    # P(liking = 1) = exp(b0 + b1*smile + b2*money)/(1 + exp(b0 + b1*smile + b2*money))
    logfunction <- function(smile, money){
        exp(-4.5794 - 1.4755*smile + 0.9312*money)/
            (1 + exp(-4.5794 - 1.4755*smile + 0.9312*money))
    }
    # Plot the predicted probability that each participant likes you
    plot(logfunction(smile, money))
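
Rather than hard-coding the rounded coefficients, predict() returns the same fitted probabilities directly from the model. A minimal sketch:

    # Fitted probabilities on the response (probability) scale
    predict(logmod, type = "response")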