# Multiple Choice (Preview)

## Taking ADMS 3330 at the IBM Markham location?

Send me an email at Jason@StatsDoesntSuck.com and let me know which day / time you attend – I have a special offer specific to your section.

Lesson tags: ADMS 3330, Chapter 16, Freebie, Simple linear regression
Back to: Chapter 16: Simple linear regression and correlation

## 24 thoughts on “Multiple Choice (Preview)”

1. Hi Jason, I guess you did not solve the question that asks:
If an estimated regression line has a Y-intercept of 10 and a slope of 4, then when x=2 the actual value of y is?

Can you please post a video for this MCQ as well? It is in the PDF folder you have posted.

2. Hi Jason!

For question 2, I thought the formula for residuals was the other way around.

Meaning, you take the actual y and subtract it from the expected y (y hat).

Wouldn’t that flip all the answers to be negative?

Also, how do you know if you need to first sum up all the residuals and then square the answer, or square each answer and then sum?

Thanks!

1. Follow-up question – for question 21, could you not have a perfect relationship that is negative? So for every increase in x, there is a decrease in y. If so, wouldn’t r be -1 rather than 1, while R-squared would remain the same?

Is that possible?

3. Hi Jason

For question #10, the first-order linear model’s equation is y = β0 + β1x + ε. Hence wouldn’t the answer be β0 & β1 instead of b0 & b1? Isn’t the latter used for the least squares line, which is y = b0 + b1x?

Thanks

1. You would be correct if we were looking for the population parameters, but we are asked instead to identify what ESTIMATES the population parameters – The population parameters are ESTIMATED BY the sample statistics.

β0 is estimated by b0
β1 is estimated by b1

4. Hi Jason,

For m/c question 18, when we look up the t value, do we always have to divide the alpha by 2?

5. Hey,
why is there a difference between Multiple Choice 13 and 10?
I don’t get it; the question looks the same for both.

1. Actually – There is a very small, but important difference between these two questions…

#10) In the first order linear regression model, the population parameters of the y-intercept and the slope are estimated respectively, by:

#13) In the first-order linear regression model, the population parameters of the y-intercept and the slope are, respectively,

In question #10 we are not looking for the actual parameters of slope and y-intercept in the population – only the coefficients that estimate them.

6. I don’t understand MC 16.1

If the correct formula is e = y – yhat, then the answer would be different than if the formula is the way you showed in the video, which is e = yhat – y.

1. Ah, never mind – I see you corrected the error in 16.8.

In 16.1 I see it doesn’t make a difference because it’s squared; I was actually putting in the incorrect values (I put a y value into x by accident for one of them).
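For anyone following this thread, here is a tiny sketch (with hypothetical y and yhat values) of why the sign convention doesn’t matter once each residual is squared before summing:

```python
# Each residual is e = y - yhat; SSE squares each residual FIRST, then sums.
y    = [5.0, 7.0, 9.0]   # hypothetical observed values
yhat = [4.5, 7.5, 8.0]   # hypothetical fitted values

residuals = [yi - yh for yi, yh in zip(y, yhat)]   # e = y - yhat
flipped   = [yh - yi for yi, yh in zip(y, yhat)]   # e = yhat - y (the "other way around")

sse         = sum(e ** 2 for e in residuals)
sse_flipped = sum(e ** 2 for e in flipped)
print(residuals, sse)   # [0.5, -0.5, 1.0] 1.5
assert sse == sse_flipped   # squaring removes the sign difference
```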

7. Jason, how do you calculate this: if the standard error of estimate = 20 and n = 10, then the sum of squares for error [SSE] is?

What will the answer be?

1. The formula that contains all three values mentioned is
SE = √(SSE/(n-2))

Fill in the given values…
20 = √(SSE/(10-2))

Square both sides (to cancel out the √ sign)…
400 = (SSE/8)

Multiply both sides by 8 (to isolate SSE)…
3,200 = SSE

Therefore SSE = 3,200
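The same algebra, as a quick check in Python:

```python
# SE = sqrt(SSE / (n - 2)), so rearranging gives SSE = SE^2 * (n - 2)
import math

se, n = 20, 10
sse = se ** 2 * (n - 2)
print(sse)  # 3200

# Round-trip: plugging SSE back in recovers SE = 20
assert math.isclose(math.sqrt(sse / (n - 2)), se)
```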

8. For question number 10, it’s asking for the population parameters, not the sample statistics, so why would the answer be the sample statistics?

1. because the population parameters are ESTIMATED BY the sample statistics.

β0 is estimated by b0
β1 is estimated by b1

9. Hey Jason, a quick question – I still don’t know how to calculate a p-value. Any ideas? 🙁

1. This was taught in Chapter 11 of ADMS 2320. Here is my video tutorial from that chapter. The type of question is a bit different, but the way you calculate p-values is the same:

You can ONLY calculate p-values manually for z-tests. For Chapter 16 t-tests the p-values MUST be given.
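As a sketch of that manual z-test calculation (the z statistic of 1.96 below is just an assumed example), the standard normal CDF can be built from Python’s error function:

```python
import math

def normal_cdf(z):
    """P(Z <= z) for the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = 1.96  # assumed test statistic, for illustration only
p_right = 1.0 - normal_cdf(z)                 # one-tailed p-value (H1: >)
p_two   = 2.0 * (1.0 - normal_cdf(abs(z)))    # two-tailed p-value (H1: not equal)
print(round(p_right, 4), round(p_two, 4))     # 0.025 0.05
```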
1. Thanks Jason … I need to say many thanks! You know why? Because of you I learned so much!

10. Jason,
This is another similar question, please.
The sample correlation coefficient between x and y is 0.375. It has been found that the p-value is 0.256 when testing against a two-sided alternative. To test against the one-sided alternative at a significance level of 0.193, the p-value will be equal to
a. 0.128
b. 0.512
c. 0.744
d. 0.872
Thanks.

1. The correlation coefficient of 0.375 and the significance level of 0.193 are both irrelevant. All that we need to know is the one-sided p-value is HALF the p-value when testing two-sided. So the answer is:

One-sided p-value = (0.5) x (0.256) = 0.128 < < So (a) is the answer…

#### HOWEVER – Depending on the sign used in the alternative hypothesis, the answer COULD be (d) if testing to see if the population correlation is negative. The question is not worded properly. Here is the original question with the correct information:

The sample correlation coefficient between x and y is 0.375. It has been found that the p-value is 0.256 when testing H0: ρ = 0 against the two-sided alternative H1: ρ ≠ 0. To test H0: ρ = 0 against the one-sided alternative H1: ρ > 0 at a significance level of 0.193, the p-value will be equal to
a. 0.128
b. 0.512
c. 0.744
d. 0.872

11. Hi Jason,

Could you please, explain this question;
The sample correlation coefficient between x and y is 0.375. It has been found out that the p– value is 0.744 when testing against the one-sided alternative . To test the against the two-sided alternative at a significance level of 0.193, the p – value is
a. 0.372
b. 1.488
c. 0.256
d. 0.512

1. Hi,

The correlation coefficient is irrelevant. All that we need to know is that the two-sided p-value is DOUBLE the p-value when testing one-sided. So the answer is:

Two-sided p-value = (2) x (0.744) = 1.488 …. < <

#### BUT THIS IS WRONG! P-values cannot be greater than 1.0. So…

For p-values that are greater than 0.5, we need to double just the tail, so we would use:

Two-sided p-value = (2) x (1-0.744) = 0.512 …. < < So (d) is the correct answer.
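The rule in this answer can be sketched as a one-liner (the helper name is hypothetical):

```python
def two_sided_from_one_sided(p_one):
    # Double the one-sided p-value when it is at most 0.5;
    # otherwise double the tail (1 - p_one), since p-values cannot exceed 1.
    return 2 * min(p_one, 1 - p_one)

print(round(two_sided_from_one_sided(0.744), 3))  # 0.512  (doubles the 0.256 tail)
print(round(two_sided_from_one_sided(0.128), 3))  # 0.256
```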

12. Hi Jason, in MC question 17, what is the sum of squares for regression?

1. The Sum of Squares Regression (SSR) measures the amount of variation in your dependent variable (Y) that can be predicted (or explained) by the independent variable.

For example, if you are studying the relationship between the amount of ice cream you eat every day (X) and your weight (Y), then SSR measures how much of your fluctuating weight can be blamed on the ice cream that you either ate or did not eat… But certainly your weight does not depend on ice cream alone! You may also eat other foods that contribute to weight gain, or perhaps you exercise for 2 hours after every bowl you eat. The Sum of Squares Error (SSE) measures all of the variation in your weight that doesn’t appear to be related to ice cream.

Hope that makes sense – I’m going to get some ice cream now!
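The SST = SSR + SSE split described above can be sketched with made-up ice-cream numbers (both the x and y values are hypothetical):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # bowls of ice cream per day (hypothetical)
y = np.array([60.0, 62.0, 61.0, 65.0, 66.0])   # weight in kg (hypothetical)

# Fit the least squares line
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
yhat = b0 + b1 * x

sst = np.sum((y - y.mean()) ** 2)     # total variation in weight
ssr = np.sum((yhat - y.mean()) ** 2)  # variation explained by ice cream
sse = np.sum((y - yhat) ** 2)         # variation left unexplained
assert np.isclose(sst, ssr + sse)     # the decomposition always holds
print(ssr / sst)                      # R-squared: the fraction SSR explains
```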