# Quiz Questions



📗 [1 point] (SU23FQ23, F21FQ24) Which of the following is equivalent to the third line of the code snippet? Note: fit_intercept = False means the bias b is set to 0, and expit from scipy.special is the logistic (sigmoid) activation function.

```python
model = LogisticRegression(fit_intercept = False)
model.fit(train[xcols], train[ycol])
pred_y = model.predict(test[xcols])
```

Given X = test[xcols].values and c = model.coef_.reshape(-1, 1):
- pred_y = expit(X @ c) > 0.5
- pred_y = expit(X @ c)
- pred_y = X @ c > 0.5
- pred_y = X @ c
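
A minimal sketch (synthetic data, names of my own choosing) that compares model.predict against the manual matrix computation, assuming no bias term:

```python
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for train/test (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))
y_train = (X_train @ np.array([1.0, -2.0]) > 0).astype(int)
X_test = rng.normal(size=(20, 2))

model = LogisticRegression(fit_intercept=False)
model.fit(X_train, y_train)

c = model.coef_.reshape(-1, 1)
# With no bias, predict() thresholds the sigmoid of X @ c at 0.5
manual = (expit(X_test @ c) > 0.5).ravel().astype(int)
print((manual == model.predict(X_test)).all())  # True
```
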
📗 [1 point] (SU23FQ20, S23FQ4, F21FQ19) Assume column c1 is a categorical column containing 4 categories, and c2 is a numerical column. How many columns will be produced after we apply the following custom_transformer?

```python
custom_transformer = make_column_transformer(
    (OneHotEncoder(), ["c1"]),
    (PolynomialFeatures(degree = 2, include_bias = False), ["c2"]),
)
```

- 6
- 5
- 3
- 2
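
One way to count the output columns is to run the transformer on a small made-up frame whose c1 column has 4 distinct categories:

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures

# Made-up data: c1 has 4 distinct categories, c2 is numeric
df = pd.DataFrame({"c1": list("abcdab"), "c2": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]})

custom_transformer = make_column_transformer(
    (OneHotEncoder(), ["c1"]),
    (PolynomialFeatures(degree=2, include_bias=False), ["c2"]),
)
# 4 one-hot columns for c1, plus c2 and c2 ** 2 from PolynomialFeatures
print(custom_transformer.fit_transform(df).shape[1])  # 6
```
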
📗 [1 point] (SU23FQ18, S23FQ3, F21FQ6) When we compare two models trained on the same training set by computing their cross-validation scores, which of the following characteristics indicates the better model?
- large mean, small variance
- large mean, large variance
- small mean, small variance
- small mean, large variance
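
For intuition, a sketch (synthetic data, an arbitrary model) printing the two statistics; cross-validation scores here are accuracies, so a higher mean is better, and a smaller variance means the score is stable across folds:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
# Prefer the model with the larger mean and smaller variance across folds
print(scores.mean(), scores.var())
```
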
📗 [1 point] (S23FQ26, S23FQ14, S22FQ21, F22FQ5) If A = numpy.array([[1, 0], [0, 1]]) and b = numpy.array([[2], [3]]), what is A @ b?
- numpy.array([[2], [3]])
- numpy.array([2, 3])
- numpy.array([[2, 0], [0, 3]])
- numpy.array([[2, 2], [3, 3]])
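
Since A is the 2 by 2 identity matrix, multiplying by it leaves b unchanged; a quick check:

```python
import numpy as np

A = np.array([[1, 0], [0, 1]])  # the 2x2 identity matrix
b = np.array([[2], [3]])        # a column vector of shape (2, 1)
print(A @ b)                    # [[2]
                                #  [3]] -- identity times b is b
```
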
📗 [1 point] (S22FQ15) What is a valid simplification of X @ numpy.linalg.solve(X, y), assuming the code runs without error (and without numerical instability)?
- y
- X
- X @ y
- y @ X
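
numpy.linalg.solve(X, y) returns the c satisfying X @ c = y, so composing with X recovers y; a quick check on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))  # a random square matrix (invertible with probability 1)
y = rng.normal(size=(3, 1))

# solve(X, y) is the c with X @ c = y, so X @ c gives back y
print(np.allclose(X @ np.linalg.solve(X, y), y))  # True
```
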
📗 [1 point] (S22FQ10, F22FQ28, F21FQ10) The shape of A is (2, 3), the shape of B is (3, 3), and the shape of C is (3, 4). What is the shape of A @ B @ C?
- (2, 4)
- (3, 3)
- (4, 2)
- (Error)
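
The inner dimensions match at each step, so the product keeps the outer dimensions; a quick check:

```python
import numpy as np

A = np.zeros((2, 3))
B = np.zeros((3, 3))
C = np.zeros((3, 4))
# (2, 3) @ (3, 3) -> (2, 3); then (2, 3) @ (3, 4) -> (2, 4)
print((A @ B @ C).shape)  # (2, 4)
```
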
📗 [1 point] (S22FQ7) Which call makes predictions using a computation similar to X @ c, where X is the design matrix and c is the coefficient vector?
- LinearRegression.predict
- LinearRegression.predict_proba
- LogisticRegression.predict
- LogisticRegression.predict_proba
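
For linear regression, predict computes X @ c plus the intercept (exactly X @ c when fit_intercept = False), while logistic regression passes that product through a sigmoid and thresholds it; a sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data (for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([2.0, -1.0]) + 0.5

model = LinearRegression().fit(X, y)
manual = X @ model.coef_ + model.intercept_  # the X @ c (+ bias) computation
print(np.allclose(manual, model.predict(X)))  # True
```
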
📗 [1 point] (new) In a neural network sklearn.neural_network.MLPClassifier(hidden_layer_sizes = [3, 4]) with 2 input features, used for binary classification, how many weights and biases does the network have?
- 6 + 12 + 4 weights, 3 + 4 + 1 biases
- 2 + 3 + 4 weights, 3 + 4 + 1 biases
- 2 + 6 + 12 weights, 2 + 3 + 4 biases
- 3 + 4 + 1 weights, 2 + 3 + 4 biases
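
The layer sizes are 2 -> 3 -> 4 -> 1 (one output unit for binary classification), so the weight matrices have 2 * 3, 3 * 4, and 4 * 1 entries, and each non-input layer contributes its size in biases. A sketch on synthetic data that reads the shapes off the fitted attributes (the small max_iter only keeps it fast and may emit a convergence warning):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_features=2, n_informative=2, n_redundant=0,
                           random_state=0)
net = MLPClassifier(hidden_layer_sizes=[3, 4], max_iter=50).fit(X, y)

print([W.shape for W in net.coefs_])       # [(2, 3), (3, 4), (4, 1)] -> 6 + 12 + 4 weights
print([b.shape for b in net.intercepts_])  # [(3,), (4,), (1,)] -> 3 + 4 + 1 biases
```
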
📗 [1 point] (new) If x0 has three columns, and x = sklearn.preprocessing.PolynomialFeatures(2).fit_transform(x0) is used as the design matrix, how many weights (including both coefficients and biases) will a linear regression estimate?
- 10
- 8
- 7
- 3
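
PolynomialFeatures(2) on 3 columns produces the bias column, 3 linear terms, and 6 degree-2 terms (3 squares plus 3 pairwise products), one weight per design-matrix column; a quick check:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x0 = np.ones((5, 3))  # any matrix with three columns
x = PolynomialFeatures(2).fit_transform(x0)
# 1 bias + 3 linear + 6 degree-2 columns = 10
print(x.shape[1])  # 10
```
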
📗 [1 point] (new) Suppose the R-squared score of a linear regression with 4 features is 0.9, and the coefficients and scores after dropping each feature are summarized in the following table. If one feature is dropped based on the R-squared score, which feature should be dropped?

| Feature | Coefficient | Score if Dropped |
|---------|-------------|------------------|
| 1       | 1           | 0.6              |
| 2       | 10          | 0.8              |
| 3       | -10         | 0.7              |
| 4       | 5           | 0.5              |

- 2
- 1
- 3
- 4
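
One way to read the table: backward elimination drops the feature whose removal reduces the R-squared score the least, i.e. the one with the highest score after dropping. A tiny sketch of that rule, with the table values hard-coded:

```python
# R-squared after dropping each feature, copied from the table above
score_if_dropped = {1: 0.6, 2: 0.8, 3: 0.7, 4: 0.5}

# Drop the feature whose removal keeps the score highest (smallest loss from 0.9)
drop = max(score_if_dropped, key=score_if_dropped.get)
print(drop)  # 2
```
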





Last Updated: April 29, 2024 at 1:10 AM