
# Quiz Questions


📗 [1 point] (SU23FQ29, S22FQ17, F22FQ12) df has 5 columns and 10 rows. After running p = PCA(3) and p.fit(df), what is the shape of p.components_? Note: the rows of p.components_ are the principal components. (A verification sketch follows the options.)

- (3, 5)
- (5, 3)
- (3, 10)
- (10, 3)
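
A minimal sketch, assuming scikit-learn, pandas, and numpy are available; the 10x5 DataFrame here is placeholder data matching the question setup:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Placeholder data: 10 rows and 5 columns, as in the question.
df = pd.DataFrame(np.random.rand(10, 5))

p = PCA(3)   # keep 3 principal components
p.fit(df)
print(p.components_.shape)  # each row of components_ is one principal component
```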
📗 [1 point] (SU23FQ28, F21FQ3) The following is the explained_variance_ratio_ of a PCA model: array([0.4, 0.3, 0.2, 0.1]). How many components (at least) do we need to explain 80 percent (or more) of the variance of the original data? (A short check follows the options.)

- 3
- 2
- 1
- 4
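
A short check, assuming numpy; the ratios are taken straight from the question:

```python
import numpy as np

ratios = np.array([0.4, 0.3, 0.2, 0.1])  # explained_variance_ratio_ from the question
cumulative = np.cumsum(ratios)           # variance explained by the first k components
# Smallest k whose cumulative ratio reaches 0.8 (argmax finds the first True; +1 turns the index into a count).
print(cumulative, np.argmax(cumulative >= 0.8) + 1)
```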
📗 [1 point] (SU23FQ27, S23FQ6, F21FQ20) Given points [1, 2, 3, 4] and starting centroids [0] and [5], what are the centroids after the first iteration of assigning points and updating centroids, using the iterative K-Means clustering algorithm with Manhattan distance? (A one-step simulation follows the options.)

- [1.5, 3.5]
- [0, 5]
- [2, 4]
- [1, 3]
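
A minimal sketch of one assign-and-update step, assuming plain Python and the numbers from the question:

```python
points = [1, 2, 3, 4]
centroids = [0, 5]

# Assignment step: each point goes to the centroid with the smallest Manhattan distance.
clusters = {c: [] for c in centroids}
for x in points:
    nearest = min(centroids, key=lambda c: abs(x - c))
    clusters[nearest].append(x)

# Update step: each centroid moves to the mean of its assigned points.
new_centroids = [sum(members) / len(members) for members in clusters.values()]
print(new_centroids)
```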
📗 [1 point] (SU23FQ26, F22FQ16, F21FQ9) Which of the following is best for the K-Means algorithm? (An inertia-vs-k sketch follows the options.)

- small inertia, few clusters
- large inertia, few clusters
- small inertia, many clusters
- large inertia, many clusters
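
A minimal elbow-method sketch, assuming scikit-learn and numpy with a small random placeholder dataset; it shows why inertia and the number of clusters have to be traded off against each other:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(50, 2)  # placeholder data

# inertia_ (sum of squared distances to the nearest centroid) always shrinks as k grows,
# so a good clustering balances small inertia against a small number of clusters.
for k in range(1, 6):
    model = KMeans(n_clusters=k, n_init=10).fit(X)
    print(k, round(model.inertia_, 3))
```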
📗 [1 point] (SU23FQ25, S22FQ4, F21FQ15) Which of the following machine learning algorithms will produce a dendrogram? (A plotting sketch follows the options.)

- AgglomerativeClustering
- PCA
- KMeans
- LogisticRegression
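
A minimal sketch, assuming scikit-learn, scipy, numpy, and matplotlib, that fits AgglomerativeClustering on placeholder 1-D data and turns its merge tree into a scipy dendrogram (the linkage-matrix construction follows the recipe in the scikit-learn documentation):

```python
import numpy as np
from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1.0], [2.0], [3.0], [4.0], [8.0]])  # placeholder data

# distance_threshold=0 with n_clusters=None keeps the full merge tree and fills in distances_.
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None).fit(X)

# Convert the fitted model into the (n-1, 4) linkage matrix that scipy's dendrogram expects:
# columns are [left child, right child, merge distance, number of leaves under the merge].
n_samples = len(model.labels_)
counts = np.zeros(model.children_.shape[0])
for i, merge in enumerate(model.children_):
    counts[i] = sum(1 if child < n_samples else counts[child - n_samples] for child in merge)
linkage = np.column_stack([model.children_, model.distances_, counts]).astype(float)

dendrogram(linkage)
plt.show()
```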
📗 [1 point] (S23FQ19) Which of the following does NOT describe a dendrogram? (That is, a dendrogram is not always a ...?)

- balanced tree
- binary tree
- directed graph
- acyclic graph
📗 [1 point] (S23FQ1) Consider the following code for PCA: p = PCA(), then W = p.fit_transform(df) and C = p.components_. Which of the following approximately reconstructs the original dataframe df using the first three components? (A reconstruction sketch follows the options.)

- W[:, :3] @ C[:3, :] + p.mean_
- W[:, :3] @ C[:, :3] + p.mean_
- W[:3, :] @ C[:3, :] + p.mean_
- W[3:, :] @ C[:, :3] + p.mean_
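
A minimal sketch, assuming scikit-learn, numpy, and pandas with a placeholder 10x5 DataFrame, showing the shapes involved: W has one row per sample and one column per component, while C has one row per component and one column per original feature.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

df = pd.DataFrame(np.random.rand(10, 5))  # placeholder data

p = PCA()
W = p.fit_transform(df)   # shape (10, 5): rows are samples, columns are components
C = p.components_         # shape (5, 5): rows are components, columns are original features

# Keep the first three components: the first 3 columns of W times the first 3 rows of C,
# then add back the feature means that PCA subtracted before projecting.
approx = W[:, :3] @ C[:3, :] + p.mean_
print(approx.shape)
print(np.abs(approx - df.values).max())  # reconstruction error from dropping the last two components
```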
📗 [1 point] (S23FQ15, S22FQ19, F21FQ18) Which of the following machine learning algorithms enables us to predict a number? (A fit-and-predict sketch follows the options.)

- LinearRegression
- KMeans
- SVC (support vector machine)
- PCA
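
A minimal sketch, assuming scikit-learn and numpy with placeholder data, of fitting a regressor and predicting a numeric value:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder data: y is a noisy linear function of x.
x = np.arange(10).reshape(-1, 1)
y = 3 * x.ravel() + 1 + np.random.normal(scale=0.1, size=10)

model = LinearRegression().fit(x, y)
print(model.predict([[12]]))  # outputs a continuous number rather than a class label or a cluster id
```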
📗 [1 point] (new) The gradient vector dw at [w1, w2, w3, w4] = [1, -1, 2, -2] is [-2, 2, -1, 1]. If the gradient descent update w = w - alpha * dw is used, which variable will increase by the largest amount in the next iteration? (A one-step update sketch follows the options.)

- w1
- w2
- w3
- w4
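
A one-step sketch, assuming numpy; the learning rate alpha = 0.1 is a placeholder, and any positive value gives the same ordering:

```python
import numpy as np

w = np.array([1.0, -1.0, 2.0, -2.0])
dw = np.array([-2.0, 2.0, -1.0, 1.0])
alpha = 0.1  # placeholder learning rate

w_new = w - alpha * dw
# The change in each variable is -alpha * dw, so the most negative gradient entry
# is the variable that increases the most.
print(w_new - w)
```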
📗 [1 point] (new) If the linear program max 2 w1 - w2 subject to w1 - w2 <= 1 and w1 + w2 >= 0 with w1, w2 >= 0 is written in the standard form max c * x subject to A x <= b and x >= 0, what is the matrix A? Assume c = [2, -1] and b = [1, 0]. (A constraint-check sketch follows the options.)

- [[1, -1], [-1, -1]]
- [[1, -1], [1, 1]]
- [[-1, 1], [-1, -1]]
- [[-1, 1], [1, 1]]
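
A short sketch, assuming numpy, that checks each candidate A against the original constraints on a grid of non-negative points; the labels A1-A4 are just names for the four options in this sketch. The key step is that a >= constraint becomes a <= constraint by negating both sides.

```python
import numpy as np

b = np.array([1, 0])
# The four candidate A matrices from the options.
candidates = {
    "A1": np.array([[1, -1], [-1, -1]]),
    "A2": np.array([[1, -1], [1, 1]]),
    "A3": np.array([[-1, 1], [-1, -1]]),
    "A4": np.array([[-1, 1], [1, 1]]),
}

def feasible_original(w1, w2):
    # Original constraints: w1 - w2 <= 1 and w1 + w2 >= 0 (w1, w2 >= 0 is enforced by the grid below).
    return w1 - w2 <= 1 and w1 + w2 >= 0

# A candidate is correct if A @ w <= b describes the same feasible region as the original constraints.
grid = [(w1, w2) for w1 in np.linspace(0, 3, 7) for w2 in np.linspace(0, 3, 7)]
for name, A in candidates.items():
    agrees = all(np.all(A @ np.array([w1, w2]) <= b) == feasible_original(w1, w2) for w1, w2 in grid)
    print(name, agrees)
```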





Last Updated: April 29, 2024 at 1:10 AM