# Epic Section Final - Online

📗 Enter your ID (the wisc email ID without @wisc.edu) in the box, then click the button (or hit the Enter key).
📗 You can print the page and solve the problems on paper, or annotate the PDF file. You can also write your answers on blank paper or in separate files. To get full points, you have to state your final answers clearly and provide explanations of how you obtained them.
📗 Please submit the file (scanned or annotated) on Canvas to Assignment X1 before the end of the exam.

# Warning: please enter your ID before you start!




# Epic Section Final - In Person


📗 Name: ____________________

📗 Wisc ID: ____________________

📗 Please state your final answers clearly. You do not have to evaluate mathematical expressions. You do not have to fit your answers into the answer text boxes.



# Question 1





# Question 2





# Question 3





# Question 4





# Question 5





# Question 6





# Question 7





# Question 8





# Question 9





# Question 10





# Question 11





# Question 12





# Question 13





# Question 14





# Question 15





# Question 16





# Question 17





# Question 18





# Question 19





# Question 20





# Question 21





# Question 22





# Question 23





# Question 24





# Question 25





# Question 26





# Question 27





# Question 28





# Question 29





# Question 30






# Blank Page


📗 [4 points] What is the convolution between the image and the filter using zero padding? Remember to flip the filter first.
📗 Answer (matrix with multiple lines, each line is a comma separated vector): .
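📗 Note: the image and filter above are left blank; as a reference for the procedure (flip the filter, zero-pad the image, then slide), here is a minimal NumPy sketch using hypothetical 3 by 3 matrices, not the question's values.
```python
import numpy as np

def conv2d_zero_pad(image, kernel):
    """2D convolution with zero padding so the output has the same size as the image."""
    kernel = np.flip(kernel)            # flip the filter in both dimensions first
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2           # padding needed for a same-size output
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical example values (not the ones from the question):
image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
kernel = np.array([[0, 1, 0],
                   [1, -4, 1],
                   [0, 1, 0]])
print(conv2d_zero_pad(image, kernel))
```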
📗 [4 points] In a convolutional neural network, suppose the activation map of a convolution layer is . What is the activation map after a non-overlapping (stride 2) 2 by 2 max-pooling layer?
📗 Answer (matrix with multiple lines, each line is a comma separated vector): .
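📗 Note: the activation map above is left blank; the following sketch shows non-overlapping 2 by 2 max-pooling with stride 2 on a hypothetical 4 by 4 input.
```python
import numpy as np

def max_pool_2x2(a):
    """Non-overlapping 2 by 2 max-pooling with stride 2 (assumes even dimensions)."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Hypothetical activation map, not the question's values:
a = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [0, 6, 7, 2],
              [3, 1, 2, 8]])
print(max_pool_2x2(a))   # expected: [[4, 5], [6, 8]]
```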
📗 [4 points] A convolutional neural network has an input image of size x that is connected to a convolutional layer that uses a x filter, zero padding of the image, and a stride of 1. There are activation maps. (Here, zero padding implies that these activation maps have the same size as the input images.) The convolutional layer is then connected to a pooling layer that uses x max pooling with a stride of (non-overlapping, no padding) on the convolutional layer. The pooling layer is then fully connected to an output layer that contains output units. There are no hidden layers between the pooling layer and the output layer. How many different weights must be learned in this whole network, not including any biases?
📗 Answer: .
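📗 Note: the sizes above are left blank, but the counting pattern, assuming a single-channel input image of size \(n \times n\), one shared \(k \times k\) filter per activation map, \(m\) activation maps, \(p \times p\) non-overlapping max pooling, and \(c\) output units (all hypothetical symbols, not values from the question), is \(m k^{2} + m \left(\dfrac{n}{p}\right)^{2} c\): the convolutional filters are shared across positions, the pooling layer has no weights, and the pooling layer is fully connected to the output layer.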
📗 [4 points] Suppose the states are integers between and . The initial state is , and the goal state is . The successors of a state \(i\) are \(2 i\) and \(2 i + 1\), if they exist. How many states are expanded using a Breadth First Search? Include both the initial and goal states.
📗 Note: use the convention from the lectures: enqueue the state with the smaller index into the queue first.
📗 Answer: .
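📗 Note: the bounds, initial state, and goal above are left blank; as a reference for the expansion count, here is a minimal BFS sketch on this successor structure with hypothetical values (states 1 to 15, goal 11), enqueuing the smaller successor first and counting the goal when it is removed from the frontier.
```python
from collections import deque

def bfs_expanded(start, goal, max_state):
    """Count states expanded by BFS; successors of i are 2i and 2i+1 if within bounds."""
    frontier = deque([start])
    expanded = 0
    while frontier:
        state = frontier.popleft()
        expanded += 1                               # state is expanded (goal-checked) here
        if state == goal:
            return expanded                         # includes both initial and goal states
        for succ in (2 * state, 2 * state + 1):     # smaller index enqueued first
            if succ <= max_state:
                frontier.append(succ)
    return expanded

# Hypothetical example: states 1..15, start at 1, goal at 11
print(bfs_expanded(1, 11, 15))
```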
📗 [4 points] Suppose the states are integers between and . The initial state is , and the goal state is . The successors of a state \(i\) are \(2 i\) and \(2 i + 1\), if they exist. How many states are expanded using a Depth First Search? Include both the initial and goal states.
📗 Note: use the convention from the lectures: push the state with the larger index onto the stack first (i.e. expand the state with the smaller index first).
📗 Answer: .
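📗 Note: the same setup with DFS; a minimal sketch with the same hypothetical values, pushing the larger successor first so the smaller index is expanded first.
```python
def dfs_expanded(start, goal, max_state):
    """Count states expanded by DFS; push the larger successor first so the
    smaller-index successor is expanded first (lecture convention)."""
    frontier = [start]                              # stack
    expanded = 0
    while frontier:
        state = frontier.pop()
        expanded += 1
        if state == goal:
            return expanded
        for succ in (2 * state + 1, 2 * state):     # larger pushed first, smaller popped first
            if succ <= max_state:
                frontier.append(succ)
    return expanded

# Hypothetical example: states 1..15, start at 1, goal at 11
print(dfs_expanded(1, 11, 15))
```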
📗 [2 points] Consider a search graph that is a tree in which each internal node has children. The only goal node is at depth (the root is at depth 0). How many total goal-checks will be performed by in the luckiest case (i.e. the smallest number of goal-checks)? If a node is checked multiple times, you should count it multiple times.
📗 Answer:
📗 [2 points] Consider a search graph that is a tree in which each internal node has children. The only goal node is at depth (the root is at depth 0). How many total goal-checks will be performed by in the luckiest case (i.e. the smallest number of goal-checks)? If a node is checked multiple times, you should count it multiple times.
📗 Answer:
📗 [2 points] Consider \(n + 1\) = + \(1\) states. The initial state is \(1\), and the goal state is \(n\). State \(0\) is a dead-end state with no successors. Each non-\(0\) state \(i\) has two successors: \(i + 1\) and \(0\). We may expand the same state many times, because we do not keep track of which states were checked previously. How many states (including repeated ones) will be expanded by ? Break ties by expanding the state with the index first.
📗 Note: the tie-breaking rule may be different from the convention used during the lectures; please read the question carefully.
📗 Answer: .
📗 [2 points] Consider \(n + 1\) = + \(1\) states. The initial state is \(1\), and the goal state is \(n\). State \(0\) is a dead-end state with no successors. Each non-\(0\) state \(i\) has two successors: \(i + 1\) and \(0\). We may expand the same state many times, because we do not keep track of which states were checked previously. How many states (including repeated ones) will be expanded by ? Break ties by expanding the state with the index first.
📗 Note: the tie-breaking rule may be different from the convention used during the lectures; please read the question carefully.
📗 Answer: .
📗 [2 points] Consider \(n + 1\) = + \(1\) states. The initial state is \(1\), and the goal state is \(n\). State \(0\) is a dead-end state with no successors. Each non-\(0\) state \(i\) has two successors: \(i + 1\) and \(0\). We may expand the same state many times, because we do not keep track of which states were checked previously. How many states (including repeated ones) will be expanded by ? Break ties by expanding the state with the index first.
📗 Note: the tie-breaking rule may be different from the convention used during the lectures; please read the question carefully.
📗 Answer: .
📗 [3 points] Suppose the initial state is \(S\) and the goal state is \(G\). What is the smallest integer value of the heuristic at state \(1\) such that, when A search (A* without the star) is used on the following graph, it does not find the optimal solution? In case of a tie, expand the state with the larger index (i.e. \(2\) before \(1\)).
📗 In case the diagram is not clear, the edge costs are

📗 Answer: .
📗 [4 points] Run search algorithm on the following graph, starting from state 0 with the goal state being . Write down the expansion path (in the order of the states expanded). The heuristic function \(h\) is shown as subscripts. Break ties by expanding the state with the smaller index.

📗 In case the diagram is not clear: the weights are (with heuristic values on the diagonal entries): .
📗 Answer (comma separated vector): .
📗 [4 points] Run search algorithm on the following graph, starting from state 0 with the goal state being . Write down the expansion path (in the order of the states expanded). The heuristic function \(h\) is shown as subscripts. Break ties by expanding the state with the smaller index.

📗 In case the diagram is not clear: the weights are (with heuristic values on the diagonal entries): .
📗 Answer (comma separated vector): .
📗 [3 points] Let \(h_{1}\) be an admissible heuristic from a state to the optimal goal. For which of the following choices of \(h\) will A* search use an admissible heuristic? Enter the correct choices as a list, comma separated, without parentheses, for example, "1, 2, 4".
📗 Choices:
(1)
(2)
(3)
(4)
(5)
(6)
(7) None of the above
📗 Answer (comma separated vector): .
📗 [3 points] Let \(h_{1}\) be an admissible heuristic from a state to the optimal goal. For which of the following choices of \(h\) will A* search use an admissible heuristic? Enter the correct choices as a list, comma separated, without parentheses, for example, "1, 2, 4".
📗 Choices:
(1)
(2)
(3)
(4)
(5)
(6)
(7) None of the above
📗 Answer (comma separated vector): .
📗 [2 points] In simulated annealing we move from \(s\) to an inferior neighbor \(t\) with probability \(\exp\left(\dfrac{- \left| f\left(s\right) - f\left(t\right) \right|}{T}\right)\), where \(T\) is the temperature parameter. Suppose \(f\left(s\right)\) = and \(f\left(t\right)\) = and \(T\) = . What is the probability we stay at \(s\) instead of moving to \(t\)?
📗 Note: we are minimizing the score.
📗 Answer: .
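📗 Note: \(f\left(s\right)\), \(f\left(t\right)\), and \(T\) above are left blank; the staying probability is one minus the acceptance probability, as in this sketch with hypothetical values.
```python
import math

def stay_probability(f_s, f_t, T):
    """Probability of staying at s when t is an inferior neighbor (minimizing the score)."""
    move_prob = math.exp(-abs(f_s - f_t) / T)
    return 1.0 - move_prob

# Hypothetical values, not the ones from the question:
print(stay_probability(f_s=2.0, f_t=5.0, T=3.0))   # 1 - exp(-1)
```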
📗 [4 points] Let the states be 3D points with integer coordinates \(\left(i, j, k\right)\) with boundary constraints and and . Each state \(\left(i, j, k\right)\) has six successors \(\left(i - 1, j, k\right), \left(i + 1, j, k\right), \left(i, j - 1, k\right), \left(i, j + 1, k\right), \left(i, j, k - 1\right), \left(i, j, k + 1\right)\), or a subset thereof subject to the boundary constraints. The score of state \(\left(i, j, k\right)\) is . Which local minimum will be reached if hill climbing is used starting from ? Enter the state, not the score.
📗 Answer (comma separated vector): .
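📗 Note: the boundary constraints, score function, and starting point above are left blank; the following sketch runs the hill-climbing (descent) loop on the 3D integer lattice with a hypothetical score and bounds, moving to the best strictly better neighbor until none exists.
```python
def hill_climb(start, score, lo, hi):
    """Hill climbing (descent) over 3D integer points, moving to the best neighbor
    whenever it strictly improves (decreases) the score."""
    current = start
    while True:
        i, j, k = current
        neighbors = [(i - 1, j, k), (i + 1, j, k),
                     (i, j - 1, k), (i, j + 1, k),
                     (i, j, k - 1), (i, j, k + 1)]
        neighbors = [p for p in neighbors if all(lo <= c <= hi for c in p)]
        best = min(neighbors, key=score)
        if score(best) >= score(current):   # no strictly better neighbor: local minimum
            return current
        current = best

# Hypothetical score and bounds, not from the question:
score = lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2 + p[2] ** 2
print(hill_climb((0, 0, 0), score, lo=-3, hi=3))   # reaches (2, -1, 0)
```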
📗 [4 points] When using the Genetic Algorithm, suppose the states are \(\begin{bmatrix} x_{1} & x_{2} & ... & x_{T} \end{bmatrix}\) = , , , . Let \(T\) = , and let the fitness function (not the cost) be \(\mathop{\mathrm{argmax}}_{t \in \left\{0, ..., T\right\}} x_{t} = 1\) with \(x_{0} = 1\) (i.e. the index of the last feature that is 1). What is the reproduction probability of the state with the highest reproduction probability?
📗 Answer: .
📗 [4 points] When using the Genetic Algorithm, suppose the states are \(\begin{bmatrix} x_{1} & x_{2} & ... & x_{T} \end{bmatrix}\) = , , , . Let \(T\) = , and let the fitness function (not the cost) be \(\mathop{\mathrm{argmin}}_{t \in \left\{1, ..., T + 1\right\}} x_{t} = 1\) with \(x_{T + 1} = 1\) (i.e. the index of the first feature that is 1). What is the reproduction probability of the state with the highest reproduction probability?
📗 Answer: .
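📗 Note: the states and \(T\) above are left blank; the reproduction probability of a state is its fitness divided by the total fitness of the population (the lecture convention), as in this sketch with a hypothetical population and the "index of the last feature that is 1" fitness.
```python
def fitness_last_one(x):
    """Index of the last feature that is 1, with x_0 = 1 (so the fitness is 0 if no feature is 1)."""
    last = 0
    for t, value in enumerate(x, start=1):
        if value == 1:
            last = t
    return last

# Hypothetical population of binary states (not the question's states):
population = [[1, 0, 1, 0], [0, 1, 1, 1], [0, 0, 0, 1], [1, 0, 0, 0]]
fitnesses = [fitness_last_one(x) for x in population]        # [3, 4, 4, 1]
total = sum(fitnesses)
probabilities = [f / total for f in fitnesses]               # reproduction probabilities
print(max(probabilities))                                    # 4 / 12
```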
📗 [3 points] Suppose the UCB1 (Upper Confidence Bound) Algorithm is used to select arms in a multi-armed bandit problem, and in round \(t\) = , the arm pulls and empirical means \(\hat{\mu}\) for the arms are summarized in the following table. In period \(t + 1\), an arm is pulled according to the UCB1 Algorithm and the reward is . Compute the updated empirical means of the arms after period \(t + 1\), i.e. the updated \(\hat{\mu}_{1}, \hat{\mu}_{2}, ...\). Use \(c\) = .
Arms arm pulls (\(n_{k}\)) empirical means \(\hat{\mu}_{k}\) upper confidence bounds \(\hat{\mu}_{k} + c \sqrt{2 \dfrac{\log t}{n_{k}}}\)
\(k = 1\)
\(k = 2\)
\(k = 3\)

📗 Answer (comma separated vector): .
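📗 Note: the counts, means, reward, and \(c\) above are left blank; one UCB1 step computes each arm's bound \(\hat{\mu}_{k} + c \sqrt{2 \dfrac{\log t}{n_{k}}}\), pulls the arm with the largest bound, and updates only that arm's empirical mean, as in this sketch with hypothetical numbers.
```python
import math

def ucb1_step(t, pulls, means, c, reward):
    """One UCB1 step: pick the arm with the largest upper confidence bound,
    then update its pull count and empirical mean with the observed reward."""
    bounds = [mu + c * math.sqrt(2 * math.log(t) / n) for mu, n in zip(means, pulls)]
    k = bounds.index(max(bounds))                  # arm pulled in period t + 1
    means[k] = (means[k] * pulls[k] + reward) / (pulls[k] + 1)
    pulls[k] += 1
    return k, means

# Hypothetical values, not the ones from the question's table:
pulls = [5, 3, 2]
means = [0.6, 0.4, 0.7]
k, means = ucb1_step(t=10, pulls=pulls, means=means, c=1.0, reward=1.0)
print(k, means)
```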
📗 [4 points] Consider the following Markov Decision Process. It has two states \(s\), A and B. It has two actions \(a\): move and stay. The state transition is deterministic: "move" moves to the other state, while "stay" stays at the current state. The reward \(r\) is for move (from A and B), for stay (in A and B). Suppose the discount rate is \(\beta\) = .

Find the Q table \(Q_{i}\) after \(i\) = updates of every entry using Q value iteration (\(i = 0\) initializes all values to \(0\)) in the format described by the following table. Enter a two by two matrix.
State \ Action stay move
A ? ?
B ? ?

📗 Answer (matrix with multiple lines, each line is a comma separated vector): .
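📗 Note: the rewards and discount rate above are left blank; the update is \(Q_{i+1}\left(s, a\right) = r\left(s, a\right) + \beta \max_{a'} Q_{i}\left(s', a'\right)\) with deterministic transitions, as in this sketch with hypothetical rewards and \(\beta\).
```python
def q_value_iteration(rewards, beta, iterations):
    """Q value iteration for the two-state MDP (states A, B; actions stay, move).
    Deterministic transitions: stay keeps the state, move switches it."""
    states, actions = ["A", "B"], ["stay", "move"]
    next_state = {("A", "stay"): "A", ("A", "move"): "B",
                  ("B", "stay"): "B", ("B", "move"): "A"}
    Q = {(s, a): 0.0 for s in states for a in actions}   # i = 0 initialization
    for _ in range(iterations):
        # synchronous update: the right-hand side uses the previous Q table
        Q = {(s, a): rewards[(s, a)] + beta * max(Q[(next_state[(s, a)], b)] for b in actions)
             for s in states for a in actions}
    return Q

# Hypothetical rewards and discount rate, not from the question:
rewards = {("A", "stay"): 1.0, ("A", "move"): 0.0,
           ("B", "stay"): 2.0, ("B", "move"): 0.0}
print(q_value_iteration(rewards, beta=0.5, iterations=2))
```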
📗 [3 points] In an infinite horizon MDP (Markov Decision Process), there are \(n\) = states: the initial state \(s_{0}\), and absorbing states \(s_{1}, s_{2}, ..., s_{n-1}\). In state \(s_{0}\), the agent can stay or move to any other state, but in all other (absorbing) states the agent can only choose to stay. The rewards from staying in those states are summarized in the following table. Compute the Q value (under the optimal policy, not from Q learning) \(Q\left(s_{0}, \text{stay}\right)\). Use the discount factor \(\gamma\) = .
State \(s_{0}\) \(s_{1}\) \(s_{2}\) \(s_{3}\) \(s_{4}\)
Reward from stay
Reward from move - - - -

📗 Answer: .
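📗 Note: the rewards and \(\gamma\) above are left blank, but the structure of the computation is fixed. Writing \(r_{k}\) for the stay reward of absorbing state \(s_{k}\), \(r_{0}\) for the stay reward of \(s_{0}\), and \(r_{m}\) for the move reward (hypothetical symbols): an absorbing state has value \(V\left(s_{k}\right) = \dfrac{r_{k}}{1 - \gamma}\), moving gives \(Q\left(s_{0}, \text{move to } s_{k}\right) = r_{m} + \gamma V\left(s_{k}\right)\), and staying gives \(Q\left(s_{0}, \text{stay}\right) = r_{0} + \gamma V\left(s_{0}\right)\), where \(V\left(s_{0}\right) = \max\left\{Q\left(s_{0}, \text{stay}\right), \max_{k} Q\left(s_{0}, \text{move to } s_{k}\right)\right\}\).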
📗 [4 points] Consider a zero-sum sequential move game with Chance. player moves first, then Chance, then . The values of the terminal states are shown in the diagram (they are the values for the Max player). What is the (expected) value of the game (for the Max player)?

📗 Note: in case the diagram is not clear, the probabilities from left to right are: , and the rewards are .
📗 Answer: .
📗 [4 points] Enter the smallest integer value of \(A\) such that \(B\) will be alpha-beta pruned. The Max player moves first. In the case alpha = beta, prune the node. Enter -100 if you think the answer is negative infinity.

📗 Answer: .
📗 [4 points] Enter the largest integer value of \(A\) such that \(B\) will be alpha-beta pruned. The Min player moves first. In the case alpha = beta, prune the node. Enter 100 if you think the answer is infinity.

📗 Answer: .
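📗 Note: the game trees for the two questions above appear only in the diagrams; as a reference for the pruning rule they use (prune once alpha is greater than or equal to beta, including alpha = beta), here is a minimal alpha-beta sketch on a hypothetical tree.
```python
def alphabeta(node, is_max, alpha, beta):
    """Alpha-beta pruning on a tree given as nested lists (leaves are numbers).
    Remaining children are pruned (not visited) once alpha >= beta, which covers
    the 'prune when alpha = beta' convention in the questions above."""
    if not isinstance(node, list):        # leaf
        return node
    value = float("-inf") if is_max else float("inf")
    for child in node:
        v = alphabeta(child, not is_max, alpha, beta)
        if is_max:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if alpha >= beta:                 # remaining siblings are pruned
            break
    return value

# Hypothetical tree, Max moves first: the value is 3, and the leaf 7 is pruned
print(alphabeta([[3, 5], [2, 7]], is_max=True, alpha=float("-inf"), beta=float("inf")))
```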
📗 [3 points] Perform iterated elimination of strictly dominated strategies. Player A's strategies are the rows. The two numbers are (A, B)'s payoffs, respectively. Recall each player wants to maximize their own payoff. Enter the payoff pair that survives the process (i.e. payoffs from rationalizable actions). There should be only one such pair.
A \ B I II III
I
II
III

📗 Answer (comma separated vector): .
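📗 Note: the payoff table above is left blank; as a reference for the procedure, here is a sketch of iterated elimination of strictly dominated (pure) strategies on a hypothetical 2 by 3 bimatrix game (a strategy is strictly dominated if another strategy gives a strictly higher payoff against every remaining opponent strategy).
```python
def iesds(payoff_a, payoff_b):
    """Iterated elimination of strictly dominated (pure) strategies.
    payoff_a[i][j], payoff_b[i][j]: payoffs of the row / column player."""
    rows = set(range(len(payoff_a)))
    cols = set(range(len(payoff_a[0])))
    changed = True
    while changed:
        changed = False
        for i in list(rows):              # is row i strictly dominated by some row k?
            if any(all(payoff_a[k][j] > payoff_a[i][j] for j in cols)
                   for k in rows if k != i):
                rows.discard(i)
                changed = True
        for j in list(cols):              # is column j strictly dominated by some column l?
            if any(all(payoff_b[i][l] > payoff_b[i][j] for i in rows)
                   for l in cols if l != j):
                cols.discard(j)
                changed = True
    return rows, cols

# Hypothetical payoffs (row player A, column player B), not the question's table:
A = [[1, 1, 0],
     [0, 0, 2]]
B = [[0, 2, 1],
     [3, 1, 0]]
rows, cols = iesds(A, B)
print(rows, cols)   # {0} {1}: the surviving profile gives the payoff pair (1, 2)
```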
📗 [2 points] What is the row player's value in a Nash equilibrium of the following zero-sum normal form game? A (row) is the max player, B (col) is the min player. If there are multiple Nash equilibria, use the one with the largest value (to the max player).
A \ B I II III IV
I
II
III

📗 Answer: .
📗 [2 points] What is the row player's value in a Nash equilibrium of the following zero-sum normal form game? A (row) is the max player, B (col) is the min player. If there are multiple Nash equilibria, use the one with the largest value (to the max player).
A \ B I II III
I
II
III
IV

📗 Answer: .
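📗 Note: the payoff tables above are left blank; for a zero-sum matrix game, a pure Nash equilibrium is a saddle point: an entry that is the smallest in its row (the min player cannot improve) and the largest in its column (the max player cannot improve). The sketch below finds pure equilibria of a hypothetical payoff matrix; if no pure equilibrium exists, the value comes from a mixed equilibrium instead.
```python
def pure_nash_values(M):
    """Row player's values at pure Nash equilibria of a zero-sum game.
    M[i][j] is the payoff to the row (max) player.  An entry is an equilibrium
    iff it is the smallest in its row and the largest in its column."""
    values = []
    for i, row in enumerate(M):
        for j, v in enumerate(row):
            if v == min(row) and v == max(M[k][j] for k in range(len(M))):
                values.append(v)
    return values

# Hypothetical payoff matrix for the max player (not the question's table):
M = [[3, 1, 4],
     [2, 0, 6],
     [5, 1, 7]]
print(max(pure_nash_values(M)))   # largest equilibrium value to the max player
```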
📗 [3 points] Consider the standard PD (Prisoner's Dilemma) game in the following table with two prisoners who belong to the same criminal organization, and the criminal organization punishes whoever confesses, which decreases that prisoner's value by \(x\). What is the smallest value of \(x\) so that (Deny, Deny) is a Nash equilibrium?
A \ B Deny Confess
Deny
Confess

📗 Answer: .
📗 [1 point] Please enter any comments, including possible mistakes and bugs with the questions or your answers. If you have no comments, please enter "None": do not leave it blank.
📗 Answer: .

# Grade



# Submission


📗 Please do not modify the content in the above text field: use the "Grade" button to update.


📗 You can save the text in the above text box to a file using the button, or copy and paste it into a file yourself.
📗 You can load your answers from the text (or txt file) in the text box below using the button. The first two lines should be "##m: 2" and "##id: your id", and the format of the remaining lines should be "##1: your answer to question 1" newline "##2: your answer to question 2", etc. Please make sure that your answers are loaded correctly before submitting them.







Last Updated: November 30, 2024 at 4:35 AM