Prev: W13 Next: W14

# Summary

📗 Monday lecture: 5:30 to 8:30, Zoom Link
📗 Office hours: 5:30 to 8:30 Wednesdays (Dune) and Thursdays (Zoom Link)
📗 Personal meeting room: always open, Zoom Link
📗 Quiz (log in with your wisc ID, without "@wisc.edu"): Socrative Link; Regrade request form: Google Form (select Q7).
📗 Math Homework: M8, M9, M10, M11
📗 Programming Homework: P4, P5
📗 Examples, Quizzes, Discussions: Q7, Q8, Q9, Q10, Q11, Q12

# Lectures


📗 Notes
N/A

# Exam Statistics

Exam FA: Mean = 83.31%, Stdev = 9.28
Q 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
MAX 3 3 4 4 4 3 3 3 3 4 3 3 4 4 4 2 4 4 4 4 2 4 4 3 4 4 4 3 8.54 1
PROB 0.35 0.19 0.68 0.39 0.10 0.94 1 0.81 0.77 0.90 0.84 0.97 0.97 0.77 0.71 0.97 0.74 0.87 1 0.81 0.71 0.97 0.90 0.81 0.77 0.84 0.84 1 1 1
RPBI 1.71 0.77 4.25 2.53 0.38 2.31 0 3.21 3.26 3.58 3.11 1.72 2.30 4.35 4.32 1.15 4.36 3.92 0 4.28 2.16 2.30 3.58 3.21 4.35 4.14 3.98 0 0 0


Exam FC
Too few students took the exam to compute the statistics.

PROB is the proportion of students who answered this question correctly.
RPBI is the point-biserial correlation between getting this question correct and the total grade on the exam.
PROB < 0.25 or RPBI < 0 means this question is probably not well designed.
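
A minimal sketch of how PROB and RPBI can be computed, assuming hypothetical per-student data (the 0/1 correctness flags and exam totals below are made up); the table above may report RPBI on a scaled basis rather than as a raw correlation.

```python
# Illustrative only, not the course's grading script: PROB and RPBI for one question.
import statistics

def prob_and_rpbi(correct, totals):
    """correct: list of 0/1 flags for this question; totals: exam totals (same order)."""
    n = len(correct)
    p = sum(correct) / n                                   # PROB: fraction answering correctly
    mean_1 = statistics.mean(t for t, c in zip(totals, correct) if c == 1)
    mean_0 = statistics.mean(t for t, c in zip(totals, correct) if c == 0)
    sd = statistics.pstdev(totals)                         # population standard deviation
    rpbi = (mean_1 - mean_0) / sd * (p * (1 - p)) ** 0.5   # point-biserial correlation
    return p, rpbi

# Example with made-up data: 6 students, 4 of whom answered this question correctly.
print(prob_and_rpbi([1, 1, 0, 1, 0, 1], [90, 85, 60, 75, 55, 95]))
```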

# Summary

📗 Coverage: unsupervised learning, search, and game theory (W9 to W12).
📗 Number of questions: 30
📗 Length: 3 hours
📗 Regular: August 22, 5:30 to 8:30 PM
📗 Make-up: August 24, 5:30 to 8:30 PM
📗 Link to relevant pages:
W8 : M8
W9 : M9
W10 : M10
W11 : M11
Practice: X3, X4, X5, X7

# Details

📗 Slides:
(1) The slides subtitled "Definition" and "Quiz" contain the mathematics and statistics that you are required to know for the exams.
(2) The slides subtitled "Motivation" and "Discussion" contain concepts you should be familiar with, but the specific mathematics will not be tested on the exam.
(3) The slides subtitled "Description" and "Algorithm" are mostly useful for programming homework, not exams.
(4) The slides subtitled "Admin" are not relevant to the course materials.
📗 Questions:
(1) Around a third of the questions will be exactly the same as the homework questions (with a different randomization of parameters); you can practice by solving these homework problems again with someone else's ID (auto-grading will not work if you do not enter an ID).
(2) Around a third of the questions will be similar to past exam or quiz questions (ones covered during the lectures); going over the quiz questions and solving the past exam questions will help.
(3) Around a third of the questions will be new, mostly from topics not covered in the homework; reading the slides will be helpful.
📗 Question types:
All questions will ask you to enter a number, a vector (or a list of options), or a matrix. There will be no drawing or selecting objects on a canvas, and no text entry or essay questions. You will not get hints like the ones in the homework. You can type your answers in a text file directly and submit it on Canvas. If you use the website, you can use the "calculate" button to make sure the expression you entered can be evaluated correctly when graded. Incorrect answers and expressions that cannot be evaluated receive 0; there are no partial marks and no additional penalty for incorrect answers.

# Other Materials

📗 Pre-recorded Videos from 2020
No Lecture

📗 Relevant websites
2022 Online Exams:
F1A-C Permutations: Link
F2A-C Permutations: Link
FB-C Permutations: Link
FA-E Permutations: Link
FB-E Permutations: Link

2021 Online Exams:
F1A-C Permutations: Link
F1B-C Permutations: Link
F2A-C Permutations: Link
F2B-C Permutations: Link

2020 Online Exams:
F1A-C Permutations: Link
F1B-C Permutations: Link
F2A-C Permutations: Link
F2B-C Permutations: Link
F1A-E Permutations: Link
F1B-E Permutations: Link
F2A-E Permutations: Link
F2B-E Permutations: Link

2019 In-person Exams:
Final Version A: File
Version A Answers: CECBC DBBBA BEEDD BCACB CBEED DDCDC ACBCC ECABC
Final Version B: File
Version B Answers: EEAEE AEACE BBDED BDAAA DCEEA CDACA AEAAA CCABB
Sample final: Link
Video going through sample final very quickly: Link

Past exams other professors made:
Professor Zhu: Link
Professor Dyer: Link
Relevant questions:
Midterms: F18Q1,2,3,4,5,6,7,8,9,10,11,12,13,14; F17Q1,2,3,4,5,6,7,8,9,10,11,12,13; F16Q1,2,3,4,5,6,7,8,9,10; F14Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20; F11Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,18,20; F10Q1,2,3,4; F09Q1,3,4,5,6; F08Q1,2,3,5; F06Q1,2,3,4,5,6,7,8,9,10,11,12; F05Q1,2,3,4,5,6,7,8,9,10,11,14,19,20; F19Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32; S18Q1,2,3,4,5,6,7,8,9; S17Q1,2,3,4,5,6,7,8
Final Exams: F17Q1,2,3,4,5,6,7,10,11,12,13,14,15,17,18,19,20,21,22,23,24,25; F16Q1,2,3,4,5,6,7,8,9,10,11,13,14,15,17,18; F14Q1,2,3,4,5,9,10,13,14,15,16,17,19,20; F13Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20; F12Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,20; F10Q1,2,3,4,5,6,10,11,12,13,14,15,16,17,18,19,20; F09Q1,2,3,4,5,6,7,8,10,11,12,13,17,19,20; F08Q1,2,3,4,5,6,7; F06Q1,2,3,4,5,6,10,11,13,14,15,16,17,18,19,20; F05Q1,2,3,4,5,6,10,11,13,14,15,16,17,18,19,20; F19Q6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32; S18Q3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33; S17Q2,3,4,5,6,7,8,9,10


📗 YouTube videos from 2019 to 2021
L17Q1: Link
L17Q2: Link
M8Q6: Link
M8Q8: Link
M9Q2: Link
M9Q7: Link
M10Q1: Link
M10Q4: Link
M10Q5Q6: Link
M12Q1: Link
M12Q5: Link
M12Q7: Link
From Lectures
L16Q1 (UCS): Link
L17Q1 (Hill-climbing SAT): Link
L17Q2 (Genetic Algorithm): Link
Other Final Exam Questions
Q4 (Shape of Quickest IDS): Link
Q5 (Pure against Mixed): Link
Q6 (Switch Lights): Link
Q7 (K Means Cluster Assignment): Link
Q8 (Vaccination Game): Link



# Keywords and Notations

📗 Clustering
📗 Single Linkage: \(d\left(C_{k}, C_{k'}\right) = \displaystyle\min\left\{d\left(x_{i}, x_{i'}\right) : x_{i} \in C_{k}, x_{i'} \in C_{k'}\right\}\), where \(C_{k}, C_{k'}\) are two clusters (set of points), \(d\) is the distance function.
📗 Complete Linkage: \(d\left(C_{k}, C_{k'}\right) = \displaystyle\max\left\{d\left(x_{i}, x_{i'}\right) : x_{i} \in C_{k}, x_{i'} \in C_{k'}\right\}\).
📗 Average Linkage: \(d\left(C_{k}, C_{k'}\right) = \dfrac{1}{\left| C_{k} \right| \left| C_{k'} \right|} \displaystyle\sum_{x_{i} \in C_{k}, x_{i'} \in C_{k'}} d\left(x_{i}, x_{i'}\right)\), where \(\left| C_{k} \right|, \left| C_{k'} \right|\) are the numbers of points in the clusters.
📗 Distortion (Euclidean distance): \(D_{K} = \displaystyle\sum_{i=1}^{n} d\left(x_{i}, c_{k^\star\left(x_{i}\right)}\right)^{2}\), \(k^\star\left(x\right) = \mathop{\mathrm{argmin}}_{k = 1, 2, ..., K} d\left(x, c_{k}\right)\), where \(k^\star\left(x\right)\) is the cluster \(x\) belongs to.
📗 K-Means Gradient Descent Step: \(c_{k} = \dfrac{1}{\left| C_{k} \right|} \displaystyle\sum_{x \in C_{k}} x\).
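
A minimal sketch of the linkage distances and the k-means center update above, assuming Euclidean distance and NumPy (the array shapes and function names are illustrative, not from the programming homework).

```python
# Illustrative clustering helpers: linkage distances between two clusters and one
# k-means center update, using Euclidean distance throughout.
import numpy as np

def linkage_distance(Ck, Cl, kind="single"):
    """Ck, Cl: arrays of shape (n_k, d) and (n_l, d) holding the points of two clusters."""
    D = np.linalg.norm(Ck[:, None, :] - Cl[None, :, :], axis=2)  # all pairwise d(x_i, x_i')
    if kind == "single":
        return D.min()     # single linkage: closest pair
    if kind == "complete":
        return D.max()     # complete linkage: farthest pair
    return D.mean()        # average linkage: mean over all pairs

def kmeans_center_update(points_in_cluster):
    """c_k = average of the points currently assigned to cluster k."""
    return points_in_cluster.mean(axis=0)

# Example with made-up 2D points.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[3.0, 4.0], [5.0, 0.0]])
print(linkage_distance(A, B, "single"), linkage_distance(A, B, "complete"))  # 4.0 5.0
print(kmeans_center_update(A))                                               # [0.5 0. ]
```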

📗 Projection: \(\text{proj}_{u_{k}} x_{i} = \left(\dfrac{u_{k}^\top x_{i}}{u_{k}^\top u_{k}}\right) u_{k}\) with length \(\left\|\text{proj}_{u_{k}} x_{i}\right\|_{2} = \dfrac{u_{k}^\top x_{i}}{u_{k}^\top u_{k}}\), where \(u_{k}\) is a principal direction.
📗 Projected Variance (Scalar form, MLE): \(V = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \left(u_{k}^\top x_{i} - \mu_{k}\right)^{2}\) such that \(u_{k}^\top u_{k} = 1\), where \(\mu_{k} = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} u_{k}^\top x_{i}\).
📗 Projected Variance (Matrix form, MLE): \(V = u_{k}^\top \hat{\Sigma} u_{k}\) such that \(u_{k}^\top u_{k} = 1\), where \(\hat{\Sigma}\) is the covariance matrix of the data: \(\hat{\Sigma} = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \left(x_{i} - \hat{\mu}\right)\left(x_{i} - \hat{\mu}\right)^\top\), \(\hat{\mu} = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} x_{i}\).
📗 New Feature: \(\left(u_{1}^\top x_{i}, u_{2}^\top x_{i}, ..., u_{K}^\top x_{i}\right)^\top\).
📗 Reconstruction: \(x_{i} = \displaystyle\sum_{k=1}^{m} \left(u_{k}^\top x_{i}\right) u_{k} \approx \displaystyle\sum_{k=1}^{K} \left(u_{k}^\top x_{i}\right) u_{k}\) with \(u_{k}^\top u_{k} = 1\).
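
A minimal sketch of the PCA quantities above, assuming NumPy and the MLE (1/n) covariance convention; the data matrix is made up for illustration.

```python
# Illustrative PCA computation: covariance matrix, first principal direction,
# projected variance, new features, and the K = 1 reconstruction.
import numpy as np

X = np.array([[2.0, 0.0], [0.0, 1.0], [4.0, 3.0], [6.0, 4.0]])  # rows are the points x_i
mu = X.mean(axis=0)
Sigma = (X - mu).T @ (X - mu) / len(X)   # covariance with the 1/n (MLE) convention

vals, vecs = np.linalg.eigh(Sigma)       # eigenvectors are returned with unit length
u1 = vecs[:, np.argmax(vals)]            # first principal direction u_1

print(u1 @ Sigma @ u1, vals.max())       # projected variance V = u_1^T Sigma u_1
new_feature = X @ u1                     # u_1^T x_i for each i
recon_K1 = np.outer(new_feature, u1)     # K = 1 reconstruction: (u_1^T x_i) u_1
```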

📗 Uninformed Search
📗 Breadth First Search (Time Complexity): \(T = 1 + b + b^{2} + ... + b^{d}\), where \(b\) is the branching factor (number of children per node) and \(d\) is the depth of the goal state.
📗 Breadth First Search (Space Complexity): \(S = b^{d}\).
📗 Depth First Search (Time Complexity): \(T = b^{D-d+1} + ... + b^{D-1} + b^{D}\), where \(D\) is the depth of the leaves.
📗 Depth First Search (Space Complexity): \(S = \left(b - 1\right) D + 1\).
📗 Iterative Deepening Search (Time Complexity): \(T = d + d b + \left(d - 1\right) b^{2} + ... + 3 b^{d-2} + 2 b^{d-1} + b^{d}\).
📗 Iterative Deepening Search (Space Complexity): \(S = \left(b - 1\right) d + 1\).
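
A minimal sketch that plugs a branching factor \(b\), goal depth \(d\), and maximum depth \(D\) into the time-complexity formulas above (the numbers in the example are arbitrary).

```python
# Illustrative node-count helpers matching the BFS / DFS / IDS formulas above.
def bfs_time(b, d):
    return sum(b ** i for i in range(d + 1))                 # 1 + b + ... + b^d

def dfs_time(b, d, D):
    return sum(b ** i for i in range(D - d + 1, D + 1))      # b^(D-d+1) + ... + b^D

def ids_time(b, d):
    # d + d b + (d-1) b^2 + ... + 2 b^(d-1) + b^d
    return d + sum((d - i + 1) * b ** i for i in range(1, d + 1))

print(bfs_time(2, 3), dfs_time(2, 3, 5), ids_time(2, 3))     # 15 56 25
```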

📗 Informed Search
📗 Admissible Heuristic: \(h : 0 \leq h\left(s\right) \leq h^\star\left(s\right)\), where \(h^\star\left(s\right)\) is the actual cost from state \(s\) to the goal state, and \(g\left(s\right)\) is the actual cost from the initial state to \(s\).
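
A minimal sketch of checking admissibility on a tiny, made-up state space, where h_star holds the actual remaining cost \(h^\star\left(s\right)\) for each state.

```python
# Illustrative admissibility check: h is admissible if 0 <= h(s) <= h*(s) for every state.
h      = {"A": 4, "B": 2, "C": 0}   # hypothetical heuristic values
h_star = {"A": 5, "B": 2, "C": 0}   # hypothetical true costs to the goal
print(all(0 <= h[s] <= h_star[s] for s in h))   # True: h never overestimates
# A* search, which expands states in order of g(s) + h(s), stays optimal when h is admissible.
```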

📗 Local Search
📗 Hill Climbing (Valley Finding), probability of moving from \(s\) to a state \(s'\): \(p = 0\) if \(f\left(s'\right) \geq f\left(s\right)\) and \(p = 1\) if \(f\left(s'\right) < f\left(s\right)\), where \(f\left(s\right)\) is the cost of the state \(s\).
📗 Simulated Annealing, probability of moving from \(s\) to a worse state \(s'\): \(p = e^{- \dfrac{\left| f\left(s'\right) - f\left(s\right) \right|}{T\left(t\right)}}\) if \(f\left(s'\right) \geq f\left(s\right)\) and \(p = 1\) if \(f\left(s'\right) < f\left(s\right)\), where \(T\left(t\right)\) is the temperature at time \(t\).
📗 Genetic Algorithm, probability of being selected as a parent in crossover: \(p_{i} = \dfrac{F\left(s_{i}\right)}{\displaystyle\sum_{j=1}^{n} F\left(s_{j}\right)}\), \(i = 1, 2, ..., n\), where \(F\left(s\right)\) is the fitness of state \(s\).
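
A minimal sketch of the three move/selection probabilities above, with made-up costs, fitness values, and temperature.

```python
# Illustrative local-search probabilities: hill climbing, simulated annealing, genetic algorithm.
import math

def hill_climbing_move_prob(f_s, f_s_new):
    return 1.0 if f_s_new < f_s else 0.0          # move only to a strictly lower cost

def annealing_move_prob(f_s, f_s_new, T):
    if f_s_new < f_s:
        return 1.0                                # always move to a better state
    return math.exp(-abs(f_s_new - f_s) / T)      # worse state accepted with prob e^{-|Δf| / T(t)}

def ga_parent_probs(fitness):
    total = sum(fitness)
    return [F / total for F in fitness]           # p_i = F(s_i) / sum_j F(s_j)

print(annealing_move_prob(3.0, 5.0, T=2.0))       # e^{-1} ≈ 0.368
print(ga_parent_probs([1.0, 3.0, 6.0]))           # [0.1, 0.3, 0.6]
```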

📗 Adversarial Search
📗 Sequential Game (Alpha Beta Pruning): prune the tree if \(\alpha \geq \beta\), where \(\alpha\) is the current value of the MAX player and \(\beta\) is the current value of the MIN player.
📗 Simultaneous Move Game (rationalizable): remove an action \(s_{i}\) of player \(i\) if it is strictly dominated \(F\left(s_{i}, s_{-i}\right) < F\left(s'_{i}, s_{-i}\right)\), for some \(s'_{i}\) of player \(i\) and for all \(s_{-i}\) of the other players.
📗 Simultaneous Move Game (Nash equilibrium): \(\left(s_{i}, s_{-i}\right)\) is a (pure strategy) Nash equilibrium if \(F_{i}\left(s_{i}, s_{-i}\right) \geq F_{i}\left(s'_{i}, s_{-i}\right)\) and \(F_{-i}\left(s_{i}, s_{-i}\right) \geq F_{-i}\left(s_{i}, s'_{-i}\right)\), for all \(s'_{i}, s'_{-i}\), where \(F_{i}\) and \(F_{-i}\) are the payoffs of player \(i\) and of the other players.
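
A minimal sketch of alpha-beta pruning on a small game tree and a brute-force check for pure strategy Nash equilibria in a two-player matrix game; the tree and the payoff matrices are made up.

```python
# Illustrative adversarial-search helpers (not the course solver).
import itertools

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """node is either a number (leaf value) or a list of child nodes."""
    if not isinstance(node, list):
        return node
    value = float("-inf") if maximizing else float("inf")
    for child in node:
        v = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            value, alpha = max(value, v), max(alpha, v)
        else:
            value, beta = min(value, v), min(beta, v)
        if alpha >= beta:       # prune the remaining children
            break
    return value

print(alphabeta([[3, 5], [2, [9, 1]]], maximizing=True))   # 3 (the subtree [9, 1] is pruned)

def pure_nash(payoff1, payoff2):
    """payoff1[i][j], payoff2[i][j]: payoffs of players 1 and 2 when they play (i, j)."""
    n, m = len(payoff1), len(payoff1[0])
    return [(i, j) for i, j in itertools.product(range(n), range(m))
            if all(payoff1[i][j] >= payoff1[k][j] for k in range(n))
            and all(payoff2[i][j] >= payoff2[i][l] for l in range(m))]

# Prisoner's-dilemma-style example: (1, 1), both players' second action, is the only pure equilibrium.
print(pure_nash([[3, 0], [5, 1]], [[3, 5], [0, 1]]))       # [(1, 1)]
```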







Last Updated: April 29, 2024 at 1:11 AM