# Summary

📗 Tuesday to Friday lectures: 1:00 to 2:15, Zoom Link
📗 Saturday review sessions: 5:30 to 8:30, Zoom Link
📗 Personal meeting room: always open, Zoom Link
📗 Quiz (log in with your wisc ID, without "@wisc.edu"): Socrative Link
📗 Math Homework: M8, M9, M10, M11,
📗 Programming Homework: P4, P5,
📗 Examples and Quizzes: Q15, Q16, Q17, Q18, Q19, Q20, Q21, Q22, Q23, Q24,

# Lectures

📗 Slides (before lecture, usually updated on Sunday):
Blank Slides: Part 1, Part 2,
Blank Slides (with blank pages for quiz questions): Part 1, Part 2,
📗 Slides (after lecture, usually updated on Friday):
Blank Slides with Quiz Questions: Part 1, Part 2,
Annotated Slides: Part 1, Part 2,
📗 Review Session: PDF.

📗 My handwriting is really bad; you should copy down your notes from the lecture videos instead of using these.

📗 Notes

# Midterm Statistics

Exam F1A: Mean = 86.05%, Stdev = 16.91

| Q | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MAX | 4 | 4 | 4 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 1 |
| PROB | 0.76 | 0.88 | 0.73 | 0.92 | 0.76 | 0.90 | 0.96 | 0.96 | 0.92 | 0.96 | 0.76 | 0.98 | 0.96 | 1 | 1 |
| RPBI | 6.20 | 6.15 | 5.28 | 4.23 | 5.54 | 4.58 | 4.25 | 4.25 | 5.34 | 4.14 | 6.82 | 3 | 4.25 | 0 | 0 |


Exam F2A: Mean = 85.92%, Stdev = 17.81

| Q | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MAX | 3 | 4 | 4 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 1 |
| PROB | 0.73 | 0.84 | 0.78 | 0.92 | 0.94 | 0.98 | 0.84 | 0.94 | 0.98 | 0.96 | 0.88 | 0.90 | 0.96 | 0.65 | 1 |
| RPBI | 5.06 | 5.87 | 6.55 | 3.87 | 4.52 | 2.82 | 6.39 | 4.18 | 2.83 | 3.69 | 5.72 | 5.34 | 3.80 | 6.44 | 0 |


Exam FB: Mean = 72.08%, Stdev = 15.63

Questions 1 to 15:

| Q | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MAX | 3 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | 3 | 4 | 4 | 4 |
| PROB | 0.58 | 0.17 | 0.17 | 0.50 | 0.75 | 0.58 | 0.58 | 0.67 | 0.42 | 0.75 | 0.33 | 0.50 | 0.75 | 0.42 | 1 |
| RPBI | 0.86 | 0.19 | 0.19 | 0.83 | 0.68 | 0.86 | 0.86 | 0.94 | 0.53 | 1.12 | 0.47 | 0.75 | 1.04 | 0.66 | 0 |

Questions 16 to 30:

| Q | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MAX | 4 | 4 | 4 | 4 | 4 | 2 | 3 | 3 | 4 | 4 | 4 | 4 | 3 | 2 | 1 |
| PROB | 0.67 | 0.83 | 0.58 | 0.58 | 0.92 | 0.67 | 0.83 | 0.25 | 1 | 0.92 | 0.75 | 0.83 | 0.50 | 0.50 | 1 |
| RPBI | 1.25 | 0.80 | 1.15 | 0.94 | 0.73 | 0.63 | 0.93 | 0.25 | 0 | 0.80 | 1.04 | 1.24 | 0.75 | 0.37 | 0 |



PROB is the fraction of students who answered the question correctly.
RPBI is the point-biserial correlation between answering the question correctly and the total grade on the exam.
PROB < 0.25 or RPBI < 0 suggests the question is probably not well-made.
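As a rough illustration (not the script used to grade the exams), PROB and the standard point-biserial correlation could be computed per question as follows. Note the RPBI values in the tables are on a different scale than the textbook point-biserial statistic (which lies in [-1, 1]); this sketch computes the unscaled version, and assumes the question was neither answered correctly by everyone nor by no one.

```python
import math

def prob_and_rpbi(correct, totals):
    """correct: list of 0/1 indicators for one question, one per student.
    totals:  list of total exam scores, one per student.
    Returns (PROB, point-biserial correlation with the total score)."""
    n = len(correct)
    p = sum(correct) / n                                   # PROB
    mean_t = sum(totals) / n
    std_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    # mean total score among students who got this question right
    mean_1 = sum(t for c, t in zip(correct, totals) if c) / sum(correct)
    # point-biserial: (M1 - M) / s * sqrt(p / (1 - p)); undefined if p is 0 or 1
    rpbi = (mean_1 - mean_t) / std_t * math.sqrt(p / (1 - p))
    return p, rpbi
```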

# Summary

📗 Coverage: unsupervised learning + search + game theory W5 to W7.
📗 Number of questions: 30
📗 Length: 2 x 1 hour 15 minutes
📗 Regular: August 18 and August 19, 1:00 to 2:30 PM
📗 Make-up: August 24, 5:30 to 8:30 PM
📗 Link to relevant pages:
W5 : M8
W6 : M9 and M10
W7 : M11
Practice: X3 and X4 and X5 and X7

# Details

📗 Slides:
(1) The slides subtitled "Definition" and "Quiz" contain the mathematics and statistics that you are required to know for the exams.
(2) The slides subtitled "Motivation" and "Discussion" contain concepts you should be familiar with, but the specific mathematics will not be tested on the exam.
(3) The slides subtitled "Description" and "Algorithm" are mostly useful for programming homework, not exams.
(4) The slides subtitled "Admin" are not relevant to the course materials.
📗 Questions:
(1) Around a third of the questions will be exactly the same as the homework questions (with a different randomization of parameters); you can practice by solving these homework problems again with someone else's ID (auto-grading will not work if you do not enter an ID).
(2) Around a third of the questions will be similar to past exam or quiz questions (ones that are covered during the lectures); going over the quiz questions and solving the past exam questions will help.
(3) Around a third of the questions will be new, mostly from topics not covered in the homework; reading the slides will be helpful.
📗 Question types:
All questions will ask you to enter a number, a vector (or list of options), or a matrix. There will be no drawing or selecting objects on a canvas, and no text entry or essay questions. You will not get hints like the ones in the homework. You can type your answers in a text file directly and submit it on Canvas. If you use the website, you can use the "calculate" button to make sure the expression you entered can be evaluated correctly when graded. You will receive 0 for incorrect answers and for expressions that cannot be evaluated, with no partial credit and no additional penalty for incorrect answers.

# Other Materials

📗 Videos Going through Past Exam Questions
X3Q9-10 (PCAs): Link
X3Q11-12 (Hierarchical): Link
X3Q13-15 (K-Means): Link
X4Q1-3 (Search): Link
X4Q4-5 (Informed): Link
X4Q6-7 (Local Search): Link
X4Q8-9 (Genetic): Link
X4Q10-13 (Alpha Beta): Link
X4Q14-15 (Mixed NE): Link
X5Q1-4 (Pure NE): Link

📗 Pre-recorded Videos from 2020
Lecture 23 (Repeated Games): Interactive Lecture (see Canvas Zoom recording)
Lecture 24 (Mechanism Design): Interactive Lecture (see Canvas Zoom recording)

📗 Relevant websites
2022 Online Exams:
F1A-C Permutations: Link
F2A-C Permutations: Link
FB-C Permutations: Link
FA-E Permutations: Link
FB-E Permutations: Link

2021 Online Exams:
F1A-C Permutations: Link
F1B-C Permutations: Link
F2A-C Permutations: Link
F2B-C Permutations: Link

2020 Online Exams:
F1A-C Permutations: Link
F1B-C Permutations: Link
F2A-C Permutations: Link
F2B-C Permutations: Link
F1A-E Permutations: Link
F1B-E Permutations: Link
F2A-E Permutations: Link
F2B-E Permutations: Link

2019 In-person Exams:
Final Version A: File
Version A Answers: CECBC DBBBA BEEDD BCACB CBEED DDCDC ACBCC ECABC
Final Version B: File
Version B Answers: EEAEE AEACE BBDED BDAAA DCEEA CDACA AEAAA CCABB
Sample final: Link
Video going through sample final very quickly: Link

Past exam other professors made:
Professor Zhu: Link
Professor Dyer: Link
Relevant questions:
Midterms: F18Q1,2,3,4,5,6,7,8,9,10,11,12,13,14; F17Q1,2,3,4,5,6,7,8,9,10,11,12,13; F16Q1,2,3,4,5,6,7,8,9,10; F14Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20; F11Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,18,20; F10Q1,2,3,4; F09Q1,3,4,5,6; F08Q1,2,3,5; F06Q1,2,3,4,5,6,7,8,9,10,11,12; F05Q1,2,3,4,5,6,7,8,9,10,11,14,19,20; F19Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32; S18Q1,2,3,4,5,6,7,8,9; S17Q1,2,3,4,5,6,7,8
Final Exams: F17Q1,2,3,4,5,6,7,10,11,12,13,14,15,17,18,19,20,21,22,23,24,25; F16Q1,2,3,4,5,6,7,8,9,10,11,13,14,15,17,18; F14Q1,2,3,4,5,9,10,13,14,15,16,17,19,20; F13Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20; F12Q1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,20; F10Q1,2,3,4,5,6,10,11,12,13,14,15,16,17,18,19,20; F09Q1,2,3,4,5,6,7,8,10,11,12,13,17,19,20; F08Q1,2,3,4,5,6,7; F06Q1,2,3,4,5,6,10,11,13,14,15,16,17,18,19,20; F05Q1,2,3,4,5,6,10,11,13,14,15,16,17,18,19,20; F19Q6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32; S18Q3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33; S17Q2,3,4,5,6,7,8,9,10


📗 YouTube videos from 2019 and 2020
L17Q1: Link
L17Q2: Link
M8Q6: Link
M8Q8: Link
M9Q2: Link
M9Q7: Link
M10Q1: Link
M10Q4: Link
M10Q5Q6: Link
M12Q1: Link
M12Q5: Link
M12Q7: Link
From Lectures
L16Q1 (UCS): Link
L17Q1 (Hill-climbing SAT): Link
L17Q2 (Genetic Algorithm): Link
Other Final Exam Questions
Q4 (Shape of Quickest IDS): Link
Q5 (Pure against Mixed): Link
Q6 (Switch Lights): Link
Q7 (K Means Cluster Assignment): Link
Q8 (Vaccination Game): Link



# Keywords and Notations

📗 Clustering
📗 Single Linkage: \(d\left(C_{k}, C_{k'}\right) = \displaystyle\min\left\{d\left(x_{i}, x_{i'}\right) : x_{i} \in C_{k}, x_{i'} \in C_{k'}\right\}\), where \(C_{k}, C_{k'}\) are two clusters (set of points), \(d\) is the distance function.
📗 Complete Linkage: \(d\left(C_{k}, C_{k'}\right) = \displaystyle\max\left\{d\left(x_{i}, x_{i'}\right) : x_{i} \in C_{k}, x_{i'} \in C_{k'}\right\}\).
📗 Average Linkage: \(d\left(C_{k}, C_{k'}\right) = \dfrac{1}{\left| C_{k} \right| \left| C_{k'} \right|} \displaystyle\sum_{x_{i} \in C_{k}, x_{i'} \in C_{k'}} d\left(x_{i}, x_{i'}\right)\), where \(\left| C_{k} \right|, \left| C_{k'} \right|\) are the number of the points in the clusters.
📗 Distortion (Euclidean distance): \(D_{K} = \displaystyle\sum_{i=1}^{n} d\left(x_{i}, c_{k^\star\left(x_{i}\right)}\right)^{2}\), \(k^\star\left(x\right) = \mathop{\mathrm{argmin}}_{k = 1, 2, ..., K} d\left(x, c_{k}\right)\), where \(k^\star\left(x\right)\) is the index of the cluster \(x\) belongs to.
📗 K-Means Gradient Descent Step: \(c_{k} = \dfrac{1}{\left| C_{k} \right|} \displaystyle\sum_{x \in C_{k}} x\).
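The linkage, distortion, and center-update formulas above can be sketched in Python (a minimal illustration, assuming Euclidean distance and points stored as tuples; function names are mine, not from the course code):

```python
import math

def dist(a, b):
    # Euclidean distance between two points
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(C1, C2):
    # minimum pairwise distance between the two clusters
    return min(dist(x, y) for x in C1 for y in C2)

def complete_linkage(C1, C2):
    # maximum pairwise distance between the two clusters
    return max(dist(x, y) for x in C1 for y in C2)

def average_linkage(C1, C2):
    # mean of all |C1| * |C2| pairwise distances
    return sum(dist(x, y) for x in C1 for y in C2) / (len(C1) * len(C2))

def distortion(points, centers):
    # sum of squared distances from each point to its closest center
    return sum(min(dist(x, c) for c in centers) ** 2 for x in points)

def kmeans_center(cluster):
    # the k-means update step: the center moves to the mean of its cluster
    d = len(cluster[0])
    return tuple(sum(x[j] for x in cluster) / len(cluster) for j in range(d))
```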

📗 Projection: \(\text{proj}_{u_{k}} x_{i} = \left(\dfrac{u_{k}^\top x_{i}}{u_{k}^\top u_{k}}\right) u_{k}\) with length \(\left\|\text{proj}_{u_{k}} x_{i}\right\|_{2} = \dfrac{u_{k}^\top x_{i}}{u_{k}^\top u_{k}}\), where \(u_{k}\) is a principal direction.
📗 Projected Variance (Scalar form, MLE): \(V = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \left(u_{k}^\top x_{i} - \mu_{k}\right)^{2}\) such that \(u_{k}^\top u_{k} = 1\), where \(\mu_{k} = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} u_{k}^\top x_{i}\).
📗 Projected Variance (Matrix form, MLE): \(V = u_{k}^\top \hat{\Sigma} u_{k}\) such that \(u_{k}^\top u_{k} = 1\), where \(\hat{\Sigma}\) is the covariance matrix of the data: \(\hat{\Sigma} = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \left(x_{i} - \hat{\mu}\right)\left(x_{i} - \hat{\mu}\right)^\top\), \(\hat{\mu} = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} x_{i}\).
📗 New Feature: \(\left(u_{1}^\top x_{i}, u_{2}^\top x_{i}, ..., u_{K}^\top x_{i}\right)^\top\).
📗 Reconstruction: \(x_{i} = \displaystyle\sum_{k=1}^{m} \left(u_{k}^\top x_{i}\right) u_{k} \approx \displaystyle\sum_{k=1}^{K} \left(u_{k}^\top x_{i}\right) u_{k}\) with \(u_{k}^\top u_{k} = 1\).
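A minimal Python sketch of the projection, projected-variance, and reconstruction formulas, for given principal directions (it does not compute the eigenvectors themselves; names are illustrative):

```python
def project(x, u):
    # proj_u x = ((u . x) / (u . u)) u; for unit-length u the scale is just u . x
    scale = sum(ui * xi for ui, xi in zip(u, x)) / sum(ui * ui for ui in u)
    return [scale * ui for ui in u]

def projected_variance(X, u):
    # V = (1/n) sum_i (u^T x_i - mu_k)^2, where mu_k is the mean of the u^T x_i
    z = [sum(ui * xi for ui, xi in zip(u, x)) for x in X]  # new features u^T x_i
    m = sum(z) / len(z)
    return sum((zi - m) ** 2 for zi in z) / len(z)

def reconstruct(x, directions):
    # x ~ sum_k (u_k^T x) u_k, for unit-length principal directions u_k
    out = [0.0] * len(x)
    for u in directions:
        c = sum(ui * xi for ui, xi in zip(u, x))
        for j in range(len(x)):
            out[j] += c * u[j]
    return out
```

With all \(m\) directions the reconstruction is exact; keeping only the top \(K\) gives the PCA approximation.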

📗 Uninformed Search
📗 Breadth First Search (Time Complexity): \(T = 1 + b + b^{2} + ... + b^{d}\), where \(b\) is the branching factor (number of children per node) and \(d\) is the depth of the goal state.
📗 Breadth First Search (Space Complexity): \(S = b^{d}\).
📗 Depth First Search (Time Complexity): \(T = b^{D-d+1} + ... + b^{D-1} + b^{D}\), where \(D\) is the depth of the leaves.
📗 Depth First Search (Space Complexity): \(S = \left(b - 1\right) D + 1\).
📗 Iterative Deepening Search (Time Complexity): \(T = d + d b + \left(d - 1\right) b^{2} + ... + 3 b^{d-2} + 2 b^{d-1} + b^{d}\).
📗 Iterative Deepening Search (Space Complexity): \(S = \left(b - 1\right) d + 1\).
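These node-count formulas can be evaluated directly; a small Python sketch using the course's conventions above (function names are mine):

```python
def bfs_time(b, d):
    # 1 + b + b^2 + ... + b^d
    return sum(b ** i for i in range(d + 1))

def bfs_space(b, d):
    return b ** d

def dfs_time(b, D, d):
    # b^(D-d+1) + ... + b^(D-1) + b^D
    return sum(b ** i for i in range(D - d + 1, D + 1))

def dfs_space(b, D):
    return (b - 1) * D + 1

def ids_time(b, d):
    # d + d*b + (d-1)*b^2 + ... + 2*b^(d-1) + b^d
    # coefficient of b^i is (d + 1 - i) for i >= 1
    return d + sum((d + 1 - i) * b ** i for i in range(1, d + 1))

def ids_space(b, d):
    return (b - 1) * d + 1
```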

📗 Informed Search
📗 Admissible Heuristic: \(h : 0 \leq h\left(s\right) \leq h^\star\left(s\right)\), where \(h^\star\left(s\right)\) is the actual cost from state \(s\) to the goal state, and \(g\left(s\right)\) is the actual cost of the initial state to \(s\).
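Admissibility is a pointwise check; a one-line sketch, assuming the true costs-to-goal \(h^\star\) are known (function and dictionary names are illustrative):

```python
def is_admissible(h, h_star):
    # admissible means 0 <= h(s) <= h*(s) for every state s,
    # where h*(s) is the actual cost from s to the goal state
    return all(0 <= h[s] <= h_star[s] for s in h_star)
```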

📗 Local Search
📗 Hill Climbing (Valley Finding): the probability of moving from \(s\) to a state \(s'\) is \(p = 0\) if \(f\left(s'\right) \geq f\left(s\right)\) and \(p = 1\) if \(f\left(s'\right) < f\left(s\right)\), where \(f\left(s\right)\) is the cost of state \(s\).
📗 Simulated Annealing: the probability of moving from \(s\) to a worse state \(s'\) is \(p = e^{- \dfrac{\left| f\left(s'\right) - f\left(s\right) \right|}{T\left(t\right)}}\) if \(f\left(s'\right) \geq f\left(s\right)\) and \(p = 1\) if \(f\left(s'\right) < f\left(s\right)\), where \(T\left(t\right)\) is the temperature at time \(t\).
📗 Genetic Algorithm: the probability of being selected as a parent for cross-over is \(p_{i} = \dfrac{F\left(s_{i}\right)}{\displaystyle\sum_{j=1}^{n} F\left(s_{j}\right)}\), \(i = 1, 2, ..., n\), where \(F\left(s\right)\) is the fitness of state \(s\).
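The acceptance and selection probabilities above can be sketched as (illustrative function names, not course code):

```python
import math

def accept_probability(f_s, f_next, T):
    # simulated annealing: always move to a strictly better (lower-cost) state;
    # move to a worse one with probability e^(-|f(s') - f(s)| / T(t))
    if f_next < f_s:
        return 1.0
    return math.exp(-abs(f_next - f_s) / T)

def selection_probabilities(fitness):
    # genetic algorithm: p_i = F(s_i) / sum_j F(s_j)
    total = sum(fitness)
    return [f / total for f in fitness]
```

As the temperature \(T\left(t\right)\) decreases toward 0, the acceptance probability for worse states shrinks and the process behaves like hill climbing.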

📗 Adversarial Search
📗 Sequential Game (Alpha Beta Pruning): prune the tree if \(\alpha \geq \beta\), where \(\alpha\) is the current value of the MAX player and \(\beta\) is the current value of the MIN player.
📗 Simultaneous Move Game (rationalizable): remove an action \(s_{i}\) of player \(i\) if it is strictly dominated \(F\left(s_{i}, s_{-i}\right) < F\left(s'_{i}, s_{-i}\right)\), for some \(s'_{i}\) of player \(i\) and for all \(s_{-i}\) of the other players.
📗 Simultaneous Move Game (Nash equilibrium): \(\left(s_{i}, s_{-i}\right)\) is a (pure strategy) Nash equilibrium if \(F\left(s_{i}, s_{-i}\right) \geq F\left(s'_{i}, s_{-i}\right)\) and \(F\left(s_{i}, s_{-i}\right) \geq F\left(s_{i}, s'_{-i}\right)\), for all \(s'_{i}, s'_{-i}\).
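A brute-force search for pure-strategy Nash equilibria of a two-player game follows directly from the best-response condition above (payoff matrices and names here are illustrative):

```python
def pure_nash_equilibria(A, B):
    """A[i][j] and B[i][j] are the row and column players' payoffs when the
    row player plays action i and the column player plays action j.
    Returns all action pairs where neither player can profitably deviate."""
    n, m = len(A), len(A[0])
    eqs = []
    for i in range(n):
        for j in range(m):
            row_best = all(A[i][j] >= A[k][j] for k in range(n))
            col_best = all(B[i][j] >= B[i][l] for l in range(m))
            if row_best and col_best:
                eqs.append((i, j))
    return eqs
```

For the prisoner's dilemma with actions (0 = cooperate, 1 = defect), this finds the single equilibrium where both players defect.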






Last Updated: April 29, 2024 at 1:11 AM