
# Lecture 14 Examples

📗 My handwriting is really bad; please take your own notes from the lecture videos instead of relying on these.
Lecture 14 Zoom Annotated (2021): Link
Lecture 14 Pre-recorded Annotated (from 2020, please use with caution): Link

# Review Sessions

📗 The recorded videos of the session are on Canvas (under Files and Zoom).
📗 M13 Shuyao's Notes: Link
📗 M14 Ziqian's Notes: Link
📗 The version with ID "yw": video going through M13: Link, notes: Link
📗 The version with ID "yw": video going through M14: Link, notes: Link
📗 Professor Jerry Zhu's Formula Sheet: Link
📗 Summer 2019 Formula Sheet: Link

# Discussion Sessions

📗 The recorded videos of the session are on Canvas. The notes are not very useful, but here they are anyway:
Week 4 Discussion Notes: Link

# Sharing Solutions on Piazza

📗 Use the sign-up sheet: Google Sheet
📗 You can sign up and post anonymously (anonymous Piazza posts are not anonymous to instructors).
📗 You must post before the official deadline of the homework, and your post must include: (1) a copy or a screenshot of your version of the question, and (2) a detailed solution and an explanation of how you came up with it.
📗 Each good post will receive 0.25 points. Incorrect solutions and/or solutions without explanations will receive no points.

# Q14 Discussion Topic

📗 Please create a follow-up discussion post on Piazza (it is okay to post anonymously). No Canvas submission is required. The grades will be updated on Canvas at the end of the week.
📗 The official deadline is July 25: if you post after this deadline, your Quiz grade on Canvas will not be updated until the midterm or the final exam. You can post and earn the points until August 10.
📗 This discussion is to be completed after the midterm.
📗 Go to the K-means clustering demo: Link, and find an example in which the algorithm converges to a local minimum that is not the global minimum. Also include the globally optimal clustering for the same dataset. Share the two screenshots on Piazza: Link.
📗 Please specify in the post which data set you used (e.g. mine is Gaussian Mixture).
📗 You can use the same one I gave during the lecture, but make sure you can replicate it.
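If you want to sanity-check a candidate example before taking screenshots, the local-minimum behavior is easy to reproduce offline. Below is a minimal sketch of Lloyd's algorithm; the 1-D dataset and both initializations are my own illustration, not taken from the demo:

```python
import numpy as np

def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm on 1-D data; returns final centers and inertia."""
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(np.abs(points[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([points[labels == k].mean() for k in range(len(centers))])
    inertia = np.sum((points - centers[labels]) ** 2)
    return centers, inertia

data = [0, 1, 10, 11, 20, 21]       # three well-separated pairs
# bad initialization: two centers land in the leftmost cluster
_, bad = kmeans(data, [0, 1, 15])
# good initialization: one center per true cluster
_, good = kmeans(data, [0.5, 10.5, 20.5])
print(bad, good)
```

From the bad initialization the algorithm gets stuck with two centers inside the leftmost pair (inertia 101) instead of one center per pair (inertia 1.5), even though both runs converge.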

# Self-Driving Car Demo

This is a demo for a Markov decision process. Moving to a different tile leads to a reward between -1 and 1 (red: worst reward, blue: negative reward, white: zero reward, gray: positive reward, green: best reward).

📗 Environment Settings (interactive controls in the demo):
(1) Number of periods
(2) Number of episodes
(3) Initial state
(4) Discount rate
(5) Example reward matrices and a display option
(6) Reward matrix size
(7) Reward matrix
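The discount rate in setting (4) controls how the per-period rewards are combined into one number. A minimal sketch of the discounted return \(\sum_t \gamma^t r_t\), with a made-up reward sequence for illustration:

```python
def discounted_return(rewards, gamma):
    """Compute sum_t gamma**t * rewards[t] by folding from the last period back."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# a reward of 1 received in the third period, discounted twice: gamma**2 * 1
print(discounted_return([0, 0, 1], 0.9))   # approximately 0.81
```

Folding from the back avoids computing the powers of \(\gamma\) explicitly, which is also how the Bellman recursion \(G_t = r_t + \gamma G_{t+1}\) is usually written.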

📗 Algorithm Settings:
(1) Learning rate
(2) Learning algorithm
(3) Exploration strategy and \(\varepsilon\)
📗 Output:
(1) Q functions
(2) Total reward
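The settings above can be tied together in a short offline sketch of Q-learning with \(\varepsilon\)-greedy exploration. Everything here (the 1-D grid, the reward layout, and the constants) is a hypothetical stand-in for the demo's actual environment:

```python
import random

# A tiny 1-D "road" stand-in for the demo's tile grid: states 0..4,
# actions move left (-1) or right (+1), and entering the rightmost
# tile pays reward 1 (values are my own illustration, not the demo's).
N_STATES = 5
ACTIONS = (-1, +1)
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.2   # discount rate, learning rate, epsilon

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Deterministic transition with walls at both ends."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

random.seed(0)
for episode in range(500):            # number of episodes
    s = random.randrange(N_STATES)    # random initial state
    for t in range(20):               # number of periods per episode
        # epsilon-greedy exploration strategy
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES)]
print(policy)   # the learned greedy policy: move right in every state
```

The greedy policy read off the learned Q function heads toward the rewarding tile from every state, which is what the demo's Q-function output lets you inspect interactively.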

Last Updated: November 18, 2024 at 11:43 PM