📗 For some problems, every state is a solution; some states are simply better than others, as measured by a cost function (sometimes called a score or reward): Wikipedia.
📗 The search strategy will go from state to state, but the path between states is not important.
📗 Local search assumes that similar (nearby) states have similar costs, and searches through the state space by iteratively improving the cost to find an optimal state.
📗 The successor states are called neighbors (or move set).
📗 Hill climbing is the discrete version of gradient descent: Wikipedia.
➩ Start at a random state.
➩ Move to the best neighbor (successor) state.
➩ Stop when all neighbors are worse than the current state (a local minimum); see the sketch after this list.
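📗 The following is a minimal Python sketch of hill climbing for cost minimization (not from the lecture); the names neighbors and cost are hypothetical placeholders for a problem-specific neighbor generator and cost function.

    def hill_climb(initial_state, neighbors, cost):
        # Greedy hill climbing: repeatedly move to the lowest-cost neighbor,
        # stopping when no neighbor improves on the current state.
        current = initial_state
        while True:
            candidates = neighbors(current)
            if not candidates:
                return current
            best = min(candidates, key=cost)
            if cost(best) >= cost(current):
                return current  # all neighbors are worse or equal: local minimum
            current = best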
📗 Random restarts can be used to pick multiple random initial states and find the best local minimum (similar to neural network training).
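📗 One possible way to add random restarts on top of the sketch above (again hypothetical; random_state() stands in for a problem-specific sampler of initial states):

    def hill_climb_with_restarts(random_state, neighbors, cost, restarts=10):
        # Run hill climbing from several random initial states and keep the
        # best (lowest-cost) local minimum found.
        best = None
        for _ in range(restarts):
            result = hill_climb(random_state(), neighbors, cost)
            if best is None or cost(result) < cost(best):
                best = result
        return best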
📗 If there are too many neighbors, first-choice hill climbing randomly generates neighbors one at a time until a better neighbor is found, then moves to it.
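📗 A first-choice variant could look like the sketch below (hypothetical; random_neighbor(s) samples a single random neighbor of s instead of enumerating the whole move set):

    def first_choice_hill_climb(initial_state, random_neighbor, cost, max_tries=1000):
        # Generate random neighbors one at a time and move to the first one
        # that improves the cost; stop after max_tries failed attempts.
        current = initial_state
        while True:
            for _ in range(max_tries):
                candidate = random_neighbor(current)
                if cost(candidate) < cost(current):
                    current = candidate
                    break
            else:
                return current  # no improving neighbor found: treat as local minimum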
📗 Notes and code adapted from the course taught by Professors Jerry Zhu, Yingyu Liang, and Charles Dyer.
📗 Content from note blocks marked "optional" and content from Wikipedia and other demo links are helpful for understanding the materials, but will not be explicitly tested on the exams.