📗 GPT stands for Generative Pre-trained Transformer.
➩ Unsupervised learning (convert text to numerical vectors).
➩ Supervised learning: (1) discriminative (predict answers based on questions), (2) generative (predict next word based on previous word).
➩ Reinforcement learning (update model based on human feedback).
TopHat Discussion
📗 Have you used ChatGPT (or another Large Language Model)? What did you use the LLM for?
➩ Solve homework or exam questions? For CS540, it is possible with some prompt engineering: Link.
➩ Write code for projects? For CS540, it is allowed to use large language models (LLMs) to help with writing code (at the moment, most LLMs cannot write complete projects).
➩ Write stories or create images? In the past, there were CS540 assignments asking students to use earlier versions of GPT to perform these tasks and compare the results with human creations.
📗 A sentence is a sequence of words (tokens). Each unique word token is called a word type. The set of word types is called the vocabulary.
📗 A sentence with length \(d\) can be represented by \(w_1 w_2 \ldots w_d\).
📗 The probability of observing word \(w_t\) at position \(t\) of the sentence can be written as \(P\left(w_t\right)\) (or in statistics notation, \(\mathbb{P}\left\{W_t = w_t\right\}\)).
📗 An n-gram model is a language model that assumes the probability of observing a word at position \(t\) only depends on the words at positions \(t - 1, t - 2, \ldots, t - n + 1\). In statistics notation, \(P\left(w_t | w_{t - n + 1}, \ldots, w_{t - 1}\right)\) (the "\(|\)" is pronounced as "given": \(P\left(a | b\right)\) is "the probability of \(a\) given \(b\)").
📗 Given a training set (many sentences or text documents), the unigram probability \(P\left(w_t = k\right)\) is estimated by \(\hat{P}\left(w_t = k\right) = \dfrac{c_k}{c_1 + c_2 + \ldots + c_m}\), where \(m\) is the size of the vocabulary (and the vocabulary is \(\left\{1, 2, \ldots, m\right\}\)), and \(c_k\) is the number of times the word \(k\) appeared in the training set.
📗 This is called the maximum likelihood estimator because it maximizes the likelihood (probability) of observing the sentences in the training set.
Math Note
📗 Suppose the vocabulary is \(\left\{a, b\right\}\), and \(P\left(a\right) = p\) with \(P\left(b\right) = 1 - p\), based on a training set with \(c_a\) number of \(a\)'s and \(c_b\) number of \(b\)'s. Then the probability of observing the training sentences is \(p^{c_a} \left(1 - p\right)^{c_b}\), which is maximized at \(p = \dfrac{c_a}{c_a + c_b}\) (set the derivative of the log-likelihood \(c_a \log p + c_b \log\left(1 - p\right)\) to 0 and solve for \(p\)).
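📗 Below is a minimal sketch (not from the lecture) of the unigram maximum likelihood estimates computed on a made-up training corpus; the token list and variable names are illustrative assumptions.
```python
from collections import Counter

# Toy training corpus (an illustrative assumption, not the actual script).
tokens = "i am groot i am groot i am groot we are groot".split()

counts = Counter(tokens)        # c_k: number of times word type k appears
total = sum(counts.values())    # total number of tokens in the training set

# Maximum likelihood unigram estimates: P(w = k) = c_k / total.
unigram = {word: c / total for word, c in counts.items()}

# Probability of a new sentence under the unigram model (product of word probabilities).
sentence = "i am groot".split()
prob = 1.0
for w in sentence:
    prob *= unigram.get(w, 0.0)

print(unigram)
print(prob)
```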
TopHat Quiz
📗 [1 points] Given a training set (the script of "Guardians of the Galaxy" for Vin Diesel: Wikipedia), "I am Groot, I am Groot, ... (13 times), ..., I am Groot, We are Groot". What are the maximum likelihood estimates of the unigram model based on this training set? What is the probability of observing a new sentence "I am Groot" based on the estimated unigram model?
📗 The Markov property means \(P\left(w_t | w_1, \ldots, w_{t-1}\right) = P\left(w_t | w_{t-1}\right)\), or the probability distribution of observing \(w_t\) only depends on the previous word \(w_{t-1}\) in the sentence. A visualization of Markov chains: Link.
📗 The maximum likelihood estimator of \(P\left(w_t = k' | w_{t-1} = k\right)\) is \(\hat{P}\left(w_t = k' | w_{t-1} = k\right) = \dfrac{c_{k k'}}{c_k}\), where \(c_{k k'}\) is the number of times the phrase (sequence of two words) \(k k'\) appeared in the training set.
Math Note
📗 Conditional probability is defined as \(P\left(a | b\right) = \dfrac{P\left(a, b\right)}{P\left(b\right)}\), so \(\hat{P}\left(w_t = k' | w_{t-1} = k\right) = \dfrac{\hat{P}\left(w_{t-1} = k, w_t = k'\right)}{\hat{P}\left(w_{t-1} = k\right)} = \dfrac{c_{k k'}}{c_k}\), where \(\hat{P}\left(w_{t-1} = k, w_t = k'\right)\) is the probability of observing the phrase \(k k'\).
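📗 Below is a minimal sketch (not from the lecture) of the bigram maximum likelihood estimates on the same made-up corpus; the counts and names are illustrative assumptions.
```python
from collections import Counter

# Toy training corpus (an illustrative assumption).
tokens = "i am groot i am groot i am groot we are groot".split()

unigram_counts = Counter(tokens)                   # c_k
bigram_counts = Counter(zip(tokens, tokens[1:]))   # c_{k k'}

def bigram_prob(prev, word):
    """Maximum likelihood estimate of P(w_t = word | w_{t-1} = prev)."""
    if unigram_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(bigram_prob("i", "am"))       # count("i am") / count("i")
print(bigram_prob("am", "groot"))   # count("am groot") / count("am")
```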
TopHat Quiz
📗 [1 points] Given a training set (the script of "Guardians of the Galaxy" for Vin Diesel: Wikipedia), "I am Groot, I am Groot, ... (13 times), ..., I am Groot, We are Groot". What are the maximum likelihood estimates of the bigram model based on this training set? What is the probability of observing a new sentence "I am Groot" based on the estimated bigram model?
📗 The bigram probabilities can be stored in an \(m \times m\) matrix called the transition matrix of a Markov chain. The number in row \(k\), column \(k'\) is the probability \(P\left(w_t = k' | w_{t-1} = k\right)\) or the estimated probability \(\hat{P}\left(w_t = k' | w_{t-1} = k\right)\): Link.
📗 Given the initial distribution of word types (as a row vector), the distribution of the next token can be found by multiplying the initial distribution by the transition matrix.
📗 The stationary distribution of a Markov chain is an initial distribution such that all subsequent distributions will be the same as the initial distribution, which means if the transition matrix is \(M\), then the stationary distribution is a distribution \(\pi\) (a row vector) satisfying \(\pi M = \pi\).
Math Note
📗 An alternative way to compute the stationary distribution (if it exists) is by starting with any initial distribution \(\pi_0\) and multiplying it by \(M\) an infinite number of times (that is, \(\pi = \lim_{n \to \infty} \pi_0 M^{n}\)).
📗 It is easier to find powers of diagonal matrices, so if the transition matrix can be written as \(M = Q \Lambda Q^{-1}\), where \(\Lambda\) is a diagonal matrix (off-diagonal entries are 0, diagonal entries are called eigenvalues), and \(Q\) is the matrix whose columns are eigenvectors, then \(M^{n} = Q \Lambda^{n} Q^{-1}\).
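📗 Below is a minimal sketch of both approaches (repeated multiplication and eigenvectors) using NumPy; the \(2 \times 2\) transition matrix values are made-up assumptions.
```python
import numpy as np

# A small transition matrix M (each row sums to 1); the values are illustrative assumptions.
M = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Approach 1: repeated multiplication starting from any initial distribution.
pi = np.array([1.0, 0.0])    # initial distribution over the two word types
for _ in range(1000):
    pi = pi @ M              # next distribution = current distribution times M
print(pi)                    # approximately the stationary distribution

# Approach 2: the stationary distribution is the eigenvector of M^T with eigenvalue 1.
eigenvalues, eigenvectors = np.linalg.eig(M.T)
v = eigenvectors[:, np.argmin(np.abs(eigenvalues - 1.0))].real
print(v / v.sum())           # normalize so the entries sum to 1
```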
📗 The same formula can be applied to trigram models: \(\hat{P}\left(w_t | w_{t-2}, w_{t-1}\right) = \dfrac{c_{w_{t-2} w_{t-1} w_t}}{c_{w_{t-2} w_{t-1}}}\).
📗 In a document, some longer sequences of tokens never appear; for example, when the phrase \(w_{t-2} w_{t-1}\) never appears, the maximum likelihood estimator will be \(\dfrac{0}{0}\) and undefined. As a result, Laplace smoothing (add-one smoothing) is often used: \(\hat{P}\left(w_t | w_{t-2}, w_{t-1}\right) = \dfrac{c_{w_{t-2} w_{t-1} w_t} + 1}{c_{w_{t-2} w_{t-1}} + m}\), where \(m\) is the number of unique words (word types) in the document.
📗 Laplace smoothing can be used for bigram and unigram models too: \(\hat{P}\left(w_t | w_{t-1}\right) = \dfrac{c_{w_{t-1} w_t} + 1}{c_{w_{t-1}} + m}\) for bigram and \(\hat{P}\left(w_t\right) = \dfrac{c_{w_t} + 1}{c + m}\) for unigram, where \(c\) is the total number of tokens in the training set.
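📗 Below is a minimal sketch of add-one smoothing for a bigram model; the corpus and names are illustrative assumptions.
```python
from collections import Counter

# Toy training corpus (an illustrative assumption).
tokens = "i am groot i am groot we are groot".split()
vocab = sorted(set(tokens))
m = len(vocab)                  # number of word types

unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))

def smoothed_bigram(prev, word):
    """Add-one (Laplace) smoothed estimate of P(w_t = word | w_{t-1} = prev)."""
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + m)

print(smoothed_bigram("am", "groot"))   # seen bigram
print(smoothed_bigram("we", "groot"))   # unseen bigram: still a small nonzero probability
```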
📗 A machine learning data set usually contains features (text, images, ... converted to numerical vectors) and labels (categories, converted to integers).
➩ Features: \(x_1, x_2, \ldots, x_n\), where \(x_i = \left(x_{i1}, x_{i2}, \ldots, x_{im}\right)\), and \(x_{ij}\) is called feature (or attribute) \(j\) of instance (or item) \(i\).
➩ Labels: \(y_1, y_2, \ldots, y_n\), where \(y_i\) is the label of item \(i\).
📗 Supervised learning: given a training set \(\left(x_1, y_1\right), \left(x_2, y_2\right), \ldots, \left(x_n, y_n\right)\), estimate a prediction function \(\hat{f}\) to predict \(\hat{y} = \hat{f}\left(x\right)\) based on a new item \(x\).
➩ Generative model estimates \(P\left(x | y\right)\) and \(P\left(y\right)\), and predicts \(P\left(y | x\right)\) using Bayes rule: Wikipedia.
📗 Unsupervised learning: given a training set \(x_1, x_2, \ldots, x_n\) (without labels), put the points into groups (discrete groups or "continuous" lower dimensional representations).
📗 Reinforcement learning: given an environment with states \(s\) and reward \(r\left(s, a\right)\) when action \(a\) is performed in state \(s\), estimate the optimal policy \(\pi\left(s\right)\) that selects the best action in state \(s\) that maximizes the total reward.
📗 Given a document \(i\) and a vocabulary with size \(m\), let \(c_{ij}\) be the number of times word \(j\) appears in document \(i\); the bag of words representation of document \(i\) is \(x_i = \left(x_{i1}, x_{i2}, \ldots, x_{im}\right)\), where \(x_{ij} = \dfrac{c_{ij}}{c_{i1} + c_{i2} + \ldots + c_{im}}\).
📗 Sometimes, the features are not normalized, meaning \(x_{ij} = c_{ij}\) (the raw counts are used).
📗 Term frequency is defined the same way as in the bag of words features, \(tf_{ij} = \dfrac{c_{ij}}{c_{i1} + c_{i2} + \ldots + c_{im}}\).
📗 Inverse document frequency is defined as \(idf_{j} = \log\left(\dfrac{n}{n_j}\right)\), where \(n\) is the number of documents and \(n_j\) is the number of documents that contain word \(j\).
📗 TF-IDF representation of document \(i\) is \(x_i = \left(x_{i1}, x_{i2}, \ldots, x_{im}\right)\), where \(x_{ij} = tf_{ij} \cdot idf_{j}\).
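📗 Below is a minimal sketch computing bag of words and TF-IDF features for three made-up documents; the exact tokenization and titles are illustrative assumptions, not the quiz answer.
```python
import math
from collections import Counter

# Illustrative documents (assumptions for the sketch).
docs = ["guardians of the galaxy",
        "guardians of the galaxy vol 2",
        "guardians of the galaxy vol 3"]
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})
n = len(docs)

def bag_of_words(doc):
    """Normalized bag of words: x_ij = c_ij / (total number of words in document i)."""
    counts = Counter(doc)
    return [counts[w] / len(doc) for w in vocab]

# Inverse document frequency: idf_j = log(n / n_j).
idf = {w: math.log(n / sum(1 for doc in tokenized if w in doc)) for w in vocab}

def tf_idf(doc):
    """TF-IDF features: x_ij = tf_ij * idf_j."""
    return [tf_j * idf[w] for tf_j, w in zip(bag_of_words(doc), vocab)]

for doc in tokenized:
    print(tf_idf(doc))
```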
TopHat Quiz
📗 [1 points] Given three documents "Guardians of the Galaxy", "Guardians of the Galaxy Vol. 2", "Guardians of the Galaxy Vol. 3", compute the bag of words features and the TF-IDF features of the 3 documents.
📗 If the documents are labeled, then a supervised learning task is: given a training set of document features (for example, bag of words, TF-IDF) and their labels, estimate a function that predicts the label for new documents.
➩ Given emails, predict whether they are spam or ham.
➩ Given comments, predict whether they are offensive or not.
➩ Given reviews, predict whether they are positive or negative.
➩ Given essays, predict the grade A, B, ... or F.
➩ Given documents, predict which language each one is written in.
📗 If the training set is \(\left(x_1, y_1\right), \left(x_2, y_2\right), \ldots, \left(x_n, y_n\right)\), where \(x_i\) are features of the documents, and \(y_i\) are labels, then the problem is to estimate \(P\left(y | x\right)\), and given a new document \(x\), the predicted label \(\hat{y}\) can be the \(y\) that maximizes \(\hat{P}\left(y | x\right)\).
📗 Naive Bayes classifier is a simple Bayesian network that assumes the features are independent given the label: Wikipedia.
📗 The key assumption is the (conditional) independence assumption: \(P\left(x_1, x_2, \ldots, x_m | y\right) = P\left(x_1 | y\right) P\left(x_2 | y\right) \cdots P\left(x_m | y\right)\), where \(x_1, \ldots, x_m\) are the features of a single item.
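📗 Below is a minimal sketch of the naive Bayes prediction rule (Bayes rule plus the independence assumption) for a spam example; the conditional probability values are made-up assumptions, not the quiz's CPTs.
```python
# Made-up probabilities (assumptions for illustration only).
p_spam = 0.4                                                  # P(y = spam)
p_word_given_spam = {"cash": 0.7, "free": 0.6, "now": 0.5}    # P(x_j = 1 | spam)
p_word_given_ham  = {"cash": 0.1, "free": 0.2, "now": 0.3}    # P(x_j = 1 | ham)

def posterior_spam(features):
    """P(spam | x) by Bayes rule, multiplying P(x_j | y) under the naive assumption."""
    like_spam, like_ham = p_spam, 1 - p_spam
    for word, present in features.items():
        ps, ph = p_word_given_spam[word], p_word_given_ham[word]
        like_spam *= ps if present else (1 - ps)
        like_ham *= ph if present else (1 - ph)
    return like_spam / (like_spam + like_ham)    # normalize over the two labels

print(posterior_spam({"cash": True, "free": False, "now": True}))
```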
TopHat Quiz
(Past Exam Question) ID:
📗 [4 points] Consider the problem of detecting if an email message is spam. Say we use four random variables to model this problem: a binary class variable indicating if the message is spam, and three binary feature variables indicating whether the message contains "Cash", "Free", "Now". We use a Naive Bayes classifier with the associated CPTs (Conditional Probability Tables):
📗 There are other common Naive Bayes models including multinomial naive Bayes (used when the features are bag of words without normalization) and Gaussian naive Bayes (used when the features are continuous).
📗 If the naive Bayes independence assumption is relaxed, the resulting more general model is called a Bayesian network (or Bayes network).
Additional Note (Optional)
📗 If the features are bag of words (without normalization), then a common model of \(P\left(x | y\right)\) is the multinomial model with unigram probabilities for each label: \(P\left(x | y\right) \propto p_{1|y}^{x_1} p_{2|y}^{x_2} \cdots p_{m|y}^{x_m}\), where \(p_{j|y}\) is the unigram probability that word \(j\) appears in a document with label \(y\).
➩ A special case when \(x_j\) is binary, \(0\) or \(1\), for example, whether a document contains a word type, is called Bernoulli naive Bayes.
➩ Technically, in the multinomial distribution, \(x_1, x_2, \ldots, x_m\) are not independent due to the constraint that \(x_1 + x_2 + \ldots + x_m\) equals the total number of words in the document, but the multinomial naive Bayes model is still considered "naive".
➩ Multinomial naive Bayes is considered a linear model since the log posterior distribution is linear in the features: \(\log \hat{P}\left(y | x\right) = b + x_1 \log p_{1|y} + x_2 \log p_{2|y} + \ldots + x_m \log p_{m|y}\), where \(b\) is some constant.
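📗 Below is a minimal sketch of this linear scoring rule for multinomial naive Bayes; the unigram probabilities and prior are made-up assumptions.
```python
import numpy as np

# Made-up unigram probabilities p_{j|y} for two labels and three word types (assumptions).
p_given_spam = np.array([0.5, 0.3, 0.2])
p_given_ham  = np.array([0.2, 0.3, 0.5])
log_prior = {"spam": np.log(0.4), "ham": np.log(0.6)}

def score(x, label):
    """Log posterior up to a constant: log P(y) + sum_j x_j * log p_{j|y} (linear in x)."""
    p = p_given_spam if label == "spam" else p_given_ham
    return log_prior[label] + np.dot(x, np.log(p))

x = np.array([3, 1, 0])    # bag of words counts (not normalized)
print(max(("spam", "ham"), key=lambda label: score(x, label)))   # predicted label
```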
📗 If the features are continuous (not binary or integer counts), then a common model is the Gaussian naive Bayes model: \(P\left(x | y\right) = P\left(x_1 | y\right) P\left(x_2 | y\right) \cdots P\left(x_m | y\right)\), where \(P\left(x_j | y\right) = \dfrac{1}{\sqrt{2 \pi \sigma_{j|y}^{2}}} \exp\left(-\dfrac{\left(x_j - \mu_{j|y}\right)^{2}}{2 \sigma_{j|y}^{2}}\right)\), \(\mu_{j|y}\) is the mean of feature \(j\) for documents with label \(y\), and \(\sigma_{j|y}^{2}\) is the variance.
➩ The maximum likelihood estimate of \(\mu_{j|y}\) is the sample mean of feature \(j\) for documents with label \(y\), and the maximum likelihood estimate of \(\sigma_{j|y}^{2}\) is the sample variance.
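📗 Below is a minimal sketch of Gaussian naive Bayes with maximum likelihood estimates (per-label sample means and variances); the feature matrix and labels are made-up assumptions.
```python
import numpy as np

# Made-up continuous features and labels (assumptions for the sketch).
X = np.array([[1.0, 2.0], [1.2, 1.8], [3.0, 4.0], [3.2, 4.1]])
y = np.array([0, 0, 1, 1])

# Maximum likelihood estimates: per-label sample mean and (population) variance of each feature.
means = {c: X[y == c].mean(axis=0) for c in (0, 1)}
variances = {c: X[y == c].var(axis=0) for c in (0, 1)}
priors = {c: np.mean(y == c) for c in (0, 1)}

def log_posterior(x, c):
    """log P(y = c) + sum_j log N(x_j; mu_{j|c}, sigma^2_{j|c}), up to a shared constant."""
    mu, var = means[c], variances[c]
    log_likelihood = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return np.log(priors[c]) + log_likelihood

x_new = np.array([1.1, 2.1])
print(max((0, 1), key=lambda c: log_posterior(x_new, c)))   # predicted label
```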
📗 If the naive Bayes independence assumption is relaxed, the resulting model is called a Bayesian network (or Bayes network). Some examples of Bayesian networks: Wikipedia, Link, Link, Link.
📗 Notes and code adapted from the course taught by Professors Jerry Zhu, Yingyu Liang, and Charles Dyer.
📗 Please use Ctrl+F5 or Shift+F5 or Shift+Command+R or Incognito mode or Private Browsing to refresh the cached JavaScript.
📗 If you missed the TopHat quiz questions, please submit the form: Form.