The 2021 madPL Affiliates Program Events

 

Zoom link: https://uwmadison.zoom.us/j/96339695501?pwd=VzZ4M01xUldSQzVhc1REdG94RHhFZz09

 

Schedule: February 2nd (all times Central)

 

Erik Meijer (Facebook) will give a talk at 11:00am, and talks from students in the PL group will start at 12:00pm.

Time          Presenter                   Title

11:00-12:00   Erik Meijer (Facebook)      A Pure Functional Programmer’s Perspective on The Calculus of Variations
12:00-12:08   Anna Meyer                  Certifying Robustness to Programmable Data Bias in Decision Trees
12:08-12:16   Yuhao Zhang                 Robustness to Programmable String Transformations in NLP Models
12:16-12:24   Jordan Henkel               Data Science on Code
12:24-12:32   Amanda Xu, Abtin Molavi     Synthesizing a Quantum Compiler
12:32-12:40   Keith Johnson               Generalized Top-Down Semantics-Guided Synthesis


Name: Erik Meijer

Title: A Pure Functional Programmer’s Perspective on The Calculus of Variations

Abstract:

The goal of deep learning is to find an intensional function f, expressed as a “neural network” of weights and activation functions, that minimizes a loss function over a set of input/output pairs. This training data serves as an extensional representation of the desired function. The goal of the calculus of variations is to find a function f that minimizes a higher-order function H[f] that typically represents the modeler’s understanding of the physical world.

 

In physics applications, H[f] = ∫L(x, f(x), f’(x))dx is defined in terms of a Lagrangian L that relates the kinetic and potential energy of the system being modelled. In that case, the function f that minimizes H[f] describes the natural evolution of the system over time. But the calculus of variations is not limited to physics: it also covers the deep learning case by defining H[f] = ∑ᵢ loss(f(xᵢ) − yᵢ) using a standard loss function such as squared error. Variational calculus has many further applications in computer graphics, economics, and optimal control. Hence there are plenty of reasons for computer scientists to explore what is often considered one of the most beautiful branches of mathematics, one that, as we will show, has many deep ties to pure functional programming.

 

As Euler and Lagrange showed in 1755, to find stationary values of higher-order functions like H[f] = ∫L(x, f(x), f’(x))dx, we need to extend multivariate calculus from finite vectors to infinite vectors, in other words, functions. In many cases it is possible to solve H[f] analytically, but to search for stationary values of H[f] numerically, as we do for neural nets, we must perform gradient descent in function space. To implement this efficiently, we rely on a concrete representation of the function we are searching for, such as a polynomial approximation or a neural net over a set of weights, and implement function updates by updating the weights of that representation. This closes the circle with training neural nets using first-order gradient descent.
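
For readers who like to see this concretely, here is a hedged sketch in Python (my own illustration, not material from the talk): the unknown function is represented by a small set of weights, H[f] = ∫L(x, f(x), f’(x))dx is discretized on a grid, and gradient descent is performed on the weights. The trial representation and the Dirichlet-energy Lagrangian below are assumptions chosen so the answer is easy to check; a neural net and a physical Lagrangian would play the same roles.

    import numpy as np

    # Toy trial representation (an assumption for this sketch; a neural net would
    # play the same role). Boundary conditions f(0) = 0 and f(1) = 1 are built in:
    #   f(x) = x + x*(1 - x)*(w0 + w1*x)
    def f(w, x):
        return x + x*(1 - x)*(w[0] + w[1]*x)

    def fprime(w, x):
        return 1 + (1 - 2*x)*(w[0] + w[1]*x) + x*(1 - x)*w[1]

    # Toy Lagrangian (also an assumption): L = f'(x)^2 / 2, the Dirichlet energy.
    # With these boundary conditions the minimizing function is the straight line
    # f(x) = x, i.e. weights w = (0, 0).
    xs = np.linspace(0.0, 1.0, 201)

    def H(w):
        dx = xs[1] - xs[0]
        L = 0.5 * fprime(w, xs)**2        # Lagrangian sampled on the grid
        return np.sum(L[:-1]) * dx        # left Riemann sum approximating ∫ L dx

    # "Gradient descent in function space", implemented as gradient descent on the
    # weights of the concrete representation (central differences keep it short).
    def grad(func, w, eps=1e-6):
        g = np.zeros_like(w)
        for i in range(len(w)):
            wp, wm = w.copy(), w.copy()
            wp[i] += eps; wm[i] -= eps
            g[i] = (func(wp) - func(wm)) / (2*eps)
        return g

    w = np.array([1.0, -2.0])             # arbitrary starting guess
    for _ in range(500):
        w = w - 1.0 * grad(H, w)

    print("weights:", w)                  # approaches (0, 0), i.e. f(x) ≈ x
    print("H[f]  =", H(w))                # approaches ∫ (1/2)·1² dx = 1/2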

 

In this fun & informal talk we will develop the elementary calculus of variations via nilpotent infinitesimals and show that everything we know from regular calculus on finite vectors carries over directly to variational calculus on infinite vectors/functions. We will not assume any knowledge beyond basic high-school calculus and straightforward equational reasoning.
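
As a taste of what “calculus via nilpotent infinitesimals” can look like in code, here is a hedged Python sketch (my illustration; the talk may develop the idea quite differently) of dual numbers, i.e. values a + b·ε with ε² = 0. Evaluating an ordinary program at x + ε yields f(x) + f’(x)·ε, so the derivative falls out of plain arithmetic.

    # Minimal dual-number sketch (an illustration, not the talk's construction).
    class Dual:
        def __init__(self, a, b=0.0):      # represents a + b·ε
            self.a, self.b = a, b

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.a + other.a, self.b + other.b)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # (a1 + b1·ε)(a2 + b2·ε) = a1·a2 + (a1·b2 + b1·a2)·ε, since ε² = 0
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

        __rmul__ = __mul__

        def __repr__(self):
            return f"{self.a} + {self.b}·ε"

    def f(x):
        return 3*x*x + 2*x + 1             # f'(x) = 6x + 2

    eps = Dual(0.0, 1.0)                   # the nilpotent infinitesimal ε
    print(f(Dual(5.0) + eps))              # 86.0 + 32.0·ε, i.e. f(5) = 86, f'(5) = 32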


Name: Anna Meyer

Title: Certifying Robustness to Programmable Data Bias in Decision Trees

Abstract:

Datasets are biased by various factors: societal inequities, human biases, under-representation of minorities, etc. In this talk, I will present the concept of data-bias robustness: the idea that machine learning algorithms should be robust to small changes in the training data. This is a challenging problem, since it entails learning models for a large, or even infinite, number of datasets and ensuring that they all produce the same prediction for a given input. I will show how to prove data-bias robustness for decision-tree learning algorithms by using a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that every dataset produces the same prediction for a specific test point. Finally, I will describe results from evaluating our approach on a variety of bias models (i.e., sources of bias) and datasets.
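
To make the property being certified concrete, here is a hedged brute-force sketch in Python (my illustration, not the symbolic technique from the talk): under an assumed bias model in which up to k training labels may be flipped, and an assumed learner (a depth-1 decision stump on a one-dimensional feature), we ask whether the prediction for a test point is the same for every reachable dataset. The symbolic approach certifies this without enumerating datasets.

    from itertools import combinations

    def train_stump(xs, ys):
        """Return (threshold, left_label, right_label) maximizing training accuracy."""
        best = None
        srt = sorted(xs)
        candidates = [min(xs) - 1.0] + [(a + b) / 2 for a, b in zip(srt, srt[1:])]
        for t in candidates:
            for left, right in [(0, 1), (1, 0), (0, 0), (1, 1)]:
                acc = sum((left if x <= t else right) == y for x, y in zip(xs, ys))
                if best is None or acc > best[0]:
                    best = (acc, (t, left, right))
        return best[1]

    def predict(stump, x):
        t, left, right = stump
        return left if x <= t else right

    def is_bias_robust(xs, ys, test_x, k):
        """Check prediction invariance over all datasets with at most k labels flipped."""
        base = predict(train_stump(xs, ys), test_x)
        for r in range(1, k + 1):
            for idxs in combinations(range(len(ys)), r):
                flipped = [1 - y if i in idxs else y for i, y in enumerate(ys)]
                if predict(train_stump(xs, flipped), test_x) != base:
                    return False
        return True

    xs = [0.1, 0.3, 0.4, 0.7, 0.8, 0.9]
    ys = [0,   0,   0,   1,   1,   1]
    print(is_bias_robust(xs, ys, test_x=0.2, k=1))   # True: one flipped label never changes the prediction at 0.2
    print(is_bias_robust(xs, ys, test_x=0.55, k=1))  # False: points near the learned split are fragile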


Name: Yuhao Zhang

Title: Robustness to Programmable String Transformations in NLP Models

Abstract:

Deep neural networks for natural language processing tasks are vulnerable to adversarial input perturbations. I will introduce a versatile language for programmatically specifying string transformations (e.g., insertions, deletions, substitutions, and swaps) that are relevant to the task at hand. I will then present A3T, an approach to adversarially training models that are robust to such user-defined string transformations. A3T combines the advantages of search-based techniques for adversarial training with those of abstraction-based techniques. Specifically, we show how to decompose a set of user-defined string transformations into two component specifications, one that benefits from search and one that benefits from abstraction. Finally, I will present ARC, an approach that combines memoization and abstraction to certify the robustness of LSTMs (and extensions of LSTMs) and to train models that can be efficiently certified.
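
As a hedged illustration of what a programmable perturbation space looks like (this is not A3T or ARC, and the transformations and synonym table below are assumptions chosen for the example), here is a Python sketch that composes user-defined string transformations under a budget and enumerates the resulting space:

    # Assumed synonym table, purely for illustration.
    SYNONYMS = {"good": ["great", "fine"], "movie": ["film"]}

    def substitute(tokens):
        """All sequences obtained by replacing one token with a listed synonym."""
        out = set()
        for i, tok in enumerate(tokens):
            for rep in SYNONYMS.get(tok, []):
                out.add(tokens[:i] + (rep,) + tokens[i+1:])
        return out

    def swap_adjacent(tokens):
        """All sequences obtained by swapping one pair of adjacent tokens."""
        out = set()
        for i in range(len(tokens) - 1):
            t = list(tokens)
            t[i], t[i+1] = t[i+1], t[i]
            out.add(tuple(t))
        return out

    def perturbation_space(tokens, transformations, budget):
        """All sequences reachable by applying at most `budget` transformation steps."""
        frontier = {tuple(tokens)}
        reached = set(frontier)
        for _ in range(budget):
            frontier = {s for seq in frontier for T in transformations for s in T(seq)}
            reached |= frontier
        return reached

    space = perturbation_space("a good movie".split(), [substitute, swap_adjacent], budget=2)
    print(len(space), "perturbed sentences, e.g.:", sorted(space)[:3])
    # Checking a model on every element quickly blows up; the techniques in the talk
    # combine search, abstraction, and memoization to avoid exhaustive enumeration.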


Name: Jordan Henkel

Title: Data Science on Code

Abstract:

Whether you think the next decade of software development will be traditional or “Software 2.0,” one truth remains: there’s a lot of code out there. In this short talk, I’ll present my vision for Data Science on Code, a new paradigm focused on kick-starting the next decade of software development by borrowing from the successes of the Data Science community. With Data Science on Code, I hope to make interacting with, understanding, maintaining, and debugging large amounts of code easier for everyone.


Name: Amanda Xu and Abtin Molavi

Title: Synthesizing a Quantum Compiler

Abstract:

Decades ago, examples like Shor’s ingenious factorization algorithm demonstrated the theoretical potential of quantum computing to revolutionize the space of tractable algorithms. Today, we are in the midst of a transition from theory to reality, as physical quantum computing architectures evolve rapidly, promising to unlock a deeper understanding of the natural world. This new development in hardware demands a robust software foundation, including optimizing quantum compilers. We aim to accelerate the iterative development of new quantum architectures by synthesizing quantum compilers. We will describe our ongoing work on synthesizing a quantum compiler in two stages. First, we will discuss the optimization phase, which will automatically discover semantics-preserving rewrite rules for a given quantum gate set, avoiding the tedious work of manually deriving optimizations. Then we will turn to the mapping phase, wherein additional instructions are inserted into a program in order to satisfy the physical constraints of a particular architecture.
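
As a hedged aside (my illustration, not the synthesis procedure from the talk), the correctness condition behind “semantics-preserving rewrite rules” is easy to state in code: two gate sequences are interchangeable when they denote the same unitary, up to a global phase. A small NumPy check for single-qubit candidate rules:

    import numpy as np

    # Standard single-qubit gates.
    H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    S = np.array([[1, 0], [0, 1j]], dtype=complex)   # phase gate, S·S = Z

    def denote(gates):
        """Unitary of a gate sequence, applied left to right."""
        u = np.eye(2, dtype=complex)
        for g in gates:
            u = g @ u
        return u

    def same_up_to_phase(u, v, tol=1e-9):
        """True iff u = e^{iθ}·v for some global phase θ."""
        w = v.conj().T @ u                           # should be a phase times the identity
        return np.allclose(w, w[0, 0] * np.eye(2), atol=tol) and np.isclose(abs(w[0, 0]), 1, atol=tol)

    # Candidate rewrite rules, checked numerically:
    print(same_up_to_phase(denote([H, Z, H]), denote([X])))   # True:  H·Z·H ≡ X
    print(same_up_to_phase(denote([S, S]),    denote([Z])))   # True:  S·S   ≡ Z
    print(same_up_to_phase(denote([H, X, H]), denote([X])))   # False: H·X·H ≡ Z, not X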


Name: Keith Johnson

Title: Generalized Top-Down Semantics-Guided Synthesis

Abstract:

Semantics-guided synthesis (SemGuS) is a general synthesis framework that allows a user to provide both the syntax and semantics for the target language. However, this diversity of potential grammars and semantics makes it difficult to build a performant general-purpose SemGuS synthesizer.

We tackle this challenge by drawing from successes in recent top-down propagation synthesis algorithms and generalizing these algorithms into a common algorithmic framework. This generalization and separation of mechanism from policy allows techniques from disparate algorithms to be combined and specialized for individual synthesis problems.
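
As a hedged sketch of the kind of search being generalized (this is plain top-down enumerative synthesis over a toy grammar of my choosing, not the SemGuS framework or the algorithmic framework from the talk), consider:

    from collections import deque

    # Assumed toy grammar:  E ::= x | 0 | 1 | (E + E) | (E * E)
    # Partial programs contain the nonterminal "E"; expansion proceeds top-down,
    # and complete programs are checked against input/output examples.
    PRODUCTIONS = ["x", "0", "1", "(E + E)", "(E * E)"]

    def expand(partial):
        """Replace the leftmost nonterminal E with each production."""
        i = partial.find("E")
        return [partial[:i] + p + partial[i + 1:] for p in PRODUCTIONS]

    def satisfies(program, examples):
        # eval is acceptable here because programs come only from the toy grammar above.
        return all(eval(program, {}, {"x": x}) == y for x, y in examples)

    def synthesize(examples, max_expansions=100_000):
        """Breadth-first, top-down search from the start nonterminal."""
        queue = deque(["E"])
        expansions = 0
        while queue and expansions < max_expansions:
            partial = queue.popleft()
            if "E" not in partial:
                if satisfies(partial, examples):
                    return partial
                continue
            queue.extend(expand(partial))
            expansions += 1
        return None

    # Target behaviour: f(x) = x*x + 1
    print(synthesize([(0, 1), (1, 2), (2, 5), (3, 10)]))   # e.g. "((x * x) + 1)" or an equivalent program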