Publications

2024
Contracting with a Learning Agent

Many real-life contractual relations differ completely from the clean, static model at the heart of principal-agent theory. Typically, they involve repeated strategic interactions of the principal and agent, taking place under uncertainty and over time. While complex dynamic strategies are appealing in theory, players seldom use them in practice, often preferring to circumvent complexity and approach uncertainty through learning. We initiate the study of repeated contracts with a learning agent, focusing on agents who achieve no-regret outcomes.

TLDR: We study repeated contracts with a learning agent and provide an optimal solution to a canonical contract setting.
Toolbox: Game Theory, Contract Theory, No-Regret Learning

Guru Guruganesh*, Yoav Kolumbus*, Jon Schneider*, Inbal Talgam-Cohen*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*, Joshua R. Wang*, S. Matthew Weinberg*

38th Conference on Neural Information Processing Systems (NeurIPS)

*:Alphabetical order; Equal contribution

[Presentation]



2024
Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds

In many machine learning applications, we aim to find a point of balance or equilibrium on curved spaces known as Riemannian manifolds. In this work, we make a significant advance in the analysis of the seminal Riemannian gradient descent (RGD) method: we show, for the first time, that RGD converges to Nash equilibria at a linear rate in strongly monotone settings, unaffected by distance distortion or the curvature of the space.
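For intuition, here is a minimal single-player sketch (not the paper's multi-agent Nash setting) of one Riemannian gradient descent step on the unit sphere: the Euclidean gradient is projected onto the tangent space and the update follows the sphere's exponential map; the toy objective and step size are illustrative.

```python
import numpy as np

def rgd_step_sphere(x, euclid_grad, eta):
    """One Riemannian gradient descent step on the unit sphere (illustrative sketch)."""
    # Riemannian gradient = projection of the Euclidean gradient onto the
    # tangent space T_x = {v : <v, x> = 0} of the sphere at x.
    g = euclid_grad - np.dot(euclid_grad, x) * x
    v = -eta * g                              # descent direction in the tangent space
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:                        # (numerically) stationary point
        return x
    # Exponential map of the sphere: move along the geodesic from x in direction v.
    return np.cos(norm_v) * x + np.sin(norm_v) * (v / norm_v)

# Toy usage: minimize the linear objective f(x) = <c, x> over the sphere.
rng = np.random.default_rng(0)
c = rng.normal(size=5)
x = rng.normal(size=5)
x /= np.linalg.norm(x)
for _ in range(500):
    x = rgd_step_sphere(x, c, eta=0.1)        # Euclidean gradient of <c, x> is c
print("alignment with the minimizer -c/|c|:", np.dot(x, -c / np.linalg.norm(c)))
```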

TLDR: Riemannian GD can compute NE with a step size that is agnostic to the underlying curvature.
Toolbox: Differential Geometry, Game Theory, Convex Optimization, Potential Analysis

Yang Cai*, Michael I. Jordan*, Tianyi Lin*, Argyris Oikonomou*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

Conference on Neural Information Processing Systems (NeurIPS) (Submitted)

*:Alphabetical order; Equal contribution



2024
Smoothed Complexity of SWAP in Local Graph Partitioning

Local search is a widely used method for tackling optimization problems. However, even though it often works well in real-world scenarios, worst-case theoretical analyses sometimes suggest it could be very slow. This contradiction arises because such analyses rest on unlikely, extreme instances. In this study, we show that when minor unpredictable changes (or 'noise') are introduced to the problem input, algorithms such as SWAP, 1-FLIP, and 2-FLIP efficiently handle a range of binary optimization problems.
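A minimal sketch of the local-search dynamics in question, on Max-Cut with edge weights perturbed by small Gaussian noise; the perturbation model, noise scale, and restriction to single-vertex (1-FLIP) moves are illustrative simplifications of the smoothed setting studied in the paper.

```python
import numpy as np

def flip_local_search_maxcut(weights, sigma=0.05, rng=None):
    """1-FLIP local search for Max-Cut on a smoothed (noise-perturbed) instance.

    weights : symmetric (n x n) weight matrix of the input graph
    sigma   : scale of the Gaussian perturbation added to every edge weight
    Returns the final cut, its value, and the number of improving moves made.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = weights.shape[0]
    noise = rng.normal(scale=sigma, size=(n, n))
    w = weights + (noise + noise.T) / 2        # perturbed weights, kept symmetric
    np.fill_diagonal(w, 0.0)

    side = rng.integers(0, 2, size=n) * 2 - 1  # random initial cut in {-1, +1}^n
    improved, steps = True, 0
    while improved:
        improved = False
        for v in range(n):
            # Gain from flipping v: uncut incident weight minus cut incident weight.
            gain = side[v] * np.dot(w[v], side)
            if gain > 0:
                side[v] *= -1                  # apply the improving 1-FLIP move
                improved, steps = True, steps + 1
    cut_value = 0.25 * np.sum(w * (1 - np.outer(side, side)))
    return side, cut_value, steps
```

Smoothed analysis asks how many improving moves (the returned `steps`) such dynamics can make before reaching a local optimum, with high probability over the noise.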

TLDR: Despite worst-case concerns, local search algorithms such as SWAP and 2-FLIP provably solve noise-perturbed (smoothed) graph partitioning instances efficiently.
Toolbox: Smoothed Analysis, Applied Probability, Combinatorics, Graph Theory

Xi Chen*, Chenghao Guo*, Mihalis Yannakakis*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

35th ACM-SIAM Symposium on Discrete Algorithms (SODA)

*:Alphabetical order; Equal contribution

[Presentation]



2023
Exploiting Hidden Structures in Non-convex Games for Convergence to Nash Equilibrium

From adversarial attacks to multi-agent reinforcement learning, in the vast majority of ML tasks we are essentially playing multi-agent games and striving for a Nash equilibrium. Beneath the surface of these complex non-convex game landscapes often lies a simpler convex substructure. This research exploits that core via a new method, 'hidden gradient descent,' which operates on the hidden convex structure and drives the game towards a Nash equilibrium.

TLDR: 'Hidden gradient descent': a novel method for computing Nash equilibria in complex non-convex ML landscapes with hidden convex cores.
Toolbox: Game Theory, Convex Optimization, Dynamical Systems

Iosif Sakos1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Panayotis Mertikopoulos2, Georgios Piliouras2

37th Conference on Neural Information Processing Systems (NeurIPS)

1/2. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster]



2023
Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements

In machine learning tasks, the Stochastic Extragradient (SEG) and Stochastic Gradient Descent Ascent (SGDA) algorithms have stood out in min-max optimization and variational inequality problems (VIPs). While constant step-size versions offer benefits like ease of tuning, their convergence patterns remain intricate. Viewing the iterates as a Markov chain, our study establishes a first-of-its-kind Law of Large Numbers and Central Limit Theorem for these methods, demonstrating their consistent asymptotic behavior. Finally, we show how to improve the solution accuracy in VIPs using Richardson-Romberg extrapolation.
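A minimal sketch of the Richardson-Romberg idea on a toy strongly monotone saddle problem: run constant-step-size SGDA twice, with step sizes gamma and gamma/2, average the iterates of each run, and combine the two averages so that the leading-order O(gamma) bias cancels; the toy objective, noise model, and step sizes are illustrative.

```python
import numpy as np

def sgda_average(gamma, n_iters, rng):
    """Constant-step-size SGDA on the toy saddle problem
    f(x, y) = x^4/4 + x^2/2 - x + x*y - y^2/2,
    whose unique saddle point satisfies y = x and x^3 + 2x = 1 (x ~ 0.4534).
    Returns the time-averaged iterate, which carries an O(gamma) bias."""
    z = np.array([1.0, 1.0])
    total = np.zeros(2)
    for _ in range(n_iters):
        x, y = z
        noise = rng.normal(scale=0.5, size=2)
        grad_x = x ** 3 + x - 1.0 + y + noise[0]   # stochastic gradient in x
        grad_y = x - y + noise[1]                  # stochastic gradient in y
        z = np.array([x - gamma * grad_x,          # descent step in x
                      y + gamma * grad_y])         # ascent step in y
        total += z
    return total / n_iters

rng = np.random.default_rng(1)
roots = np.roots([1.0, 0.0, 2.0, -1.0])
x_star = roots[np.abs(roots.imag) < 1e-9].real[0]
star = np.array([x_star, x_star])                  # exact saddle point

avg_g = sgda_average(0.2, 200_000, rng)            # step size gamma
avg_h = sgda_average(0.1, 200_000, rng)            # step size gamma / 2
rr = 2 * avg_h - avg_g                             # Richardson-Romberg extrapolation
for name, est in [("step g", avg_g), ("step g/2", avg_h), ("RR extrapolated", rr)]:
    print(f"{name:>15}: distance to saddle = {np.linalg.norm(est - star):.4f}")
```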

TLDR: Stochastic EG and GDA iterates obey a Law of Large Numbers and a Central Limit Theorem, and their accuracy can be boosted using Richardson-Romberg extrapolation.
Toolbox: Probability, Ergodic Theory, Markov Chains, Optimization, Dynamical Systems

Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Angeliki Giannou2, Yudong Chen3, Qiaomin Xie3

37th Conference on Neural Information Processing Systems (NeurIPS)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier



2023
Quadratic Speedup in Finding Nash Equilibria of Quantum Zero-Sum Games

Quantum game theory has become increasingly relevant with advances in machine learning and related fields. A key goal is to find optimal strategies, or Nash equilibria, in competitive settings. Our new method, the Single-Call Matrix Mirror Prox (1-MMP), significantly improves on previous algorithms, yielding a quadratic speedup in finding such equilibria.

TLDR: A state-of-the-art method for computing Nash equilibria in quantum zero-sum games.
Toolbox: Quantum Computing, Game Theory, Convex Optimization

Francisca Vasconcelos1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Panayotis Mertikopoulos2, Georgios Piliouras3, Umesh Vazirani3, Michael I. Jordan3

7th Conference on Quantum Techniques in Machine Learning (QTML)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

Oral talk on 'Quantum Game Theory' thread (Top 1% Accepted Papers)


2023
The Computational Complexity of Multi-player Concave Games and Kakutani Fixed Points

Kakutani’s Fixed Point theorem is pivotal in game theory and economics, but its computational versions have been limited. This study offers a broad computational view of the theorem, showing it is PPAD-complete. It delves into multi-player concave games, highlighting the challenges of finding equilibria, and sheds light on the complexity of Walrasian equilibrium by proving an advanced version of Berge’s maximum theorem.

TLDR: Computing Kakutani fixed points is PPAD-complete. Both Rosen-Nash equilibria in concave games and Walrasian equilibria in exchange markets share similar complexity for general utilities.
Toolbox: Computational Complexity, Game Theory, Convex Analysis

Christos Papadimitriou*, Manolis Zampetakis*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

24th ACM Conference on Economics and Computation (EC)

*:Alphabetical order; Equal contribution

[Poster] [Presentation] [Recorded Talk]

Submitted to Games and Economic Behavior (GEB)


2023
Algorithms and Complexity for Computing Nash Equilibria in Adversarial Team Games

Adversarial team games involve a team of players competing against a single opponent in a zero-sum setting. In this research, we show that finding an approximate Nash equilibrium in these games is complete for the complexity class Continuous Local Search (CLS), by showing that the Moreau envelope of a suitable best-response function acts as a potential under certain natural gradient-based dynamics.
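For reference, the Moreau envelope of a function f with smoothing parameter $\lambda > 0$, the object that serves as the potential above, is the standard construction

$$ f_{\lambda}(x) \;=\; \min_{y}\Big\{\, f(y) + \tfrac{1}{2\lambda}\,\lVert x-y\rVert^{2} \Big\}, $$

which (for suitable $\lambda$) is a smooth under-approximation of f whose near-stationary points track those of f.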

TLDR: Computing Nash Equilibria in adversarial team games is CLS-complete.
Toolbox: Computational Complexity, Game Theory, Convex Analysis, Linear Programming

Fivos Kalogiannis*, Ioannis Anagnostides*, Ioannis Panageas*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

24th ACM Conference on Economics and Computation (EC)

Student Leading Paper; *: Equal contribution

[Presentation] [Recorded Talk]

Submitted to Games and Economic Behavior (GEB)


2023
Efficiently Computing Nash Equilibria in Adversarial Team Markov Games

In multi-agent reinforcement learning, computing an optimal strategy, or Nash equilibrium, is crucial but challenging. In this research, we focus on games where a team of players, without coordinating among themselves, plays against a single opponent. Such a setup captures both competitive and cooperative elements. We present a new algorithm that finds near-optimal strategies for these games efficiently.

TLDR: In multi-agent learning, we introduce an efficient algorithm to find near-optimal strategies for teams playing against a single opponent, blending both competition and cooperation settings.
Toolbox: Reinforcement Learning, Game Theory, Non-Convex Programming

Fivos Kalogiannis1, Ioannis Anagnostides2, Ioannis Panageas2, Emmanouil-Vasileios Vlatakis-Gkaragkounis3, Vaggos Chatziafratis3, Stelios Stavroulakis3

11th International Conference on Learning Representations (ICLR)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] [Presentation] [Recorded Talk]

Oral talk on 'Theory of Machine Learning' thread (Top 3% Accepted Papers)


2023
Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games

In ML, two-team games have gained prominence, especially in contexts such as e-sports and multi-GANs. Such games involve teams that compete without internal coordination. Our research shows that standard methods, including GDA, OGDA, and EG, fall short in identifying optimal strategies. We conclude by introducing novel game dynamics, inspired by control theory, that effectively recover the optimal strategy under specific conditions.

TLDR: In team games arising in e-sports and multi-GAN architectures, conventional algorithms often fail to identify optimal strategies. We present a control-theory-inspired approach that yields superior results in specific scenarios.
Toolbox: Game Theory, Control Theory, Dynamical Systems

Fivos Kalogiannis*, Ioannis Panageas*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

11th International Conference on Learning Representations (ICLR)

*:Alphabetical order; Equal contribution

[Poster] [Presentation]



2022
First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces

In machine learning, problems on curved spaces, or Riemannian manifolds, present a distinct challenge. Although min-max problems are well understood in flat (Euclidean) spaces, their curved-space counterparts have remained elusive. This research introduces techniques such as the Riemannian corrected extragradient (RCEG) method, which guarantees performance that mirrors its Euclidean counterpart.

TLDR: Addressing a century-old question, this research shows that von Neumann minimax points can be computed at a linear rate in geodesically strongly convex-concave settings; the guarantee matches the Euclidean one and is achieved by the Riemannian corrected extragradient (RCEG) method.
Toolbox: Differential Geometry, Game Theory, Convex Optimization, Duality Theory

Michael Jordan*, Tianyi Lin*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

36th Conference on Neural Information Processing Systems (NeurIPS)

*:Alphabetical order; Equal contribution

[Poster] [Presentation] [Recorded Talk]

Oral talk on 'Optimization & Machine Learning' thread (Top 3% Accepted Papers)


2022
On the convergence of policy gradient methods to Nash equilibria in general stochastic games

We develop a comprehensive framework for analyzing policy gradient methods, ranging from exact policy gradients to stochastic estimates such as the REINFORCE algorithm. This research shows that strategically stable Nash policies are likely to emerge locally. Moreover, Nash policies with significant concave curvature can be reached at a rate of O(1/sqrt(n)) in squared distance. Remarkably, slight modifications allow for local convergence to deterministic Markov Nash equilibria in a finite number of iterations, regardless of the game's inherent unpredictability.

TLDR: Local Convergence Guarantees of Policy Gradient Methods in a Multi-Agent RL environment.
Toolbox: Multi-Agent RL, Convex Optimization, Applied Probability, Dynamical Systems

Angeliki Giannou*, Kyriakos Lotidis*, Panayotis Mertikopoulos*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

36th Conference on Neural Information Processing Systems (NeurIPS)

*:Alphabetical order; Equal contribution

[Poster] [Presentation] [Recorded Talk]



2022
Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals

This study delves into the problem of agnostically learning intersections of halfspaces under the Gaussian distribution. The contribution refines the Statistical Query (SQ) lower bounds to (1/n^log n), coming close to the best-known upper bounds. The approach also extends to SQ learning for other classes such as 'convex subspace juntas' and sets with bounded Gaussian surface area.

TLDR: Near-optimal SQ lower bounds for agnostically learning intersections of halfspaces with Gaussian marginals.
Toolbox: Boolean Learning Theory, Fourier Analysis, High-Dimensional Probability

Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

35th Annual Conference on Learning Theory (COLT)

*:Alphabetical order; Equal contribution

[Presentation] [Recorded Talk]



2021
On the Rate of Convergence of Regularized Learning in Games: From Bandits and Uncertainty to Optimism and Beyond

This paper delves into the convergence properties of the 'Follow the Generalized Leader' (FTGL) family of methods, integrating both traditional and optimistic no-regret strategies, towards Nash equilibria in $N$-player generic games. The findings show that the convergence rate of FTGL methods to strict Nash equilibria is independent of the players' uncertainty and is determined primarily by the regularizer's geometry near the equilibrium.

TLDR: Exploration of local convergence rates for FTGL techniques in pursuit of Nash Equilibria in generic games.
Toolbox: Game Theory, Convex Optimization, Dynamical Systems, Applied Probability

Angeliki Giannou1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Panayotis Mertikopoulos2

35th Conference on Neural Information Processing Systems (NeurIPS)

1/2. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] [Presentation] [Recorded Talk]



2021
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent

This study delves into gradient descent ascent (GDA) dynamics in a class of non-convex non-concave zero-sum games termed 'hidden zero-sum games.' Players control the inputs of non-linear functions that feed into a convex-concave game. When the underlying game is strictly convex-concave, GDA converges to its von Neumann equilibrium. Otherwise, GDA might not converge, but leveraging certain techniques ensures its convergence. Notably, our convergence guarantees are non-local, which sets them apart in this domain. The research also connects our continuous-time findings to the training of generative adversarial networks.

TLDR: Exploration of convergence criteria for GDA in hidden zero-sum games and drawing connections to the training methodology of generative adversarial networks.
Toolbox: Game Theory, Dynamical Systems, Lyapunov Potentials

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Georgios Piliouras2

35th Conference on Neural Information Processing Systems (NeurIPS)

1/2. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] [Presentation] [Recorded Talk]



2021
On the Approximation Power of Two-Layer Networks of Random ReLUs

This research studies the approximation power of depth-two ReLU networks whose first-layer weights are chosen at random. Our findings deliver nearly matching upper and lower bounds for approximation in the L2 and Sobolev norms, in terms of quantities such as the Lipschitz constant of the target function, the desired accuracy, and the dimension of the problem.
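A minimal sketch of the model analyzed here, with illustrative choices of width, target function, and input distribution: the first-layer ReLU weights are drawn at random and frozen, and only the output layer is fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n_train = 3, 1_000, 4_000

# First layer: random, frozen weights and biases (the "random ReLU" model).
W = rng.normal(size=(width, d)) / np.sqrt(d)
b = rng.uniform(-1.0, 1.0, size=width)

def features(X):
    """Hidden representation: ReLU of random linear projections of the input."""
    return np.maximum(X @ W.T + b, 0.0)

def target(X):
    """Smooth Lipschitz target to approximate (illustrative choice)."""
    return np.sin(X.sum(axis=1))

X_train = rng.normal(size=(n_train, d))
# Only the second (output) layer is trained, here by ordinary least squares.
a, *_ = np.linalg.lstsq(features(X_train), target(X_train), rcond=None)

X_test = rng.normal(size=(2_000, d))
rmse = np.sqrt(np.mean((features(X_test) @ a - target(X_test)) ** 2))
print(f"L2 approximation error on Gaussian inputs: {rmse:.3f}")
```

The paper's bounds describe, roughly, how large the width must be, as a function of the Lipschitz constant, accuracy, and dimension, for such a random-bottom-layer network to reach a given approximation error.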

TLDR: Tight upper and lower bounds on how well two-layer ReLU networks with random first-layer weights can approximate smooth functions.
Toolbox: Deep Learning Theory, Ridgelet Fourier Analysis, Applied Probability

Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

34th Annual Conference on Learning Theory (COLT)

*:Alphabetical order; Equal contribution

[Poster] [Presentation] [Recorded Talk] [Blog Post]



2021
Reconstructing weighted voting schemes from partial information about their power indices

Computational social choice studies the analysis and design of voting systems. While prior research aimed to infer a voting scheme from full knowledge of each voter's influence, this work tackles the setting where voters' power indices are only partially known. We present two new algorithms, based on Chow parameters and Shapley indices respectively, for reconstructing weighted voting schemes from such incomplete data.
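For concreteness, the Chow parameters of a voting rule f: {-1, +1}^n -> {-1, +1} are the correlations E[f(x)] and E[f(x) x_i] under uniformly random votes, one standard numerical measure of voter influence; below is a minimal Monte Carlo sketch for a weighted-majority rule (the weights and sample size are illustrative).

```python
import numpy as np

def chow_parameters(rule, n_voters, n_samples=200_000, rng=None):
    """Monte Carlo estimate of the Chow parameters E[f(x)] and E[f(x) * x_i]
    of a voting rule f over uniformly random votes x in {-1, +1}^n."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.choice([-1, 1], size=(n_samples, n_voters))
    fx = rule(X)
    return fx.mean(), (fx[:, None] * X).mean(axis=0)

# Weighted-majority (linear threshold) rule with illustrative weights.
weights = np.array([5.0, 3.0, 1.0, 1.0, 1.0])
rule = lambda X: np.sign(X @ weights + 0.5)   # +0.5 only breaks (here impossible) ties

chow0, chow = chow_parameters(rule, len(weights), rng=np.random.default_rng(0))
print("E[f]       :", round(float(chow0), 3))
print("E[f * x_i] :", np.round(chow, 3))      # heavier voters show larger correlations
```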

TLDR: Algorithms for reconstructing weighted voting schemes from partial information about voter influence.
Toolbox: Boolean Learning Theory, Fourier Analysis, Computational Social Choice

Huck Bennett*, Anindya De*, Rocco Servedio*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*

34th Annual Conference on Learning Theory (COLT)

*:Alphabetical order; Equal contribution

[Presentation] [Recorded Talk]



2021
Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information

We study which equilibria can emerge as stable limit points of regularized learning in multi-player games, with emphasis on the 'Follow the Regularized Leader' (FTRL) family of algorithms. Across a spectrum of feedback uncertainty, we establish a sharp link between the stability of a Nash equilibrium and its strictness: a Nash equilibrium is stable and attracting under FTRL with overwhelmingly high probability if and only if it is strict.
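For reference, the FTRL update with regularizer h and learning rate $\eta$ chooses, at each stage t, the action that best responds to the aggregate payoff signal observed so far, penalized by the regularizer:

$$ x_{t+1} \;=\; \arg\max_{x \in \mathcal{X}} \Big\{ \Big\langle x,\, \textstyle\sum_{s=1}^{t} \hat{v}_s \Big\rangle \;-\; \tfrac{1}{\eta}\, h(x) \Big\}, $$

where $\hat{v}_s$ denotes the (full, noisy, or bandit-based) payoff feedback at stage s; entropic regularization recovers multiplicative weights, while a Euclidean regularizer gives lazy projected gradient ascent.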

TLDR: Under 'Follow the Regularized Leader' (FTRL) dynamics, a Nash equilibrium is stable and attracting with high probability if and only if it is strict.
Toolbox: Game Dynamics, Stochastic Stability Analysis, Dynamical Systems

Angeliki Giannou1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Panayotis Mertikopoulos2

34th Annual Conference on Learning Theory (COLT)

1/2. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] [Presentation] [Recorded Talk]



2020
No-regret learning and mixed Nash equilibria: They do not mix

In our study of N-player games, we explore the convergence of no-regret learning dynamics towards Nash equilibria. While it is well known that the empirical frequency of play converges to the set of coarse correlated equilibria, the relationship with Nash equilibria remains ambiguous. We prove that, under 'follow-the-regularized-leader' dynamics, mixed Nash equilibria and no-regret learning are fundamentally incompatible: only strict Nash equilibria can reliably emerge as stable limit points of such dynamics.

TLDR: Mixed Nash equilibria are unstable under no-regret dynamics in any N-player game.
Toolbox: Game Theory, Dynamical Systems, Invariant Measures Analysis, Topology

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Thanasis Lianeas2, Panayotis Mertikopoulos3, Georgios Piliouras3

34th Conference on Neural Information Processing Systems (NeurIPS)

1/2/3. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] [Presentation] [Recorded Talk]

Spotlight talk on 'Game Theory & Optimization' thread


2020
Optimal Private Median Estimation under Minimal Distributional Assumptions

Estimating the median of a distribution from limited samples, while preserving data privacy, is a foundational problem in computer science. We address this problem under minimal distributional assumptions, allowing for heavy tails and outliers. We devise a time-efficient, privacy-preserving algorithm that achieves the optimal sample complexity for median estimation, leveraging a versatile Lipschitz Extension Lemma.
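For contrast, here is a textbook baseline (explicitly not the paper's Lipschitz-extension-based algorithm): the exponential mechanism applied to the rank-based utility of candidate medians; the data distribution, output grid, and privacy parameter below are illustrative.

```python
import numpy as np

def private_median_expmech(data, epsilon, grid, rng=None):
    """Exponential-mechanism baseline for differentially private median estimation.
    (Illustrative textbook approach, not the algorithm proposed in the paper.)"""
    rng = np.random.default_rng() if rng is None else rng
    data = np.sort(np.asarray(data))
    n = len(data)
    # Utility of candidate y: minus the distance of its rank from the middle.
    ranks = np.searchsorted(data, grid)
    utility = -np.abs(ranks - n / 2.0)
    # Changing a single sample moves any rank by at most 1, so the sensitivity is 1.
    scores = np.exp(0.5 * epsilon * (utility - utility.max()))
    return rng.choice(grid, p=scores / scores.sum())

rng = np.random.default_rng(0)
samples = rng.standard_cauchy(size=2_001)       # heavy-tailed data, no moment assumptions
grid = np.linspace(-10.0, 10.0, 4_001)          # candidate outputs (illustrative range)
print("private estimate  :", round(float(private_median_expmech(samples, 1.0, grid, rng)), 3))
print("non-private median:", round(float(np.median(samples)), 3))
```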

TLDR: A sample-optimal and computationally efficient algorithm for private median estimation.
Toolbox: Differential Privacy, Algorithmic Statistics, Dynamic Programming

Christos Tzamos*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*, Ilias Zadik*

34th Conference on Neural Information Processing Systems (NeurIPS)

*:Alphabetical order; Equal contribution

[Poster] [Presentation] [Recorded Talk]

Spotlight talk on 'Security & Privacy' thread


2020
Smoothed Complexity of Local Max-Cut and Binary Max-CSP

Local search is a widely used method for tackling optimization problems. However, even though it often works well in real-world scenarios, worst-case theoretical analyses sometimes suggest it could be very slow. This contradiction arises because such analyses rest on unlikely, extreme instances. In this study, we show that when minor unpredictable changes (or 'noise') are introduced to the problem input, FLIP-style local search efficiently handles a range of binary optimization problems.

TLDR: Despite worst-case concerns, local search algorithms based on single flips provably solve noise-perturbed (smoothed) Max-Cut and binary Max-CSP instances efficiently.
Toolbox: Smoothed Analysis, Applied Probability, Combinatorics, Graph Theory

Xi Chen*, Chenghao Guo*, Mihalis Yannakakis*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*, Xinzhi Zhang*

52nd Annual ACM Symposium on Theory of Computing (STOC)

*:Alphabetical order; Equal contribution

[Presentation] [Video]

Spotlight talk on 'Game Theory & Optimization' thread


2019
Poincaré Recurrence, Cycles, and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games

We analyze the behavior of standard Gradient Descent-Ascent dynamics in the context of hidden bilinear games. Players in these games control the inputs of a function tied to a bilinear zero-sum game, similar to the competition in Generative Adversarial Networks. Our findings reveal that GDA dynamics in these games might not always converge to the game-theoretically meaningful solution. Instead, they can display recurrent behaviors or even converge to spurious non-min-max equilibria.
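A minimal illustration of the simplest failure mode, on the plain bilinear game f(x, y) = x*y rather than the paper's full hidden setting: simultaneous GDA does not approach the min-max point (0, 0); each step multiplies the distance to it by sqrt(1 + eta^2), so the iterates spiral outward (while the continuous-time flow cycles). The step size is illustrative.

```python
import numpy as np

x, y, eta = 1.0, 0.0, 0.05              # start at distance 1 from the equilibrium (0, 0)
radii = [1.0]
for _ in range(2_000):
    x, y = x - eta * y, y + eta * x     # simultaneous GDA on f(x, y) = x * y
    radii.append(np.hypot(x, y))

# Each update multiplies the distance to the equilibrium by sqrt(1 + eta^2) > 1.
print("radius after 0 / 1000 / 2000 steps:",
      round(radii[0], 3), round(radii[1000], 3), round(radii[2000], 3))
print("per-step growth factor:", round(radii[-1] / radii[-2], 6),
      "vs sqrt(1 + eta^2) =", round(float(np.hypot(1.0, eta)), 6))
```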

TLDR: In several non-convex ML scenarios, standard GDA dynamics fail to converge to the expected game solutions, instead exhibiting cycling or convergence to spurious non-min-max equilibria.
Toolbox: GANs, Game Dynamics, Invariant Measures, Lyapunov Potentials, Topology

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Georgios Piliouras2

33rd Conference on Neural Information Processing Systems (NeurIPS)

1/2. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster] [Presentation] [Video]

Spotlight talk on 'Game Theory & Optimization' thread


2019
Efficiently avoiding saddle points with zero order methods: No gradients required

Pushing the boundaries of gradient-free optimization in machine learning, we study methods that need only function evaluations rather than gradient information. We design a novel 'zero-order method' that efficiently reaches good solutions while avoiding problematic saddle points. Our method matches the speed of traditional gradient-based approaches without introducing the unnecessary complexities of earlier derivative-free schemes.
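A minimal sketch of the zero-order primitive such methods rely on: a two-point estimate of the gradient from function values only, plugged into plain gradient descent; the estimator form, smoothing radius, step size, and strict-saddle test function below are illustrative rather than the paper's exact scheme.

```python
import numpy as np

def zero_order_gradient(f, x, delta=1e-3, rng=None):
    """Two-point gradient estimate of f at x using only function evaluations."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)                      # random unit direction
    directional = (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta)
    # Matches the true gradient in expectation, up to O(delta^2) smoothing error.
    return x.size * directional * u

# Test function with a strict saddle at the origin and minima at (0, +/- 1/sqrt(2)).
f = lambda z: z[0] ** 2 - z[1] ** 2 + z[1] ** 4

rng = np.random.default_rng(0)
z = np.array([0.5, 1e-4])                       # initialized almost on the saddle's stable manifold
for _ in range(3_000):
    z = z - 0.01 * zero_order_gradient(f, z, rng=rng)
print("final point:", np.round(z, 3))           # drifts away from the saddle towards a minimizer
```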

TLDR: A simple derivative-free method efficiently avoids saddle points.
Toolbox: Non-convex Optimization, Applied Probability, Dynamical Systems

Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Georgios Piliouras2

33rd Conference on Neural Information Processing Systems (NeurIPS)

1/2. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier

[Poster]



2018
Pattern Search Multidimensional Scaling

Nonlinear manifold learning is crucial in ML for unveiling hidden data structures. We offer a new twist on the classic multidimensional scaling (MDS) method by employing derivative-free optimization techniques. Specifically, instead of the usual gradient descent, we sample and evaluate candidate 'moves' within a search sphere around every point in the embedded space.
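A minimal sketch of the per-point move described above: for each embedded point, sample a few candidate displacements inside a search sphere, keep the one that most reduces the MDS stress, and shrink the radius when no candidate improves; the radius schedule, number of candidates, and stress definition are illustrative.

```python
import numpy as np

def pattern_search_mds(D, dim=2, n_iters=300, radius=1.0, n_candidates=8, rng=None):
    """Derivative-free (pattern-search-style) multidimensional scaling sketch.

    D : (n x n) matrix of target pairwise dissimilarities.
    Returns an embedding X whose pairwise distances approximate D.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = D.shape[0]
    X = rng.normal(size=(n, dim))

    def stress(E):
        diff = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1) - D
        return np.sum(diff ** 2)

    for _ in range(n_iters):
        improved = False
        for i in range(n):
            base = stress(X)
            # Candidate moves for point i, sampled inside a sphere of the current radius.
            moves = rng.normal(size=(n_candidates, dim))
            moves *= radius * rng.uniform(size=(n_candidates, 1)) / np.linalg.norm(moves, axis=1, keepdims=True)
            for m in moves:
                X_try = X.copy()
                X_try[i] += m
                if stress(X_try) < base:            # keep only stress-reducing moves
                    X, base, improved = X_try, stress(X_try), True
        if not improved:
            radius *= 0.5                           # no point improved: shrink the search sphere
    return X

# Usage: recover a 2-D configuration from its own distance matrix (illustrative test).
rng = np.random.default_rng(0)
truth = rng.normal(size=(20, 2))
D = np.linalg.norm(truth[:, None] - truth[None, :], axis=-1)
X = pattern_search_mds(D, rng=rng)
final = np.sum((np.linalg.norm(X[:, None] - X[None, :], axis=-1) - D) ** 2)
print("final stress:", round(float(final), 4))
```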

TLDR: Derivative-free methods for multidimensional scaling of complex data.
Toolbox: Manifold Learning, Non-convex & Black-box Optimization, Local Search Heuristics

Georgios Paraskevopoulos1, Efthymios Tzinis1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Alexandros Potamianos2

Transactions on Machine Learning Research (TMLR) (Submitted)

1/2. Contribution order; Equal Number=Equal Contribution; Alphabetically listed within tier