Séminaire de Statistique


Université Paris-Saclay


Ecole Polytechnique


C. Butucea

A. B. Tsybakov

J. Josse

E.  Moulines

M. Rosenbaum


Monday 14:00 – 15:15 – Room 3001, ENSAE


September 2018 – June 2019

Sept 10 2018

Alexander Meister


Title: Nonparametric density estimation for intentionally corrupted functional data

Abstract: We consider statistical models where functional data are artificially contaminated by independent Wiener processes in order to satisfy privacy constraints. We show that the corrupted observations have a Wiener density which determines the distribution of the original functional random variables uniquely, and we construct a nonparametric estimator of that density. We derive an upper bound for its mean integrated squared error which has a polynomial convergence rate, and we establish an asymptotic lower bound on the minimax convergence rates which is close to the rate attained by our estimator. Our estimator requires the choice of a basis and of two smoothing parameters. We propose data-driven ways of choosing them and prove that the asymptotic quality of our estimator is not significantly affected by the empirical parameter selection. We examine the numerical performance of our method via simulated examples.

This talk is based on a joint work with Aurore Delaigle (University of Melbourne, Australia).

Sept 17 2018

Christophe Giraud

Université Paris-Sud

Title: Partial recovery bounds for clustering with (corrected) relaxed Kmeans (1/2)

Abstract: We will explain why, in a clustering context, Kmeans must and can be debiased. We will then discuss how a convex relaxation of the corrected Kmeans can be applied in various settings (including mixtures of sub-Gaussian distributions, the SBM, the Ising block model, etc.) and we will provide some optimal exponential bounds in terms of partial recovery of the clusters. Hence, the relaxed (corrected) Kmeans appears to be a versatile clustering tool, with the nice feature of having a single tuning parameter (the number K of clusters). Based on joint work with Nicolas Verzelen.

Sept 24 2018

Séminaire Parisien de Statistique



Oct 1 2018

Christophe Giraud

Université Paris-Sud

Title: Partial recovery bounds for clustering with (corrected) relaxed Kmeans (2/2)


Oct 8 2018

Stefan Wager

Stanford University

Title: Machine Learning for Causal Inference


Abstract: Flexible estimation of heterogeneous treatment effects lies at the heart of many statistical challenges, such as personalized medicine and optimal resource allocation. In this talk, I will discuss general principles for estimating heterogeneous treatment effects in observational studies via loss minimization, and then present a random forest algorithm that builds on these principles. As established both formally and empirically, the proposed approach is an order of magnitude more robust to confounding than direct regression-based baselines.

Oct 15 2018

Mathias Drton

University of Copenhagen

Title : Causal discovery in linear non-Gaussian models


Abstract: We consider the problem of inferring the causal graph underlying a structural equation model from an i.i.d. sample.  It is well known that this graph is identifiable only under special assumptions.

We consider one such set of assumptions, namely, linear structural equations with non-Gaussian errors, and discuss inference of the causal graph in high-dimensional settings as well as in the presence of latent confounders.

Joint work with Y. Samuel Wang.


Oct 18 and 19, 2018


14:00 – 16:30


Room 2043

Anatoli Juditsky

Université Grenoble-Alpes


OFPR course – lectures 1 and 2

Title: Statistical Estimation via Convex Optimization

Abstract: When speaking about links between Statistics and Optimization, what comes to mind first is the indispensable role played by optimization algorithms in the “numerical toolbox” of Statistics. The goal of this course is to present another type of links between Optimization and Statistics. We are speaking of situations where Optimization theory (theory, not algorithms!) is of methodological value in Statistics, acting as the source of statistical inferences. We focus on utilizing Convex Programming theory, mainly due to its power, but also due to the desire to end up with inference routines reducing to solving convex optimization problems and thus implementable in a computationally efficient fashion.

The topics we consider are:

·  As a starter, we consider estimation of a linear functional of an unknown “signal” (a signal in the usual sense, a distribution, the intensity of a Poisson process, etc.). We also discuss the problem of estimating a quadratic functional by “lifting” linear functional estimates. As an application, we consider a signal recovery procedure, the “polyhedral estimate,” which relies upon efficient estimation of linear functionals.

·  Next, we turn to the general problem of linear estimation of signals from noisy observations of their linear images. Here, applying convex optimization allows us to propose provably optimal (or nearly optimal) estimation procedures.

The exposition does not require prior knowledge of Statistics or Optimization; all the facts and concepts we need from these disciplines are introduced before being used. The actual prerequisites are elementary calculus, probability, linear algebra and (last but by far not least) general mathematical culture.

Oct 22 2018



Oct 29 2018

Séminaire Parisien de Statistique



Nov 5 2018

No seminar


Nov 7 and 9, 2018

14:00 – 16:30

Room 2043

Anatoli Juditsky

Université Grenoble-Alpes


OFPR course – lectures 3 and 4

Title: Statistical Estimation via Convex Optimization


Nov 12


Karim Lounici

Ecole Polytechnique

Title: On-line PCA: non-asymptotics statistical guarantees for the Krasulina scheme

Principal Component Analysis is a popular method used to analyse the covariance structure $\Sigma$ of a random vector. Recent results on the statistical properties of standard PCA have highlighted the importance of the effective rank as a measure of the intrinsic statistical complexity of the PCA problem. In particular, optimal rates of estimation of the spectral projectors have been established in the offline setting, where all the observations are available at once and a batch estimation method is implemented. In the online setting, observations arrive in a stream and the estimates of the eigenvalues and spectral projectors are updated every time a new observation becomes available. This problem has attracted a lot of attention recently, but little is known about the statistical properties of the existing methods. In this work, we consider the Krasulina scheme (a stochastic gradient ascent scheme) and establish non-asymptotic estimation bounds in probability for the spectral projectors. The effective rank again plays a central role in the performance of the method; however, the rate obtained is slower than in the offline setting.
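As a toy illustration only (centred data, with an illustrative step-size schedule and data stream; this is not the estimator analysed in the talk), the Krasulina update for the top eigenvector can be sketched in pure Python:

```python
import math
import random

def krasulina(stream, dim, step=lambda t: 1.0 / (t + 100)):
    # online update for the top eigenvector of the covariance of a
    # stream of centred observations
    rng = random.Random(0)
    w = [rng.gauss(0, 1) for _ in range(dim)]
    for t, x in enumerate(stream):
        xw = sum(xi * wi for xi, wi in zip(x, w))    # <x, w>
        ww = sum(wi * wi for wi in w)                # ||w||^2
        g = step(t)
        # Krasulina step: w += g * (x <x,w> - (<x,w>^2 / ||w||^2) w);
        # the increment is orthogonal to w, so ||w|| drifts only slowly
        w = [wi + g * (xi * xw - (xw * xw / ww) * wi)
             for xi, wi in zip(x, w)]
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [wi / norm for wi in w]

# toy stream: variance 9 along the first axis, 1 along the second
rng = random.Random(1)
data = [[3.0 * rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(5000)]
v = krasulina(data, 2)
print(v)   # close to (+/-1, 0): the top eigendirection is the first axis
```

The orthogonality of the increment to the current iterate is what distinguishes Krasulina's scheme from Oja's explicitly renormalized update.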

Nov 19


Séminaire Parisien de Statistique



Nov 26


Stanislas Minsker

University of Southern California


Dec 3


Nabil Mustafa


Title : Sampling in Geometric Configurations

Abstract: In this talk I will present recent progress on the problem of sampling in geometric configurations. A typical problem: given a set P of n points in d dimensions and a parameter eps > 0, is it possible to pick a set Q of O(1/eps) points of P such that any half-space containing at least eps*n points of P must contain some point of Q? Based on joint works with Imre Barany, Arijit Ghosh and Kunal Dutta.
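To convey the flavour of such statements, here is a hypothetical 1-d toy check (half-lines standing in for half-spaces; the constant c and the sample-size formula are illustrative, not the bounds of the talk): a random sample of size about (c/eps) log(1/eps) hits every half-line containing at least eps*n points.

```python
import math
import random

def random_eps_net(points, eps, c=4, seed=0):
    # a random sample of size ~ (c/eps) * log(1/eps): with high
    # probability it intersects every "heavy" range
    rng = random.Random(seed)
    size = int(c / eps * math.log(1 / eps)) + 1
    return rng.sample(points, min(size, len(points)))

def hits_all_halflines(points, net, eps):
    # in 1-d, a half-line x <= t contains >= eps*n points of P exactly
    # when t is at least the eps-quantile, and the net hits every such
    # half-line iff its minimum lies below that quantile (symmetrically
    # for half-lines x >= t)
    n = len(points)
    pts = sorted(points)
    k = math.ceil(eps * n)
    return min(net) <= pts[k - 1] and max(net) >= pts[n - k]

rng = random.Random(1)
P = [rng.random() for _ in range(10000)]
net = random_eps_net(P, eps=0.1)
print(len(net), hits_all_halflines(P, net, eps=0.1))
```

In higher dimensions the same random-sampling bound holds for half-spaces via VC-dimension arguments; the talk concerns the harder O(1/eps)-size regime.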

Dec 10




Dec 17


No seminar


Jan 7




Jan 14




Jan 21




Jan 28










September 2017 – June 2018






Sept 11 2017

Arnak Dalalyan


Title : User-friendly bounds for sampling from a log-concave density using Langevin Monte Carlo


Abstract: We will present new bounds on the sampling error in the case where the target distribution has a smooth and log-concave density. These bounds are established for the Langevin Monte Carlo algorithm and its discretized versions involving the Hessian matrix of the log-density. We will also discuss the case where accurate evaluation of the gradient is impossible.
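A minimal sketch of the unadjusted Langevin iteration that such bounds concern, run here on a standard Gaussian target (step size and iteration counts are illustrative choices only):

```python
import math
import random

def ula(grad_f, dim, h=0.05, n_iter=20000, burn=2000, seed=0):
    # unadjusted Langevin algorithm for a target density exp(-f):
    # theta <- theta - h * grad f(theta) + sqrt(2h) * N(0, I)
    rng = random.Random(seed)
    theta = [0.0] * dim
    out = []
    for k in range(n_iter):
        g = grad_f(theta)
        theta = [t - h * gi + math.sqrt(2.0 * h) * rng.gauss(0, 1)
                 for t, gi in zip(theta, g)]
        if k >= burn:
            out.append(theta)
    return out

# standard Gaussian target: f(x) = ||x||^2 / 2, so grad f(x) = x
samples = ula(lambda x: x, dim=1)
mean = sum(s[0] for s in samples) / len(samples)
second = sum(s[0] ** 2 for s in samples) / len(samples)
print(mean, second)   # mean near 0; second moment near 1
```

The invariant law of the discretized chain is slightly inflated relative to the target; this discretization error is exactly what non-asymptotic bounds of this kind quantify.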

Sept 18 2017

Séminaire Parisien de Statistique - IHP


Sept 25 2017

Mathias Trabs

Universität Hamburg

Title: Volatility estimation for stochastic PDEs using high-frequency observations


Abstract : We study the parameter estimation for parabolic, linear, second order, stochastic partial differential equations (SPDEs) observing a mild solution on a discrete grid in time and space. A high-frequency regime is considered where the mesh of the grid in the time variable goes to zero. Focusing on volatility estimation, we provide an explicit and easy to implement method of moments estimator based on the squared increments of the process. The estimator is consistent and admits a central limit theorem. Starting from a representation of the solution as an infinite factor model, the theory considerably differs from the statistics for semi-martingales literature. The performance of the method is illustrated in a simulation study.

This is joint work with Markus Bibinger.
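The squared-increment idea is easiest to see in the simplest possible case, a scalar Brownian motion with volatility sigma (a toy stand-in for the SPDE setting; all constants below are illustrative):

```python
import math
import random

def realized_vol(path, dt):
    # method of moments: the mean squared increment divided by dt
    # estimates sigma^2 for a Brownian motion sigma * W_t
    incs = [b - a for a, b in zip(path, path[1:])]
    return math.sqrt(sum(d * d for d in incs) / (len(incs) * dt))

rng = random.Random(0)
sigma, dt, n = 2.0, 1e-4, 10000
path = [0.0]
for _ in range(n):
    path.append(path[-1] + sigma * math.sqrt(dt) * rng.gauss(0, 1))
print(realized_vol(path, dt))   # concentrates around sigma = 2
```

In the SPDE setting of the talk the increments are no longer independent Gaussians, which is why the limit theory differs from the semimartingale case.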




Oct 2 2017

Nicolas Marie

Modal’X (Paris 10)/ ESME Sudria

Title: Nonparametric estimation in differential equations driven by fractional Brownian motion


Abstract: After introducing some notions of pathwise stochastic calculus, the talk will present a Nadaraya-Watson-type estimator of the drift function of a differential equation driven by a multiplicative fractional noise. To establish the consistency of the estimator, the ergodicity results of Hairer and Ohashi (2007) will be stated and explained. Once consistency is established, the rate of convergence of the estimator will be addressed. This is joint work with F. Comte.

Oct 9 2017

Zoltan Szabo

Ecole Polytechnique

Title : Characteristic Tensor Kernels


Abstract: Maximum mean discrepancy (MMD) and the Hilbert-Schmidt independence criterion (HSIC) are popular techniques in data science to measure the difference and the independence of random variables, respectively.

Thanks to their kernel-based foundations, MMD and HSIC are applicable on a variety of domains including documents, images, trees, graphs, time series, mixture models, dynamical systems, sets, distributions and permutations. Despite their tremendous practical success, little is known about when HSIC characterizes independence and when MMD with a tensor kernel can discriminate probability distributions, in terms of the contributing kernel components. In this talk, I am going to present a complete answer to this question, with conditions which are often easy to verify in practice. [Joint work with Bharath K. Sriperumbudur (PSU). Preprint: https://arxiv.org/abs/1708.08157]
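For a concrete feel of the discrepancy, a scalar MMD sketch with a plain Gaussian kernel (biased V-statistic; the tensor-kernel setting of the talk is not reproduced here, and all constants are illustrative):

```python
import math
import random

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, k=gaussian_kernel):
    # biased V-statistic estimate of the squared MMD between two samples
    m, n = len(xs), len(ys)
    kxx = sum(k(a, b) for a in xs for b in xs) / (m * m)
    kyy = sum(k(a, b) for a in ys for b in ys) / (n * n)
    kxy = sum(k(a, b) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2.0 * kxy

rng = random.Random(0)
same = [rng.gauss(0, 1) for _ in range(200)]
also_same = [rng.gauss(0, 1) for _ in range(200)]
shifted = [rng.gauss(2, 1) for _ in range(200)]
# samples from different distributions give a markedly larger discrepancy
print(mmd2(same, also_same) < mmd2(same, shifted))
```

Whether such a comparison detects *every* difference in distribution depends on the kernel being characteristic, which is precisely the question the talk answers for tensor kernels.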

Oct 16 2017

Séminaire Parisien de Statistique - IHP


Oct 23 2017

Philip Thompson



Oct 30 2017






Nov 6 2017

Martin Kroll


Title : On minimax optimal and adaptive estimation of linear functionals in inverse Gaussian sequence space models


Abstract : We consider an inverse problem in a Gaussian sequence space model where the multiplication operator is not known but only available via noisy observations. Our aim is not to reconstruct the solution itself but the value of a linear functional of the solution. In our setup the optimal rate depends on two different noise levels, the noise level concerning the observation of the transformed solution and the noise level concerning the noisy observation of the operator.  We consider this problem from a minimax point of view and obtain upper and lower bounds under smoothness assumptions on the multiplication operator and the unknown solution.  Finally, we sketch an approach to the adaptive estimation in the given model using a method combining both model selection and the Goldenshluger-Lepski method.

This is joint work in progress with Cristina Butucea (ENSAE) and Jan Johannes (Heidelberg).


Nov 13 2017

Séminaire Parisien de Statistique - IHP


Nov 20 2017

Olivier Collier

Université Paris Nanterre

Title: Robust estimation of the mean in polynomial time


Abstract: These results were obtained in collaboration with Arnak Dalalyan. When the observations are contaminated by outliers, the empirical mean is no longer a desirable estimator of the expectation. Optimal methods are known, such as Tukey's depth in the so-called contamination model; however, that estimator cannot be computed in polynomial time. In a Gaussian model, we will observe that estimating the mean amounts to estimating a linear functional under a group-sparsity constraint, so it is natural to use the group Lasso. Several interesting phenomena then appear: in this context, group sparsity yields a polynomial gain over plain sparsity, whereas previous studies had shown at best a logarithmic gain; and it seems that polynomial-time estimation cannot attain the optimal performance of exponential-time methods.
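For intuition only, the failure of the empirical mean under contamination, and a crude polynomial-time fallback, can be seen in a few lines (the coordinate-wise median below is neither Tukey's depth nor the group-Lasso estimator of the talk; the data are illustrative):

```python
import random
import statistics

def coord_median(sample):
    # coordinate-wise median: a crude but polynomial-time robust baseline
    d = len(sample[0])
    return [statistics.median(x[k] for x in sample) for k in range(d)]

rng = random.Random(0)
inliers = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(900)]
outliers = [[50.0, 50.0, 50.0] for _ in range(100)]
data = inliers + outliers
naive = [sum(x[k] for x in data) / len(data) for k in range(3)]
robust = coord_median(data)
print(naive)    # dragged far from 0 by the 10% contamination
print(robust)   # stays close to the true mean 0
```

The statistical/computational gap discussed in the talk is about closing the distance between such easy-to-compute estimators and the optimal but exponential-time ones.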

Nov 27 2017

Elisabeth Gassiat

Université Paris-Sud

Title : Estimation of the proportion of explained variation in high dimensions.


Abstract : Estimation of heritability of a phenotypic trait based on genetic data may be set as estimation of the proportion of explained variation in high dimensional linear models. I will be interested in understanding the impact of:

— not knowing the sparsity of the regression parameter,

— not knowing the variance matrix of the covariates

on minimax estimation of heritability.

In the situation where the variance of the design is known, I will present an estimation procedure that adapts to unknown sparsity.

When the variance of the design is unknown and no prior estimator of it is available, I will show that consistent estimation of heritability is impossible.

(Joint work with N. Verzelen, and the PhD thesis of A. Bonnet.)





Dec 4 2017

Philip Thompson


Title : Stochastic approximation with heavier tails


Abstract: We consider the solution of convex optimization and variational inequality problems via the stochastic approximation methodology, where the gradient or operator can only be accessed through an unbiased stochastic oracle. First, we show that (non-asymptotic) convergence is possible with unbounded constraints and a "multiplicative noise" model: the oracle is Lipschitz continuous with a finite pointwise variance which may not be uniformly bounded (as classically assumed). In this setting, our bounds depend on local variances at solutions and the method uses noise reduction in an efficient manner: given a precision, it respects the near-optimal sample and averaging complexities of Polyak-Ruppert's method but attains the order of the (faster) deterministic iteration complexity. Second, we discuss a more "robust" version where the Lipschitz constant L is unknown but, in terms of error precision, near-optimal complexities are maintained. A price to pay when L is unknown is that a large-sample regime is assumed (still respecting the complexity of the SAA estimator) and "non-martingale-like" dependencies are introduced. These dependencies are handled with an "iterative localization" argument based on empirical process theory and self-normalization.

Joint work with A. Iusem (IMPA), A. Jofré (CMM-Chile) and R.I. Oliveira (IMPA).

Dec 11 2017

Jamal Najim


Title: Large empirical covariance matrices


Abstract: Models of large empirical covariance matrices have been studied extensively since the seminal 1967 paper of Marchenko and Pastur, who described the behaviour of the spectrum of such matrices when the two dimensions (the dimension of the observations and the sample size) grow to infinity at the same rate.


The first aim of this talk is to present the standard (and less standard!) results for these models and the mathematical tools used to analyse them (chiefly the Stieltjes transform). We will then focus on the special case of "spiked models", large covariance matrix models in which a few eigenvalues are separated from the bulk of the spectrum of the matrix. Spiked models are very popular in finance, econometrics, signal processing, etc.


Finally, we will consider large covariance matrices whose observations come from a long-memory process. For such observations, we will describe the asymptotic behaviour and the fluctuations of the largest eigenvalue. This is joint work with F. Merlevède and P. Tian.


Dec 18 2017

No seminar





Jan 8 2018

Eric Moulines

Ecole Polytechnique

Title: Langevin sampling algorithms


Abstract: Langevin algorithms have recently enjoyed a strong revival of interest in the machine learning community, following the work of M. Welling and Y. W. Teh ('Bayesian learning via stochastic gradient Langevin dynamics', ICML 2011). By coupling stochastic approximation with simulation, this method makes it possible to run sampling schemes in high dimension and on large data sets. Applications abound, both in the 'classical' domains of Bayesian statistics (Bayesian inference, model choice) and in Bayesian optimization.

In this talk, we will present some recent work on the convergence analysis of this algorithm. We will show how to obtain explicit convergence bounds in Wasserstein distance and in total variation in various settings (strongly convex, convex differentiable, super-exponential, etc.), paying particular attention to the dependence of these bounds on the dimension of the parameter. We will also show how to extend these methods to convex but non-differentiable functions, drawing inspiration from proximal gradient methods.


Jan 15 2018

Séminaire Parisien de Statistique – IHP


Jan 22 2018


Data Science

Jan 29 2018

Anatoli Juditsky

Université de Grenoble-Alpes

Title: Aggregation of estimators from indirect observations


Abstract: We consider the problem of aggregation in adaptive estimation when only indirect observations of the signal are available. We propose an approach to the aggregation problem via near-optimal testing of convex hypotheses, based on reducing the statistical aggregation problem to convex optimization problems that admit an efficient analysis and implementation.

We show that this approach leads to near-optimal algorithms for the classical L_2-aggregation problem under various observation schemes (e.g., indirect Gaussian observations, Poisson observation models and sampling from a discrete distribution). We also discuss the connection with the related problem of adaptive estimation.





Feb 5


Frédéric Chazal


Title: An introduction to persistent homology in Topological Data Analysis and the density of expected persistence diagrams.


Abstract: Persistence diagrams play a fundamental role in Topological Data Analysis (TDA), where they are used as topological descriptors of data represented as point clouds. They consist of discrete multisets of points in the plane $\R^2$ that can equivalently be seen as discrete measures in $\R^2$. In the first part of the talk, we will introduce the notions of persistent homology and persistence diagrams and show how they are built from point cloud data (so no knowledge of TDA is required to follow the talk). In the second part of the talk, we will show a few properties of persistence diagrams when the data come as a random point cloud. In this case, persistence diagrams become random discrete measures, and we will show that, in many cases, their expectation has a density with respect to the Lebesgue measure in the plane; we will discuss its estimation.

This is joint work with Vincent Divol (ENS Paris / Inria DataShape team).
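For points on the real line, the 0-dimensional part of the construction can be written down directly, since connected components are born at scale 0 and die when the gaps between consecutive points close (a tiny sketch of one special case, not the density estimators of the talk):

```python
def zero_dim_persistence(points):
    # 0-dimensional persistence diagram of a finite set of reals under
    # the distance (Rips) filtration: every point is born at scale 0 and
    # a component dies when the gap to its neighbour is reached; one
    # component survives forever (the infinite bar is omitted here)
    pts = sorted(points)
    deaths = sorted(b - a for a, b in zip(pts, pts[1:]))
    return [(0.0, d) for d in deaths]

# two clusters: three short bars (within-cluster gaps) and one long bar
print(zero_dim_persistence([0.0, 1.0, 2.0, 10.0, 11.0]))
```

The single long bar records the two-cluster structure; in higher dimensions and homological degrees the same birth/death bookkeeping is done by the persistence algorithm on a simplicial filtration.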


Feb 12


Séminaire Parisien de Statistique - IHP


Feb 19


Laëtitia Comminges

Université Paris-Dauphine

Some effects in adaptive robust estimation under sparsity



Adaptive estimation in the sparse mean model and in sparse regression exhibits some interesting effects.

This talk considers estimation of a sparse target vector, of its $\ell_2$-norm and of the noise variance in the sparse linear model. We establish the optimal rates of adaptive estimation when adaptation is considered with respect to the triplet "noise level -- noise distribution -- sparsity". These rates turn out to be different from the minimax non-adaptive rates when the triplet is known. A crucial issue is the ignorance of the noise level. Moreover, knowing or not knowing the noise distribution can also influence the rate. For example, the rates of estimation of the noise level can differ depending on whether the noise is Gaussian or sub-Gaussian without a precise knowledge of the distribution. Estimation of the noise level in our setting can be viewed as an adaptive variant of robust estimation of scale in the contamination model, where instead of fixing the "nominal" distribution in advance we assume that it belongs to some class of distributions. We also show that, in the problem of estimation of a sparse vector under the $\ell_2$-risk when the variance of the noise is unknown, the optimal rate depends dramatically on the design. In particular, for noise distributions with polynomial tails, the rate can range from sub-Gaussian to polynomial depending on the properties of the design.

Feb 26







March 5


Rajarshi Mukherjee


Global Testing Against Sparse Alternatives under Ising Models

Abstract: We study the effect of dependence on detecting sparse signals. In particular, we focus on global testing against sparse alternatives for the magnetizations of an Ising model and establish how the interplay between the strength and sparsity of a signal determines its detectability under various notions of dependence (i.e. the coupling constant of the Ising model). The impact of dependence is best illustrated under the Curie-Weiss model where we observe the effect of a "thermodynamic" phase transition. In particular, the critical state exhibits a subtle "blessing of dependence" phenomenon in that one can detect much weaker signals at criticality than otherwise. Furthermore, we develop a testing procedure that is broadly applicable to account for dependence and show that it is asymptotically minimax optimal under fairly general regularity conditions. This talk is based on joint work with Sumit Mukherjee and Ming Yuan.

March 12, 2018

No seminar


March 19, 2018

Yihong Wu 10:30 – 12:30


Polynomial method in statistical estimation : from large domain to mixture models – 1

March 22


Yihong Wu 14:00 – 17:00



March 26


Yihong Wu 10:30 – 12:30



14:00 – 17:00




Doctoral school – talks by the PhD students


March 29, 2018

Yihong Wu 14:00 – 17:00






April 2, 2018



April 5


Angelika Rohde 14:00 – 15:15

Universität Freiburg


Geometrizing rates of convergence under privacy constraints

Abstract : We study estimation of a functional $\theta(\Pr)$ of an unknown probability distribution $\Pr \in\P$ in which the original iid sample $X_1,\dots, X_n$ is kept private even from the statistician via an $\alpha$-local differential privacy constraint. Let $\omega_1$ denote the modulus of continuity of the functional $\theta$ over $\P$, with respect to total variation distance. For a large class of loss functions $l$, we prove that the privatized minimax risk is equivalent to $l(\omega_1((n\alpha^2)^{-1/2}))$ to within constants, under regularity conditions that are satisfied, in particular, if $\theta$ is linear and $\P$ is convex. Our results extend the theory developed by Donoho and Liu (1991) to the nowadays highly relevant case of privatized data. Somewhat surprisingly, the difficulty of the estimation problem in the private case is characterized by $\omega_1$, whereas, it is characterized by the Hellinger modulus of continuity if the original data $X_1,\dots, X_n$ are available. We also provide a general recipe for constructing rate optimal privatization mechanisms and illustrate the general theory in numerous examples. Our theory allows to quantify the price to be paid for local differential privacy in a large class of estimation problems.
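As a minimal illustration of the privacy constraint itself (a standard Laplace mechanism applied to bounded data, not the rate-optimal mechanisms constructed in the talk; alpha and the sample size are illustrative):

```python
import math
import random

def ldp_mean(xs, alpha, seed=0):
    # each value in [0, 1] is privatized on its own with Laplace(1/alpha)
    # noise before the statistician sees it; only the noisy values are
    # aggregated, so no raw data point is ever revealed
    rng = random.Random(seed)

    def laplace(scale):
        # inverse-CDF sampling of a centred Laplace variable
        u = rng.random() - 0.5
        return -math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u)) * scale

    private = [x + laplace(1.0 / alpha) for x in xs]
    return sum(private) / len(private)

rng = random.Random(1)
data = [rng.random() for _ in range(20000)]
# the mean survives privatization, at the price of inflated variance
print(ldp_mean(data, alpha=1.0))
```

The factor (n alpha^2)^{-1/2} in the abstract reflects exactly this variance inflation: the per-observation noise has scale of order 1/alpha.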


April 6


Cheng Mao – 14:00 – 15:15


Breaking the n^{-1/2} barrier for permutation-based ranking models

 Abstract : The task of ranking from pairwise comparison data arises frequently in various applications, such as recommender systems, sports tournaments and social choice theory. There has been a recent surge of interest in studying permutation-based models, such as the noisy sorting model and the strong stochastic transitivity model, for ranking from pairwise comparisons. Although permutation-based ranking models are richer than traditional parametric models, a wide gap exists between the statistically optimal rate n^{-1} and the rate n^{-1/2} achieved by the state-of-the-art computationally efficient algorithms. In this talk, I will discuss new algorithms that achieve rates n^{-1} and n^{-3/4} for the noisy sorting model and the more general strong stochastic transitivity model respectively.

The talk is based on joint works with Jonathan Weed, Philippe Rigollet, Ashwin Pananjady and Martin J. Wainwright.
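For orientation, the naive counting baseline for noisy pairwise comparisons can be sketched as follows (Copeland-style win counting on a hypothetical toy instance; a benchmark only, not the n^{-1}-rate algorithms of the talk):

```python
import random

def copeland_rank(n, comparisons):
    # rank items by pairwise win counts; each pair (i, j) means "i beat j"
    wins = [0] * n
    for i, j in comparisons:
        wins[i] += 1
    return sorted(range(n), key=lambda k: -wins[k])

rng = random.Random(0)
n, p = 10, 0.9                  # each comparison agrees with the truth w.p. 0.9
comps = []
for a in range(n):
    for b in range(a + 1, n):   # smaller index is truly better
        for _ in range(20):     # repeated noisy comparisons
            comps.append((a, b) if rng.random() < p else (b, a))
ranking = copeland_rank(n, comps)
print(ranking)
```

With enough repeated comparisons this recovers the order, but its per-pair error rate is what the improved noisy-sorting algorithms of the talk beat.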


April 9, 2018

Séminaire Parisien de Statistique - IHP


April 13, 2018

Eric Kolaczyk – 14:00 – 15:15

Boston University

Title:  Dynamic Networks with Multi-scale Temporal Structure


Abstract: We describe a novel method for modeling non-stationary multivariate time series, with time-varying conditional dependencies represented through dynamic networks. Our proposed approach combines traditional multi-scale modeling and network based neighborhood selection, aiming at capturing temporally local structure in the data while maintaining sparsity of the potential interactions. Our multi-scale framework is based on recursive dyadic partitioning, which recursively partitions the temporal axis into finer intervals and allows us to detect local network structural changes at varying temporal resolutions. The dynamic neighborhood selection is achieved through penalized likelihood estimation, where the penalty seeks to limit the number of neighbors used to model the data. We present theoretical and numerical results describing the performance of our method, which is motivated and illustrated using task-based magnetoencephalography (MEG) data in neuroscience.  This is joint work with Xinyu Kang and Apratim Ganguly.


April 23, 2018



April 30, 2018

No seminar





May 7 2018

No seminar


May 14




May 21 2018

Public holiday


May 22


Mikhail Belkin 10:00 – 13:00

Ohio State University

The Differential Geometry of Data - 1

May 23 2018

Mikhail Belkin 16:00 – 18:00

Ohio State University


May 28 2018

Sivaraman Balakrishnan

Carnegie Mellon


May 29 2018

Mikhail Belkin 10:00 – 13:00 – Room 2003

Ohio State University


May 30 2018

Mikhail Belkin 16:00 – 18:00 – Room 2040

Ohio State University





June 4

Séminaire Parisien de Statistique - IHP



June 11 2018

Jeremy Heng

Harvard University

Title: Controlled sequential Monte Carlo


Sequential Monte Carlo methods, also known as particle methods, are a popular set of techniques to approximate high-dimensional probability distributions and their normalizing constants. They have found numerous applications in statistics and related fields as they can be applied to perform state estimation for non-linear non-Gaussian state space models and Bayesian inference for complex static models. Like many Monte Carlo sampling schemes, they rely on proposal distributions which have a crucial impact on their performance. We introduce here a class of controlled sequential Monte Carlo algorithms, where the proposal distributions are determined by approximating the solution to an associated optimal control problem using an iterative scheme. We provide theoretical analysis of our proposed methodology and demonstrate significant gains over state-of-the-art methods at a fixed computational complexity on a variety of applications.
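For reference, the uncontrolled baseline that such proposals improve on is the bootstrap particle filter; here is a sketch on a hypothetical linear-Gaussian random-walk model (all parameters illustrative):

```python
import math
import random

def bootstrap_pf(obs, n_part=500, sig_x=1.0, sig_y=1.0, seed=0):
    # bootstrap particle filter for X_t = X_{t-1} + sig_x * eps_t,
    # Y_t = X_t + sig_y * eta_t; returns the log normalizing-constant
    # (log-likelihood) estimate
    rng = random.Random(seed)
    parts = [0.0] * n_part
    log_z = 0.0
    for y in obs:
        # propose from the transition kernel (the "bootstrap" proposal)
        parts = [p + sig_x * rng.gauss(0, 1) for p in parts]
        # weight by the observation density N(y; x, sig_y^2)
        w = [math.exp(-0.5 * ((y - p) / sig_y) ** 2) /
             math.sqrt(2.0 * math.pi * sig_y ** 2) for p in parts]
        log_z += math.log(sum(w) / n_part)
        parts = rng.choices(parts, weights=w, k=n_part)  # multinomial resampling
    return log_z

# observations simulated from the model itself
rng = random.Random(1)
x, ys = 0.0, []
for _ in range(20):
    x += rng.gauss(0, 1)
    ys.append(x + rng.gauss(0, 1))
print(bootstrap_pf(ys))   # log-likelihood estimate (negative here)
```

Controlled SMC replaces the transition-kernel proposal with one tuned by approximate optimal control, which reduces the variance of exactly this normalizing-constant estimate.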


June 18 2018

Mihai Cucuringu

Oxford University

Title: Laplacian-based methods for ranking and constrained clustering

We consider the classic problem of establishing a statistical ranking of a set of n items given a set of inconsistent and incomplete pairwise comparisons between such items. Instantiations of this problem occur in numerous applications in data analysis (e.g., ranking teams in sports data), computer vision, and machine learning. We formulate the above problem of ranking with incomplete noisy information as an instance of the group synchronization problem over the group SO(2) of planar rotations, whose usefulness has been demonstrated in numerous applications in recent years. Its least squares solution can be approximated by either a spectral or a semidefinite programming relaxation, followed by a rounding procedure. We perform extensive numerical simulations on both synthetic and real-world data sets, showing that our proposed method compares favorably to other algorithms from the recent literature. We also briefly discuss ongoing work on extensions and applications of the group synchronization framework to k-way synchronization, list synchronization, synchronization with heterogeneous information and partial rankings, and phase unwrapping.

We also present a simple spectral approach to the well-studied constrained clustering problem. It captures constrained clustering as a generalized eigenvalue problem with graph Laplacians. The algorithm works in nearly-linear time and provides concrete guarantees for the quality of the clusters, at least for the case of 2-way partitioning, via a generalized Cheeger inequality. In practice this translates to a very fast implementation that consistently outperforms existing spectral approaches both in speed and quality.
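A drastically simplified stand-in for the least-squares formulation of the ranking step (scores fitted by gradient descent on a toy clean tournament; the talk's SO(2) synchronization relaxation is richer than this):

```python
def ls_rank(n, comps, iters=500, lr=0.05):
    # least-squares ranking on the comparison graph: choose scores s
    # minimising sum over observed comparisons "i beats j" of
    # (s_i - s_j - 1)^2, with mean-centering to fix the translation
    # invariance of the objective
    s = [0.0] * n
    for _ in range(iters):
        g = [0.0] * n
        for i, j in comps:
            r = s[i] - s[j] - 1.0
            g[i] += r
            g[j] -= r
        s = [si - lr * gi for si, gi in zip(s, g)]
        m = sum(s) / n
        s = [si - m for si in s]
    return sorted(range(n), key=lambda k: -s[k])

# clean full tournament on 5 items: the smaller index always wins
comps = [(i, j) for i in range(5) for j in range(5) if i < j]
print(ls_rank(5, comps))
```

On a complete comparison graph this least-squares solution reduces to ranking by average score margin; the synchronization viewpoint replaces the scalar scores with planar rotations to gain robustness.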


June 27 2018

Wednesday -14h

Emtiyaz Khan

Riken, Tokyo

Title: Fast yet Simple Natural-Gradient Variational Inference in Complex Models


Approximate Bayesian inference holds promise for improving the generalization and reliability of deep learning, but it is computationally challenging. Modern variational-inference (VI) methods circumvent the challenge by formulating Bayesian inference as an optimization problem and then solving it using gradient-based methods. In this talk, I will argue in favor of natural-gradient approaches, which can improve the convergence of VI by exploiting the information geometry of the solutions. I will discuss a fast yet simple natural-gradient method obtained by using a duality associated with exponential-family distributions. I will summarize some of our recent results on Bayesian deep learning, where natural-gradient methods lead to an approach which gives simpler updates than existing VI methods while performing comparably to them.

Joint work with Wu Lin (UBC), Didrik Nielsen (RIKEN), Voot Tangkaratt (RIKEN), Yarin Gal (UOxford), Akash Srivastva (UEdinburgh), Zuozhu Liu (SUTD).

Based on: