
NeurIPS 2020 Highlights

Survey
The Conference on Neural Information Processing Systems (NeurIPS) is the world's leading machine learning conference; its 2020 edition was held online due to the COVID-19 pandemic.

Here is a summary of the conference, along with some background on how this summary was produced.

This summary was compiled by the Paper Digest Team on their platform of the same name.
Paper Digest is a scientific and technical knowledge graph and text-analysis platform for tracking, summarizing, and searching the scientific literature.
The Paper Digest Team is a New York-based research group working on text analysis.
We are reprinting their excellent work here with permission.

The original article is also accompanied by code, which you can browse here if you're interested.
 
 

Each numbered entry below gives the paper's TITLE, followed by a one-sentence HIGHLIGHT.

1

A graph similarity for deep learning

We adopt kernel distance and propose transform-sum-cat as an alternative to aggregate-transform to reflect the continuous similarity between the node neighborhoods in the neighborhood aggregation.
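
To make the contrast concrete, here is a minimal sketch of the two aggregation orders, assuming a single linear-plus-ReLU transform; the shapes and the concatenation step are illustrative, not the paper's exact layer.

```python
import numpy as np

def transform(h, W):
    """Per-node feature transform (a single linear map + ReLU for illustration)."""
    return np.maximum(h @ W, 0.0)

def aggregate_transform(h, neighbors, W):
    """Conventional GNN step: sum neighbor features first, then transform."""
    agg = np.stack([h[nbrs].sum(axis=0) for nbrs in neighbors])
    return transform(agg, W)

def transform_sum_cat(h, neighbors, W):
    """Sketched alternative: transform each neighbor feature first, sum the
    transformed features, then concatenate the result with the node's own."""
    summed = np.stack([transform(h[nbrs], W).sum(axis=0) for nbrs in neighbors])
    return np.concatenate([h, summed], axis=1)

# Toy graph with 3 nodes: node 0 is connected to nodes 1 and 2.
h = np.random.randn(3, 4)
W = np.random.randn(4, 4)
neighbors = [np.array([1, 2]), np.array([0]), np.array([0])]
print(transform_sum_cat(h, neighbors, W).shape)  # (3, 8)
```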

2

An Unsupervised Information-Theoretic Perceptual Quality Metric

We combine recent advances in information-theoretic objective functions with a computational architecture informed by the physiology of the human visual system and unsupervised training on pairs of video frames, yielding our Perceptual Information Metric (PIM).

3

Self-Supervised MultiModal Versatile Networks

In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams.

4

Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method

We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system that will give rise to a particular set of measurements.

5

Off-Policy Evaluation and Learning for External Validity under a Covariate Shift

In this paper, we derive the efficiency bound of OPE under a covariate shift.

6

Neural Methods for Point-wise Dependency Estimation

In this work, instead of estimating the expected dependency, we focus on estimating point-wise dependency (PD), which quantitatively measures how likely two outcomes co-occur.

7

Fast and Flexible Temporal Point Processes with Triangular Maps

By exploiting the recent developments in the field of normalizing flows, we design TriTPP – a new class of non-recurrent TPP models, where both sampling and likelihood computation can be done in parallel.

8

Backpropagating Linearly Improves Transferability of Adversarial Examples

In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs.

9

PyGlove: Symbolic Programming for Automated Machine Learning

In this paper, we introduce a new way of programming AutoML based on symbolic programming.

10

Fourier Sparse Leverage Scores and Approximate Kernel Learning

We prove new explicit upper bounds on the leverage scores of Fourier sparse functions under both the Gaussian and Laplace measures.

11

Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds

In this work, we give a general approach for improving regret bounds in online submodular maximization by exploiting “first-order” regret bounds for online linear optimization.

12

Synbols: Probing Learning Algorithms with Synthetic Datasets

To this end, we introduce Synbols — Synthetic Symbols — a tool for rapidly generating new datasets with a rich composition of latent features rendered in low-resolution images.

13

Adversarially Robust Streaming Algorithms via Differential Privacy

We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy.

14

Trading Personalization for Accuracy: Data Debugging in Collaborative Filtering

In this paper, we propose a data debugging framework to identify overly personalized ratings whose existence degrades the performance of a given collaborative filtering model.

15

Cascaded Text Generation with Markov Transformers

This work proposes an autoregressive model with sub-linear parallel time generation.

16

Improving Local Identifiability in Probabilistic Box Embeddings

In this work we model the box parameters with min and max Gumbel distributions, which were chosen such that the space is still closed under the operation of intersection.

17

Permute-and-Flip: A new mechanism for differentially private selection

In this work, we propose a new mechanism for this task based on a careful analysis of the privacy constraints.

18

Deep reconstruction of strange attractors from time series

Inspired by classical analysis techniques for partial observations of chaotic attractors, we introduce a general embedding technique for univariate and multivariate time series, consisting of an autoencoder trained with a novel latent-space loss function.

19

Reciprocal Adversarial Learning via Characteristic Functions

We generalise this by comparing the distributions rather than their moments via a powerful tool, i.e., the characteristic function (CF), which uniquely and universally comprises all the information about a distribution.
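
As a rough illustration of comparing distributions through characteristic functions rather than moments, the sketch below estimates empirical CFs of two samples on a fixed frequency grid and measures their gap; the paper's actual weighting over frequencies is replaced here by this grid, which is an assumption.

```python
import numpy as np

def cf_distance(x, y, ts):
    """Gap between empirical characteristic functions E[exp(i t X)] and
    E[exp(i t Y)], averaged over the frequencies in ts."""
    phi_x = np.exp(1j * np.outer(ts, x)).mean(axis=1)
    phi_y = np.exp(1j * np.outer(ts, y)).mean(axis=1)
    return np.sqrt(np.mean(np.abs(phi_x - phi_y) ** 2))

rng = np.random.default_rng(0)
ts = np.linspace(-5, 5, 101)
print(cf_distance(rng.normal(size=5000), rng.normal(size=5000), ts))   # ~0
print(cf_distance(rng.normal(size=5000), rng.laplace(size=5000), ts))  # > 0
```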

20

Statistical Guarantees of Distributed Nearest Neighbor Classification

Through majority voting, the distributed nearest neighbor classifier achieves the same rate of convergence as its oracle version in terms of the regret, up to a multiplicative constant that depends solely on the data dimension.
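
The voting mechanism itself is simple: each machine holds one shard of the data, answers with its local k-NN prediction, and the final label is the majority vote over machines. A minimal sketch (the sharding scheme and k are illustrative):

```python
import numpy as np
from collections import Counter

def knn_predict(X, y, x, k=5):
    """Plain k-NN prediction on a single shard."""
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return Counter(y[idx]).most_common(1)[0][0]

def distributed_knn_predict(shards, x, k=5):
    """Each shard votes with its local prediction; the majority wins."""
    votes = [knn_predict(X, y, x, k) for X, y in shards]
    return Counter(votes).most_common(1)[0][0]

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0).astype(int)
shards = [(X[i::4], y[i::4]) for i in range(4)]  # 4 machines
print(distributed_knn_predict(shards, np.array([0.5, -0.1])))  # likely 1
```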

21

Stein Self-Repulsive Dynamics: Benefits From Past Samples

We propose a new Stein self-repulsive dynamics for obtaining diversified samples from intractable un-normalized distributions.

22

The Statistical Complexity of Early-Stopped Mirror Descent

In this paper, we study the statistical guarantees on the excess risk achieved by early-stopped unconstrained mirror descent algorithms applied to the unregularized empirical risk with the squared loss for linear models and kernel methods.

23

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

To address this limitation, we propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge (e.g., only the causal graph).

24

Quantitative Propagation of Chaos for SGD in Wide Neural Networks

In this paper, we investigate the limiting behavior of a continuous-time counterpart of the Stochastic Gradient Descent (SGD) algorithm applied to two-layer overparameterized neural networks, as the number of neurons (i.e., the size of the hidden layer) $N \to +\infty$.

25

A Causal View on Robustness of Neural Networks

We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data.

26

Minimax Classification with 0-1 Loss and Performance Guarantees

This paper presents minimax risk classifiers (MRCs) that do not rely on a choice of surrogate loss and family of rules.

27

How to Learn a Useful Critic? Model-based Action-Gradient-Estimator Policy Optimization

In this paper, we propose MAGE, a model-based actor-critic algorithm, grounded in the theory of policy gradients, which explicitly learns the action-value gradient.

28

Coresets for Regressions with Panel Data

This paper introduces the problem of coresets for regression problems to panel data settings.

29

Learning Composable Energy Surrogates for PDE Order Reduction

To address this, we leverage parametric modular structure to learn component-level surrogates, enabling cheaper high-fidelity simulation.

30

Efficient Contextual Bandits with Continuous Actions

We create a computationally tractable learning algorithm for contextual bandits with continuous actions having unknown structure.

31

Achieving Equalized Odds by Resampling Sensitive Attributes

We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.

32

Multi-Robot Collision Avoidance under Uncertainty with Probabilistic Safety Barrier Certificates

This paper proposes a collision avoidance method that accounts for both measurement uncertainty and motion uncertainty.

33

Hard Shape-Constrained Kernel Machines

In this paper, we prove that hard affine shape constraints on function derivatives can be encoded in kernel machines which represent one of the most flexible and powerful tools in machine learning and statistics.

34

A Closer Look at the Training Strategy for Modern Meta-Learning

The support/query (S/Q) episodic training strategy has been widely used in modern meta-learning algorithms and is believed to improve their generalization ability to test environments. This paper conducts a theoretical investigation of this training strategy on generalization.

35

On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law

We provide short- and long-term solutions to avoid these pitfalls and realize the benefits of OOD evaluation.

36

Generalised Bayesian Filtering via Sequential Monte Carlo

We introduce a framework for inference in general state-space hidden Markov models (HMMs) under likelihood misspecification.

37

Deterministic Approximation for Submodular Maximization over a Matroid in Nearly Linear Time

We study the problem of maximizing a non-monotone, non-negative submodular function subject to a matroid constraint.

38

Flows for simultaneous manifold learning and density estimation

We introduce manifold-learning flows ($\mathcal{M}$-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold.

39

Simultaneous Preference and Metric Learning from Paired Comparisons

In this paper, we consider the problem of learning an ideal point representation of a user’s preferences when the distance metric is an unknown Mahalanobis metric.

40

Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee

In this paper, we train sparse deep neural networks with a fully Bayesian treatment under spike-and-slab priors, and develop a set of computationally efficient variational inference methods via a continuous relaxation of the Bernoulli distribution.

41

Learning Manifold Implicitly via Explicit Heat-Kernel Learning

In this paper, we propose the concept of implicit manifold learning, where manifold information is implicitly obtained by learning the associated heat kernel.

42

Deep Relational Topic Modeling via Graph Poisson Gamma Belief Network

To better utilize the document network, we first propose graph Poisson factor analysis (GPFA) that constructs a probabilistic model for interconnected documents and also provides closed-form Gibbs sampling update equations, moving beyond sophisticated approximate assumptions of existing RTMs.

43

One-bit Supervision for Image Classification

This paper presents one-bit supervision, a novel setting of learning from incomplete annotations, in the scenario of image classification.

44

What is being transferred in transfer learning?

In this paper, we provide new tools and analysis to address these fundamental questions.

45

Submodular Maximization Through Barrier Functions

In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization.

46

Neural Networks with Recurrent Generative Feedback

The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces a generative feedback with latent variables to existing CNN architectures, where consistent predictions are made through alternating MAP inference under a Bayesian framework.

47

Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction

Motivated by this challenge, we introduce a realistic problem of few-shot out-of-graph link prediction, where we not only predict the links between the seen and unseen nodes as in a conventional out-of-knowledge link prediction task but also between the unseen nodes, with only a few edges per node.

48

Exploiting weakly supervised visual patterns to learn from partial annotations

Instead, in this paper, we exploit relationships among images and labels to derive more supervisory signal from the un-annotated labels.

49

Improving Inference for Neural Image Compression

We consider the problem of lossy image compression with deep latent variable models.

50

Neuron Merging: Compensating for Pruned Neurons

In this work, we propose a novel concept of neuron merging applicable to both fully connected layers and convolution layers, which compensates for the information loss due to the pruned neurons/filters.

51

FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence

In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods.

52

Reinforcement Learning with Combinatorial Actions: An Application to Vehicle Routing

We develop a framework for value-function-based deep reinforcement learning with a combinatorial action space, in which the action selection problem is explicitly formulated as a mixed-integer optimization problem.

53

Towards Playing Full MOBA Games with Deep Reinforcement Learning

In this paper, we propose a MOBA AI learning paradigm that methodologically enables playing full MOBA games with deep reinforcement learning.

54

Rankmax: An Adaptive Projection Alternative to the Softmax Function

In this work, we propose a method that adapts this parameter to individual training examples.

55

Online Agnostic Boosting via Regret Minimization

In this work we provide the first agnostic online boosting algorithm; that is, given a weak learner with only marginally-better-than-trivial regret guarantees, our algorithm boosts it to a strong learner with sublinear regret.

56

Causal Intervention for Weakly-Supervised Semantic Segmentation

We present a causal inference framework to improve Weakly-Supervised Semantic Segmentation (WSSS).

57

Belief Propagation Neural Networks

To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameterized operators that operate on factor graphs and generalize Belief Propagation (BP).

58

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality

Our work proves convergence to low robust training loss for polynomial width instead of exponential, under natural assumptions and with ReLU activations.

59

Post-training Iterative Hierarchical Data Augmentation for Deep Networks

In this paper, we propose a new iterative hierarchical data augmentation (IHDA) method to fine-tune trained deep neural networks to improve their generalization performance.

60

Debugging Tests for Model Explanations

We investigate whether post-hoc model explanations are effective for diagnosing model errors (model debugging).

61

Robust compressed sensing using generative models

In this paper we propose an algorithm inspired by the Median-of-Means (MOM).

62

Fairness without Demographics through Adversarially Reweighted Learning

In this work we address this problem by proposing Adversarially Reweighted Learning (ARL).

63

Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model

In this work, we tackle these two problems separately, by explicitly learning latent representations that can accelerate reinforcement learning from images.

64

Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian

In this paper, we present a different approach. Rather than following the gradient, which corresponds to a locally greedy direction, we instead follow the eigenvectors of the Hessian.

65

The route to chaos in routing games: When is price of anarchy too optimistic?

We study MWU using the actual game costs without applying cost normalization to $[0,1]$.
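
For reference, a multiplicative weights update (MWU) step with the raw, unnormalized costs looks like the sketch below; the load-dependent cost function is a made-up stand-in for a routing/congestion game, not the paper's setup.

```python
import numpy as np

def mwu_step(w, costs, eps):
    """Multiplicative weights with raw costs: without normalizing costs
    to [0, 1], large effective step sizes can drive the dynamics into
    non-convergent, even chaotic, behavior."""
    w = w * (1.0 - eps) ** costs
    return w / w.sum()

w = np.array([0.5, 0.5])
for _ in range(20):
    costs = np.array([5.0 * w[0], 3.0 * w[1]])  # hypothetical congestion costs
    w = mwu_step(w, costs, eps=0.4)
print(w)
```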

66

Online Algorithm for Unsupervised Sequential Selection with Contextual Information

In this paper, we study Contextual Unsupervised Sequential Selection (USS), a new variant of the stochastic contextual bandits problem where the loss of an arm cannot be inferred from the observed feedback.

67

Adapting Neural Architectures Between Domains

This paper aims to improve the generalization of neural architectures via domain adaptation.

68

What went wrong and when? Instance-wise feature importance for time-series black-box models

We propose FIT, a framework that evaluates the importance of observations for a multivariate time-series black-box model by quantifying the shift in the predictive distribution over time.

69

Towards Better Generalization of Adaptive Gradient Methods

To close this gap, we propose Stable Adaptive Gradient Descent (SAGD) for nonconvex optimization, which leverages differential privacy to boost the generalization performance of adaptive gradient methods.

70

Learning Guidance Rewards with Trajectory-space Smoothing

This paper is in the same vein — starting with a surrogate RL objective that involves smoothing in the trajectory-space, we arrive at a new algorithm for learning guidance rewards.

71

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization

In this paper, we introduce a simplified and unified method for finite-sum convex optimization, named Variance Reduction via Accelerated Dual Averaging (VRADA).

72

Tree! I am no Tree! I am a low dimensional Hyperbolic Embedding

In this paper, we explore a new method for learning hyperbolic representations by taking a metric-first approach.

73

Deep Structural Causal Models for Tractable Counterfactual Inference

We formulate a general framework for building structural causal models (SCMs) with deep learning components.

74

Convolutional Generation of Textured 3D Meshes

A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN.

75

A Statistical Framework for Low-bitwidth Training of Deep Neural Networks

In this paper, we address this problem by presenting a statistical framework for analyzing FQT algorithms.

76

Better Set Representations For Relational Reasoning

To resolve this limitation, we propose a simple and general network module called Set Refiner Network (SRN).

77

AutoSync: Learning to Synchronize for Data-Parallel Distributed Deep Learning

In this paper, we develop a model- and resource-dependent representation for synchronization, which unifies multiple synchronization aspects ranging from architecture, message partitioning, placement scheme, to communication topology.

78

A Combinatorial Perspective on Transfer Learning

In this work we study how the learning of modular solutions can allow for effective generalization to both unseen and potentially differently distributed data.

79

Hardness of Learning Neural Networks with Natural Weights

We prove negative results in this regard, and show that for depth-$2$ networks, and many "natural" weight distributions such as the normal and the uniform distribution, most networks are hard to learn.

80

Higher-Order Spectral Clustering of Directed Graphs

Based on the Hermitian matrix representation of digraphs, we present a nearly-linear time algorithm for digraph clustering, and further show that our proposed algorithm can be implemented in sublinear time under reasonable assumptions.

81

Primal-Dual Mesh Convolutional Neural Networks

We propose a method that combines the advantages of both types of approaches, while addressing their limitations: we extend a primal-dual framework drawn from the graph-neural-network literature to triangle meshes, and define convolutions on two types of graphs constructed from an input mesh.

82

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning

We address this limitation by conditional meta-learning, inferring a conditioning function that maps a task's side information into a meta-parameter vector appropriate for that task.

83

Watch out! Motion is Blurring the Vision of Your Deep Neural Networks

We propose a novel adversarial attack method that can generate visually natural motion-blurred adversarial examples, named motion-based adversarial blur attack (ABBA).

84

Sinkhorn Barycenter via Functional Gradient Descent

In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence.

85

Coresets for Near-Convex Functions

We suggest a generic framework for computing sensitivities (and thus coresets) for a wide family of loss functions which we call near-convex functions.

86

Bayesian Deep Ensembles via the Neural Tangent Kernel

We introduce a simple modification to standard deep ensembles training, through addition of a computationally-tractable, randomised and untrainable function to each ensemble member, that enables a posterior interpretation in the infinite width limit.

87

Improved Schemes for Episodic Memory-based Lifelong Learning

In this paper, we provide the first unified view of episodic memory-based approaches from an optimization perspective.

88

Adaptive Sampling for Stochastic Risk-Averse Learning

We propose an adaptive sampling algorithm for stochastically optimizing the Conditional Value-at-Risk (CVaR) of a loss distribution, which measures its performance on the $\alpha$ fraction of most difficult examples.
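
The objective is easy to state concretely: the empirical CVaR at level α is the average loss over the α-fraction of hardest examples. A minimal sketch (ignoring the fractional boundary term of the exact estimator):

```python
import numpy as np

def cvar(losses, alpha=0.1):
    """Average loss over (roughly) the alpha-fraction of worst examples."""
    losses = np.sort(losses)[::-1]                # hardest first
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

losses = np.random.default_rng(0).exponential(size=1000)
print(cvar(losses, alpha=0.05))  # mean over the worst 5% of losses
```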

89

Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring

We present a simple and effective approach for non-blind image deblurring, combining classical techniques and deep learning.

90

Discovering Reinforcement Learning Algorithms

This paper introduces a new meta-learning approach that discovers an entire update rule, which includes both 'what to predict' (e.g. value functions) and 'how to learn from it' (e.g. bootstrapping), by interacting with a set of environments.

91

Taming Discrete Integration via the Boon of Dimensionality

The key contribution of this work addresses this scalability challenge via an efficient reduction of discrete integration to model counting.

92

Blind Video Temporal Consistency via Deep Video Prior

To address this issue, we present a novel and general approach for blind video temporal consistency.

93

Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering

In this paper, we first provide a novel understanding of negative instances by empirically observing that only a few instances are potentially important for model learning, and false negatives tend to have stable predictions over many training iterations.

94

Model Selection for Production System via Automated Online Experiments

We propose an automated online experimentation mechanism that can efficiently perform model selection from a large pool of models with a small number of online experiments.

95

On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems

In this paper, we analyze the trajectories of stochastic gradient descent (SGD) with the aim of understanding their convergence properties in non-convex problems.

96

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

In this paper, we develop an automatic framework to enable perturbation analysis on any neural network structures, by generalizing existing LiRPA algorithms such as CROWN to operate on general computational graphs.

97

Adaptation Properties Allow Identification of Optimized Neural Codes

Here we solve an inverse problem: characterizing the objective and constraint functions that efficient codes appear to be optimal for, on the basis of how they adapt to different stimulus distributions.

98

Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems

In this work, we show that for a subclass of nonconvex-nonconcave objectives satisfying a so-called two-sided Polyak-Łojasiewicz inequality, the alternating gradient descent ascent (AGDA) algorithm converges globally at a linear rate and the stochastic AGDA achieves a sublinear rate.

99

Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity

In this paper, we aim to address the fundamental open question about the sample complexity of model-based MARL.

100

Conservative Q-Learning for Offline Reinforcement Learning

In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value.

101

Online Influence Maximization under Linear Threshold Model

In this paper, we address OIM in the linear threshold (LT) model.

102

Ensembling geophysical models with Bayesian Neural Networks

We develop a novel data-driven ensembling strategy for combining geophysical models using Bayesian Neural Networks, which infers spatiotemporally varying model weights and bias while accounting for heteroscedastic uncertainties in the observations.

103

Delving into the Cyclic Mechanism in Semi-supervised Video Object Segmentation

In this paper, we attempt to incorporate the cyclic mechanism into the vision task of semi-supervised video object segmentation.

104

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

We introduce a less restrictive framework, Asymmetric Shapley values (ASVs), which are rigorously founded on a set of axioms, applicable to any AI system, and can flexibly incorporate any causal structure known to be respected by the data.

105

Understanding Deep Architecture with Reasoning Layer

In this paper, we take an initial step toward an understanding of such hybrid deep architectures by showing that properties of the algorithm layers, such as convergence, stability and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model.

106

Planning in Markov Decision Processes with Gap-Dependent Sample Complexity

We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process in which transitions have a finite support.

107

Provably Good Batch Off-Policy Reinforcement Learning Without Great Exploration

We show that using pessimistic value estimates in the low-data regions in Bellman optimality and evaluation back-up can yield more adaptive and stronger guarantees when the concentrability assumption does not hold.

108

Detection as Regression: Certified Object Detection with Median Smoothing

This work is motivated by recent progress on certified classification by randomized smoothing. We start by presenting a reduction from object detection to a regression problem.

109

Contextual Reserve Price Optimization in Auctions via Mixed Integer Programming

We study the problem of learning a linear model to set the reserve price in an auction, given contextual information, in order to maximize expected revenue from the seller side.

110

ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks

We introduce an approach to training a given compact network.

111

FleXOR: Trainable Fractional Quantization

In this paper, we propose an encryption algorithm/architecture to compress quantized weights so as to achieve fractional numbers of bits per weight.

112

The Implications of Local Correlation on Learning Some Deep Functions

We introduce a property of distributions, denoted “local correlation”, which requires that small patches of the input image and of intermediate layers of the target function are correlated to the target label.

113

Learning to search efficiently for causally near-optimal treatments

We formalize this problem as learning a policy for finding a near-optimal treatment in a minimum number of trials using a causal inference framework.

114

A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses

In this paper, we propose a game-theoretic framework for studying attacks and defenses which exist in equilibrium.

115

Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts

In this work we propose the Posterior Network (PostNet), which uses Normalizing Flows to predict an individual closed-form posterior distribution over predicted probabilities for any input sample.

116

Recurrent Quantum Neural Networks

In this work we construct the first quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks such as sequence learning and integer digit classification.

117

No-Regret Learning and Mixed Nash Equilibria: They Do Not Mix

In this paper, we study the dynamics of follow the regularized leader (FTRL), arguably the most well-studied class of no-regret dynamics, and we establish a sweeping negative result showing that the notion of mixed Nash equilibrium is antithetical to no-regret learning.

118

A Unifying View of Optimism in Episodic Reinforcement Learning

In this paper we provide a general framework for designing, analyzing and implementing such algorithms in the episodic reinforcement learning problem.

119

Continuous Submodular Maximization: Beyond DR-Submodularity

In this paper, we propose the first continuous optimization algorithms that achieve a constant factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint.

120

An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits

In this paper, we follow recent approaches of deriving asymptotically optimal algorithms from problem-dependent regret lower bounds and we introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.

121

Assessing SATNet's Ability to Solve the Symbol Grounding Problem

In this paper, we clarify SATNet’s capabilities by showing that in the absence of intermediate labels that identify individual Sudoku digit images with their logical representations, SATNet completely fails at visual Sudoku (0% test accuracy).

122

A Bayesian Nonparametrics View into Deep Representations

We investigate neural network representations from a probabilistic perspective.

123

On the Similarity between the Laplace and Neural Tangent Kernels

Here we show that NTK for fully connected networks with ReLU activation is closely related to the standard Laplace kernel.

124

A causal view of compositional zero-shot recognition

Here we describe an approach for compositional generalization that builds on causal ideas.

125

HiPPO: Recurrent Memory with Optimal Polynomial Projections

We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases.

126

Auto Learning Attention

In this paper, we devise an Auto Learning Attention (AutoLA) method, which is the first attempt at automatic attention design.

127

CASTLE: Regularization via Auxiliary Causal Graph Discovery

We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.

128

Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect

In this paper, we establish a causal inference framework, which not only unravels the whys of previous methods, but also derives a new principled solution.

129

Explainable Voting

We prove, however, that outcomes of the important Borda rule can be explained using O(m^2) steps, where m is the number of alternatives.
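
For context, the Borda rule itself is straightforward to compute, which is part of why short explanations are plausible: with m alternatives, a voter's i-th ranked alternative receives m−1−i points, and the highest total score wins. A small sketch:

```python
def borda_winner(rankings):
    """Borda rule: with m alternatives, a voter's i-th choice gets m - 1 - i
    points; the alternative with the highest total score wins."""
    m = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, alt in enumerate(ranking):
            scores[alt] = scores.get(alt, 0) + (m - 1 - position)
    return max(scores, key=scores.get), scores

rankings = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(borda_winner(rankings))  # ('a', {'a': 5, 'b': 3, 'c': 1})
```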

130

Deep Archimedean Copulas

In this paper, we introduce ACNet, a novel differentiable neural network architecture that enforces structural properties and enables one to learn an important class of copulas: Archimedean copulas.

131

Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization

In this paper, we identify several crucial issues and misconceptions about the use of linear embeddings for BO.

132

UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging

In this paper, we reformulate the modulo image unwrapping problem into a series of binary labeling problems and propose a modulo edge-aware model, named UnModNet, to iteratively estimate the binary rollover masks of the modulo image for unwrapping.

133

Thunder: a Fast Coordinate Selection Solver for Sparse Learning

In this paper, we propose a novel active incremental approach to further improve the efficiency of the solvers.

134

Neural Networks Fail to Learn Periodic Functions and How to Fix It

As a fix for this problem, we propose a new activation, namely $x + \sin^2(x)$, which achieves the desired periodic inductive bias to learn a periodic function while maintaining the favorable optimization properties of ReLU-based activations.
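
The proposed activation is one line of code; the paper refers to it as Snake. A minimal sketch (the derivative bound in the comment is elementary calculus; learnable-frequency variants studied in the paper are omitted here):

```python
import numpy as np

def snake(x):
    """x + sin^2(x): oscillates around the identity line; its derivative
    1 + sin(2x) stays in [0, 2], so the function remains monotone like
    ReLU while carrying a periodic inductive bias."""
    return x + np.sin(x) ** 2

print(snake(np.linspace(-10, 10, 5)))
```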

135

Distribution Matching for Crowd Counting

In this paper, we show that imposing Gaussians on annotations hurts generalization performance.

136

Correspondence learning via linearly-invariant embedding

In this paper, we propose a fully differentiable pipeline for estimating accurate dense correspondences between 3D point clouds.

137

Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning

In this paper, we propose to automatically learn PDRs via an end-to-end deep reinforcement learning agent.

138

On Adaptive Attacks to Adversarial Example Defenses

While prior evaluation papers focused mainly on the end result—showing that a defense was ineffective—this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack.

139

Sinkhorn Natural Gradient for Generative Models

In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.

140

Online Sinkhorn: Optimal Transport distances from sample streams

This paper introduces a new online estimator of entropy-regularized OT distances between two such arbitrary distributions.

141

Ultrahyperbolic Representation Learning

In this paper, we propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature.

142

Locally-Adaptive Nonparametric Online Learning

We fill this gap by introducing efficient online algorithms (based on a single versatile master algorithm) each adapting to one of the following regularities: (i) local Lipschitzness of the competitor function, (ii) local metric dimension of the instance sequence, (iii) local performance of the predictor across different regions of the instance space.

143

Compositional Generalization via Neural-Symbolic Stack Machines

To tackle this issue, we propose the Neural-Symbolic Stack Machine (NeSS).

144

Graphon Neural Networks and the Transferability of Graph Neural Networks

In this paper we introduce graphon NNs as limit objects of GNNs and prove a bound on the difference between the output of a GNN and its limit graphon-NN.

145

Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms

We study the structure of regret-minimizing policies in the many-armed Bayesian multi-armed bandit problem: in particular, with $k$ the number of arms and $T$ the time horizon, we consider the case where $k \geq \sqrt{T}$.
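
One policy central to this regime can be sketched as follows: subsample on the order of √T of the k arms and then act greedily on empirical means. The subsample size, single initial pull per arm, and Gaussian rewards below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def subsampled_greedy(true_means, T, rng):
    """Subsample ~sqrt(T) arms, pull each once, then always pull the arm
    with the best empirical mean."""
    k = len(true_means)
    m = min(k, int(np.sqrt(T)))
    arms = rng.choice(k, size=m, replace=False)
    sums = rng.normal(true_means[arms])   # one initial pull per subsampled arm
    counts = np.ones(m)
    for _ in range(T - m):
        i = np.argmax(sums / counts)
        sums[i] += rng.normal(true_means[arms[i]])
        counts[i] += 1
    return true_means[arms[np.argmax(sums / counts)]]

rng = np.random.default_rng(0)
true_means = rng.uniform(size=1000)                      # many arms
print(subsampled_greedy(true_means, T=10_000, rng=rng))  # close to 1.0
```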

146

Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction

We introduce the gamma-model, a predictive model of environment dynamics with an infinite, probabilistic horizon.

147

Deep Transformers with Latent Depth

We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection.

148

Neural Mesh Flow: 3D Manifold Mesh Generation via Diffeomorphic Flows

In this work, we propose NeuralMeshFlow (NMF) to generate two-manifold meshes for genus-0 shapes.

149

Statistical control for spatio-temporal MEG/EEG source imaging with desparsified multi-task Lasso

To deal with this, we adapt the desparsified Lasso estimator —an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions— to temporal data corrupted with autocorrelated noise.

150

A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees

In this paper, we propose a novel MIP formulation, based on the 1-norm support vector machine model, to train a binary oblique ODT for classification problems.

151

Efficient Exact Verification of Binarized Neural Networks

We present a new system, EEV, for efficient and exact verification of BNNs.

152

Ultra-Low Precision 4-bit Training of Deep Neural Networks

In this paper, we propose a number of novel techniques and numerical representation formats that enable, for the very first time, the precision of training systems to be aggressively scaled from 8-bits to 4-bits.

153

Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS

In this work, we propose BONAS (Bayesian Optimized Neural Architecture Search), a sample-based NAS framework which is accelerated using weight-sharing to evaluate multiple related architectures simultaneously.

154

On Numerosity of Deep Neural Networks

Recently, a provocative claim was published that number sense spontaneously emerges in a deep neural network trained merely for visual object recognition. This, if true, has far-reaching significance for the fields of machine learning and cognitive science alike. In this paper, we prove the above claim to be unfortunately incorrect.

155

Outlier Robust Mean Estimation with Subgaussian Rates via Stability

We study the problem of outlier robust high-dimensional mean estimation under a bounded covariance assumption, and more broadly under bounded low-degree moment assumptions.

156

Self-Supervised Relationship Probing

In this work, we introduce a self-supervised method that implicitly learns the visual relationships without relying on any ground-truth visual relationship annotations.

157

Information Theoretic Counterfactual Learning from Missing-Not-At-Random Feedback

To circumvent the use of RCTs, we build an information theoretic counterfactual variational information bottleneck (CVIB), as an alternative for debiasing learning without RCTs.

158

Prophet Attention: Predicting Attention with Future Attention

In this paper, we propose Prophet Attention, which takes a form similar to self-supervision.

159

Language Models are Few-Shot Learners

Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.

160

Margins are Insufficient for Explaining Gradient Boosting

In this work, we first demonstrate that the k'th margin bound is inadequate in explaining the performance of state-of-the-art gradient boosters. We then explain the shortcomings of the k'th margin bound and prove a stronger and more refined margin-based generalization bound that indeed succeeds in explaining the performance of modern gradient boosters.

161

Fourier-transform-based attribution priors improve the interpretability and stability of deep learning models for genomics

To address these shortcomings, we propose a novel attribution prior, where the Fourier transform of input-level attribution scores is computed at training time, and high-frequency components of the Fourier spectrum are penalized.

162

MomentumRNN: Integrating Momentum into Recurrent Neural Networks

We theoretically prove and numerically demonstrate that MomentumRNNs alleviate the vanishing gradient issue in training RNNs.
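
A rough sketch of the idea, under our reading: keep a momentum buffer on the input-driven part of the recurrent update, analogous to momentum in gradient descent. The exact cell equations in the paper may place momentum differently; the tanh cell and scaling below are assumptions.

```python
import numpy as np

def momentum_rnn_step(h, v, x, U, W, mu=0.9, s=1.0):
    """One recurrent step with a momentum buffer v on the input drive."""
    v = mu * v + s * (W @ x)   # momentum accumulation
    h = np.tanh(U @ h + v)     # state update uses the buffer
    return h, v

d_h, d_x = 8, 4
rng = np.random.default_rng(0)
U = 0.1 * rng.normal(size=(d_h, d_h))
W = 0.1 * rng.normal(size=(d_h, d_x))
h, v = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(10, d_x)):  # a length-10 input sequence
    h, v = momentum_rnn_step(h, v, x, U, W)
print(h)
```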

163

Marginal Utility for Planning in Continuous or Large Discrete Action Spaces

In this paper we explore explicitly learning a candidate action generator by optimizing a novel objective, marginal utility.

164

Projected Stein Variational Gradient Descent

In this work, we propose a projected Stein variational gradient descent (pSVGD) method to overcome this challenge by exploiting the fundamental property of intrinsic low dimensionality of the data-informed subspace stemming from the ill-posedness of such problems.

165

Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks

In this paper we develop a statistical minimax framework to characterize the fundamental limits of transfer learning in the context of regression with linear and one-hidden layer neural network models.

166

SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks

We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point-clouds, which is equivariant under continuous 3D roto-translations.

167

On the equivalence of molecular graph convolution and molecular wave function with poor basis set

In this study, we demonstrate that the linear combination of atomic orbitals (LCAO), an approximation introduced by Pauling and Lennard-Jones in the 1920s, corresponds to graph convolutional networks (GCNs) for molecules.

168

The Power of Predictions in Online Control

We study the impact of predictions in online Linear Quadratic Regulator control with both stochastic and adversarial disturbances in the dynamics.

169

Learning Affordance Landscapes for Interaction Exploration in 3D Environments

We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen).

170

Cooperative Multi-player Bandit Optimization

We design a distributed learning algorithm that overcomes the informational bias players have towards maximizing the rewards of nearby players about whom they have more information.

171

Tight First- and Second-Order Regret Bounds for Adversarial Linear Bandits

We propose novel algorithms with first- and second-order regret bounds for adversarial linear bandits.

172

Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout

We present Gradient Sign Dropout (GradDrop), a probabilistic masking procedure which samples gradients at an activation layer based on their level of consistency.

173

A Loss Function for Generative Neural Networks Based on Watson's Perceptual Model

We propose such a loss function based on Watson’s perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking.

174

Dynamic Fusion of Eye Movement Data and Verbal Narrations in Knowledge-rich Domains

We propose to jointly analyze experts’ eye movements and verbal narrations to discover important and interpretable knowledge patterns to better understand their decision-making processes.

175

Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward

In this paper, we identify a rich class of networked MARL problems where the model exhibits a local dependence structure that allows it to be solved in a scalable manner.

176

Optimizing Neural Networks via Koopman Operator Theory

Koopman operator theory, a powerful framework for discovering the underlying dynamics of nonlinear dynamical systems, was recently shown to be intimately connected with neural network training. In this work, we take the first steps in making use of this connection.

177

SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence

We introduce a new perspective on SVGD that instead views SVGD as the kernelized gradient flow of the chi-squared divergence.

178

Adversarial Robustness of Supervised Sparse Coding

In this work, we strike a better balance by considering a model that involves learning a representation while at the same time giving a precise generalization bound and a robustness certificate.

179

Differentiable Meta-Learning of Bandit Policies

In this work, we learn such policies for an unknown distribution P using samples from P.

180

Biologically Inspired Mechanisms for Adversarial Robustness

In this work, we investigate the role of two biologically plausible mechanisms in adversarial robustness.

181

Statistical-Query Lower Bounds via Functional Gradients

For the specific problem of ReLU regression (equivalently, agnostically learning a ReLU), we show that any statistical-query algorithm with tolerance $n^{-(1/\epsilon)^b}$ must use at least $2^{n^c} \epsilon$ queries for some constants $b, c > 0$, where $n$ is the dimension and $\epsilon$ is the accuracy parameter.

182

Near-Optimal Reinforcement Learning with Self-Play

This paper closes this gap for the first time: we propose an optimistic variant of the Nash Q-learning algorithm with sample complexity $\tilde{O}(SAB)$, and a new Nash V-learning algorithm with sample complexity $\tilde{O}(S(A+B))$.

183

Network Diffusions via Neural Mean-Field Dynamics

We propose a novel learning framework based on neural mean-field dynamics for inference and estimation problems of diffusion on networks.

184

Self-Distillation as Instance-Specific Label Smoothing

With this in mind, we offer a new interpretation for teacher-student training as amortized MAP estimation, such that teacher predictions enable instance-specific regularization.

185

Towards Problem-dependent Optimal Learning Rates

In this paper we propose a new framework based on a "uniform localized convergence" principle.

186

Cross-lingual Retrieval for Iterative Self-Supervised Training

In this work, we found that the cross-lingual alignment can be further improved by training seq2seq models on sentence pairs mined using their own encoder outputs.

187

Rethinking pooling in graph neural networks

In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph.

188

Pointer Graph Networks

Here we introduce Pointer Graph Networks (PGNs) which augment sets or graphs with additional inferred edges for improved model generalisation ability.

189

Gradient Regularized V-Learning for Dynamic Treatment Regimes

In this paper, we introduce Gradient Regularized V-learning (GRV), a novel method for estimating the value function of a DTR.

190

Faster Wasserstein Distance Estimation with the Sinkhorn Divergence

In this work, we propose instead to estimate it with the Sinkhorn divergence, which is also built on entropic regularization but includes debiasing terms.

191

Forethought and Hindsight in Credit Assignment

We address the problem of credit assignment in reinforcement learning and explore fundamental questions regarding the way in which an agent can best use additional computation to propagate new information, by planning with internal models of the world to improve its predictions.

192

Robust Recursive Partitioning for Heterogeneous Treatment Effects with Uncertainty Quantification

This paper develops a new method for subgroup analysis, R2P, that addresses all these weaknesses.

193

Rescuing neural spike train models from bad MLE

To alleviate this, we propose to directly minimize the divergence between neural recorded and model generated spike trains using spike train kernels.

194

Lower Bounds and Optimal Algorithms for Personalized Federated Learning

In this work, we consider the optimization formulation of personalized federated learning recently introduced by Hanzely & Richtarik (2020) which was shown to give an alternative explanation to the workings of local SGD methods.

195

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework

We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective.

196

Deep Imitation Learning for Bimanual Robotic Manipulation

We present a deep imitation learning framework for robotic bimanual manipulation in a continuous state-action space.

197

Stationary Activations for Uncertainty Calibration in Deep Learning

We introduce a new family of non-linear neural network activation functions that mimic the properties induced by the widely-used Matérn family of kernels in Gaussian process (GP) models.

198

Ensemble Distillation for Robust Model Fusion in Federated Learning

In this work we investigate more powerful and more flexible aggregation schemes for FL.

199

Falcon: Fast Spectral Inference on Encrypted Data

In this paper, we propose a fast, frequency-domain deep neural network called Falcon, for fast inferences on encrypted data.

200

On Power Laws in Deep Ensembles

In this work, we focus on a classification problem and investigate the behavior of both non-calibrated and calibrated negative log-likelihood (CNLL) of a deep ensemble as a function of the ensemble size and the member network size.

201

Practical Quasi-Newton Methods for Training Deep Neural Networks

We consider the development of practical stochastic quasi-Newton, and in particular Kronecker-factored block diagonal BFGS and L-BFGS methods, for training deep neural networks (DNNs).

202

Approximation Based Variance Reduction for Reparameterization Gradients

In this work we present a control variate that is applicable for any reparameterizable distribution with known mean and covariance, e.g. Gaussians with any covariance structure.

203

Inference Stage Optimization for Cross-scenario 3D Human Pose Estimation

In this work, we propose a novel framework, Inference Stage Optimization (ISO), for improving the generalizability of 3D pose models when source and target data come from different pose distributions.

204

Consistent feature selection for analytic deep neural networks

In this work, we investigate the problem of feature selection for analytic deep networks.

205

Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification

Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs efficient image classification by processing a sequence of relatively small inputs, which are strategically selected from the original image with reinforcement learning.

206

Information Maximization for Few-Shot Learning

We introduce Transductive Information Maximization (TIM) for few-shot learning.

207

Inverse Reinforcement Learning from a Gradient-based Learner

In this paper, we propose a new algorithm for this setting, in which the goal is to recover the reward function being optimized by an agent, given a sequence of policies produced during learning.

208

Bayesian Multi-type Mean Field Multi-agent Imitation Learning

In this paper, we propose Bayesian multi-type mean field multi-agent imitation learning (BM3IL).

209

Bayesian Robust Optimization for Imitation Learning

To provide a bridge between these two extremes, we propose Bayesian Robust Optimization for Imitation Learning (BROIL).

210

Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance

In this work we address the challenging problem of multiview 3D surface reconstruction.

211

Riemannian Continuous Normalizing Flows

To overcome this problem, we introduce Riemannian continuous normalizing flows, a model which admits the parametrization of flexible probability measures on smooth manifolds by defining flows as the solution to ordinary differential equations.

212

Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation

We demonstrate a biologically plausible reinforcement learning scheme for deep networks with an arbitrary number of layers.

213

Asymptotic Guarantees for Generative Modeling Based on the Smooth Wasserstein Distance

In this work, we conduct a thorough statistical study of the minimum smooth Wasserstein estimators (MSWEs), first proving the estimator’s measurability and asymptotic consistency.

214

Online Robust Regression via SGD on the l1 loss

In contrast, we show in this work that stochastic gradient descent on the l1 loss converges to the true parameter vector at a $\tilde{O}(1 / ((1 - \eta)^2 n))$ rate which is independent of the values of the contaminated measurements.
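
The estimator itself is just the subgradient step on the absolute loss, whose size is bounded no matter how large a corrupted response is; that is the intuition behind the contamination-independent rate. A sketch under illustrative assumptions (Gaussian covariates, a fixed η-fraction of corrupted labels, constant step size):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

def stream(n, eta=0.1):
    """i.i.d. (x, y) pairs; an eta-fraction of responses is corrupted."""
    for _ in range(n):
        x = rng.normal(size=2)
        y = x @ w_true + 0.1 * rng.normal()
        if rng.random() < eta:
            y = 50.0 * rng.normal()  # arbitrary gross corruption
        yield x, y

def sgd_l1(pairs, w0, lr):
    """SGD on |<x, w> - y|: the step lr * sign(r) * x is bounded,
    so outliers cannot drag the iterate far."""
    w = w0.copy()
    for x, y in pairs:
        w -= lr * np.sign(x @ w - y) * x
    return w

print(sgd_l1(stream(20_000), np.zeros(2), lr=0.01))  # approx. w_true
```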

215

PRANK: motion Prediction based on RANKing

In this paper, we introduce the PRANK method, which satisfies these requirements.

216

Fighting Copycat Agents in Behavioral Cloning from Observation Histories

To combat this "copycat problem", we propose an adversarial approach to learn a feature representation that removes excess information about the previous expert action nuisance correlate, while retaining the information necessary to predict the next action.

217

Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model

We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-square risk under this model.

218

Structured Prediction for Conditional Meta-Learning

In this work, we propose a new perspective on conditional meta-learning via structured prediction.

219

Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient

In this work, we close the gap and offer an exponential improvement to the over-parameterization requirement for the existence of lottery tickets.

220

The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes

This work proposes a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes.

221

Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function

This article suggests that deterministic Gradient Descent, which does not use any stochastic gradient approximation, can still exhibit stochastic behaviors.

222

Identifying Learning Rules From Neural Network Observables

It is an open question as to what specific experimental measurements would need to be made to determine whether any given learning rule is operative in a real biological system. In this work, we take a "virtual experimental" approach to this problem.

223

Optimal Approximation – Smoothness Tradeoffs for Soft-Max Functions

Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness.

224

Weakly-Supervised Reinforcement Learning for Controllable Behavior

In this work, we introduce a framework for using weak supervision to automatically disentangle this semantically meaningful subspace of tasks from the enormous space of nonsensical "chaff" tasks.

225

Improving Policy-Constrained Kidney Exchange via Pre-Screening

We propose both a greedy heuristic and a Monte Carlo tree search, which outperforms previous approaches, using experiments on both synthetic data and real kidney exchange data from the United Network for Organ Sharing.

226

Learning abstract structure for drawing by efficient motor program induction

We show that people spontaneously learn abstract drawing procedures that support generalization, and propose a model of how learners can discover these reusable drawing procedures.

227

Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? — A Neural Tangent Kernel Perspective

This paper studies this fundamental problem in deep learning from a so-called "neural tangent kernel" perspective.

228

Dual Instrumental Variable Regression

We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation.

229

Stochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes

In this paper, we focus on the Gaussian process (GP) and take a step forward towards breaking the barrier by proving minibatch SGD converges to a critical point of the full loss function, and recovers model hyperparameters with rate $O(\frac{1}{K})$ up to a statistical error term depending on the minibatch size.

230

Interventional Few-Shot Learning

Building on this, we propose a novel FSL paradigm: Interventional Few-Shot Learning (IFSL).

231

Minimax Value Interval for Off-Policy Evaluation and Policy Optimization

We study minimax methods for off-policy evaluation (OPE) using value functions and marginalized importance weights.

232

Biased Stochastic First-Order Methods for Conditional Stochastic Optimization and Applications in Meta Learning

For this special setting, we propose an accelerated algorithm called biased SpiderBoost (BSpiderBoost) that matches the lower bound complexity.

233

ShiftAddNet: A Hardware-Inspired Deep Network

This paper presents ShiftAddNet, whose main inspiration is drawn from a common practice in energy-efficient hardware implementation: multiplication can instead be performed with additions and logical bit-shifts.
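
As a hedged illustration of that hardware practice (not the paper's network code), the sketch below multiplies two integers using only bit-shifts and additions; `shift_add_multiply` is a name introduced here for illustration.

```python
# Illustrative sketch of shift-add arithmetic (not the authors' code):
# any integer multiplication decomposes into bit-shifts and additions.

def shift_add_multiply(x: int, k: int) -> int:
    """Compute x * k (k >= 0) using only shifts and adds."""
    result, bit = 0, 0
    while k:
        if k & 1:                # this power of two is present in k
            result += x << bit   # add the correspondingly shifted x
        k >>= 1
        bit += 1
    return result

assert shift_add_multiply(7, 10) == 70   # 7*10 = (7 << 3) + (7 << 1)
```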

234

Network-to-Network Translation with Conditional Invertible Neural Networks

Therefore, we seek a model that can relate between different existing representations and propose to solve this task with a conditionally invertible network.

235

Intra-Processing Methods for Debiasing Neural Networks

In this work, we initiate the study of a new paradigm in debiasing research, intra-processing, which sits between in-processing and post-processing methods.

236

Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems

This paper proposes two efficient algorithms for computing approximate second-order stationary points (SOSPs) of problems with generic smooth non-convex objective functions and generic linear constraints.

237

Model-based Policy Optimization with Unsupervised Model Adaptation

In this paper, we investigate how to bridge the gap between real and simulated data due to inaccurate model estimation for better policy optimization.

238

Implicit Regularization and Convergence for Weight Normalization

Here, we study the weight normalization (WN) method (Salimans & Kingma, 2016) and a variant called reparametrized projected gradient descent (rPGD) for overparametrized least squares regression and some more general loss functions.

239

Geometric All-way Boolean Tensor Decomposition

In this work, we present a computationally efficient BTD algorithm, namely Geometric Expansion for all-order Tensor Factorization (GETF), that sequentially identifies the rank-1 basis components for a tensor from a geometric perspective.

240

Modular Meta-Learning with Shrinkage

Here, we propose a meta-learning approach that obviates the need for this often sub-optimal hand-selection.

241

A/B Testing in Dense Large-Scale Networks: Design and Inference

In this paper, we present a novel strategy for accurately estimating the causal effects of a class of treatments in a dense large-scale network.

242

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation

In this work we design experiments to test the key ideas in this theory.

243

Partially View-aligned Clustering

In this paper, we study one challenging issue in multi-view data clustering.

244

Partial Optimal Transport with applications on Positive-Unlabeled Learning

In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them.

245

Toward the Fundamental Limits of Imitation Learning

In this paper, we focus on understanding the minimax statistical limits of IL in episodic Markov Decision Processes (MDPs).

246

Logarithmic Pruning is All You Need

In this work, we remove the most limiting assumptions of this previous work while providing significantly tighter bounds: the overparameterized network only needs a logarithmic factor (in all variables but depth) number of neurons per weight of the target subnetwork.

247

Hold me tight! Influence of discriminative features on deep network boundaries

In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary.

248

Learning from Mixtures of Private and Public Populations

Inspired by the above example, we consider a model in which the population $\mathcal{D}$ is a mixture of two possibly distinct sub-populations: a private sub-population $\mathcal{D}_{\mathrm{priv}}$ of private and sensitive data, and a public sub-population $\mathcal{D}_{\mathrm{pub}}$ of data with no privacy concerns.

249

Adversarial Weight Perturbation Helps Robust Generalization

In this paper, we investigate the weight loss landscape from a new perspective, and identify a clear correlation between the flatness of weight loss landscape and robust generalization gap.

250

Stateful Posted Pricing with Vanishing Regret via Dynamic Deterministic Markov Decision Processes

In this paper, a rather general online problem called dynamic resource allocation with capacity constraints (DRACC) is introduced and studied in the realm of posted price mechanisms.

251

Adversarial Self-Supervised Contrastive Learning

In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.

252

Normalizing Kalman Filters for Multivariate Time Series Analysis

To this extent, we present a novel approach reconciling classical state space models with deep learning methods.

253

Learning to summarize with human feedback

In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences.

254

Fourier Spectrum Discrepancies in Deep Network Generated Images

In this paper, we present an analysis of the high-frequency Fourier modes of real and deep network generated images and show that deep network generated images share an observable, systematic shortcoming in replicating the attributes of these high-frequency modes.
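
A minimal sketch of the kind of measurement involved (our illustration, with an assumed cutoff and function name, not the paper's pipeline): computing the fraction of an image's spectral energy that lies in high-frequency Fourier modes.

```python
# Sketch: fraction of spectral energy in high-frequency Fourier modes.
import numpy as np

def high_freq_energy(img: np.ndarray, cutoff: float = 0.75) -> float:
    """Share of energy at radii above `cutoff` times the max radius."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius > cutoff * radius.max()
    return float(spec[mask].sum() / spec.sum())

img = np.random.default_rng(0).normal(size=(64, 64))
print(high_freq_energy(img))
```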

255

Lamina-specific neuronal properties promote robust, stable signal propagation in feedforward networks

Specifically, we found that signal transformations, made by each layer of neurons on an input-driven spike signal, demodulate signal distortions introduced by preceding layers.

256

Learning Dynamic Belief Graphs to Generalize on Text-Based Games

In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.

257

Triple descent and the two kinds of overfitting: where & why do they appear?

In this paper, we show that despite their apparent similarity, these two scenarios are inherently different.

258

Multimodal Graph Networks for Compositional Generalization in Visual Question Answering

In this paper, we propose to tackle this challenge by employing neural factor graphs to induce a tighter coupling between concepts in different modalities (e.g. images and text).

259

Learning Graph Structure With A Finite-State Automaton Layer

In this work, we study the problem of learning to derive abstract relations from the intrinsic graph structure.

260

A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions

This paper studies the universal approximation property of deep neural networks for representing probability distributions.

261

Unsupervised object-centric video generation and decomposition in 3D

We instead propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background.

262

Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization

In this paper, we introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.

263

Multi-label classification: do Hamming loss and subset accuracy really conflict with each other?

This paper attempts to fill this gap by analyzing the learning guarantees of the corresponding learning algorithms on both the SA and HL measures.

264

A Novel Automated Curriculum Strategy to Solve Hard Sokoban Planning Instances

We present a novel automated curriculum approach that dynamically selects from a pool of unlabeled training instances of varying task complexity, guided by our difficulty quantum momentum strategy.

265

Causal analysis of Covid-19 Spread in Germany

In this work, we study the causal relations among German regions in terms of the spread of Covid-19 since the beginning of the pandemic, taking into account the restriction policies that were applied by the different federal states.

266

Locally private non-asymptotic testing of discrete distributions is faster using interactive mechanisms

We find separation rates for testing multinomial or more general discrete distributions under the constraint of alpha-local differential privacy.

267

Adaptive Gradient Quantization for Data-Parallel SGD

We empirically observe that the statistics of gradients of deep models change during the training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ.
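
The sketch below is our own illustration of the general idea, not ALQ or AMQ themselves: the quantization levels are re-derived from the gradient's current statistics rather than fixed in advance. The level-selection rule and `num_levels` are assumptions for this sketch.

```python
# Hedged sketch (not ALQ/AMQ): quantize a gradient onto levels that
# adapt to its current scale, re-estimated as training proceeds.
import numpy as np

def adaptive_quantize(grad: np.ndarray, num_levels: int = 4) -> np.ndarray:
    scale = np.abs(grad).max() + 1e-12            # current gradient statistic
    levels = np.linspace(-scale, scale, num_levels)
    idx = np.abs(grad[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]                            # nearest adaptive level

g = np.random.default_rng(0).normal(size=8)
print(adaptive_quantize(g))
```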

268

Finite Continuum-Armed Bandits

Focusing on a nonparametric setting, where the mean reward is an unknown function of a one-dimensional covariate, we propose an optimal strategy for this problem.

269

Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies

To alleviate this shortcoming, we propose a novel regularization term based on the functional entropy.

270

Compact task representations as a normative model for higher-order brain activity

More specifically, we focus on MDPs whose state is based on action-observation histories, and we show how to compress the state space such that unnecessary redundancy is eliminated, while task-relevant information is preserved.

271

Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs

We consider the problem of robust and adaptive model predictive control (MPC) of a linear system, with unknown parameters that are learned along the way (adaptive), in a critical setting where failures must be prevented (robust).

272

Co-exposure Maximization in Online Social Networks

In this paper, we study the problem of allocating seed users to opposing campaigns: by drawing on the equal-time rule of political campaigning on traditional media, our goal is to allocate seed users to campaigners with the aim to maximize the expected number of users who are co-exposed to both campaigns.

273

UCLID-Net: Single View Reconstruction in Object Space

In this paper, we show that building a geometry preserving 3-dimensional latent space helps the network concurrently learn global shape regularities and local reasoning in the object coordinate space and, as a result, boosts performance.

274

Reinforcement Learning for Control with Multiple Frequencies

In this paper, we formalize the problem of multiple control frequencies in RL and provide an efficient solution method.

275

Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval

Here we focus on gradient flow dynamics for phase retrieval from random measurements.

276

Neural Message Passing for Multi-Relational Ordered and Recursive Hypergraphs

In this work, we first unify existing MPNNs on different structures into the G-MPNN (Generalised MPNN) framework.

277

A Unified View of Label Shift Estimation

In this paper, we present a unified view of the two methods and the first theoretical characterization of MLLS.

278

Optimal Private Median Estimation under Minimal Distributional Assumptions

We study the fundamental task of estimating the median of an underlying distribution from a finite number of samples, under pure differential privacy constraints.

279

Breaking the Communication-Privacy-Accuracy Trilemma

In this paper, we develop novel encoding and decoding mechanisms that simultaneously achieve optimal privacy and communication efficiency in various canonical settings.

280

Audeo: Audio Generation for a Silent Performance Video

Our main aim in this work is to explore the plausibility of such a transformation and to identify cues and components able to carry the association of sounds with visual events.

281

Ode to an ODE

We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).

282

Self-Distillation Amplifies Regularization in Hilbert Space

This work provides the first theoretical analysis of self-distillation.

283

Coupling-based Invertible Neural Networks Are Universal Diffeomorphism Approximators

Without universality, there could be a well-behaved invertible transformation that the CF-INN can never approximate, which would render the model class unreliable. We answer this question by showing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases.

284

Community detection using fast low-cardinality semidefinite programming?

In this paper, we propose a new class of low-cardinality algorithms that generalize the local update to maximize a semidefinite relaxation derived from max-k-cut.

285

Modeling Noisy Annotations for Crowd Counting

In this paper, we first model the annotation noise using a random variable with Gaussian distribution, and derive the pdf of the crowd density value for each spatial location in the image. We then approximate the joint distribution of the density values (i.e., the distribution of density maps) with a full covariance multivariate Gaussian density, and derive a low-rank approximation for tractable implementation.

286

An operator view of policy gradient methods

We cast policy gradient methods as the repeated application of two operators: a policy improvement operator $\mathcal{I}$, which maps any policy $\pi$ to a better one $\mathcal{I}\pi$, and a projection operator $\mathcal{P}$, which finds the best approximation of $\mathcal{I}\pi$ in the set of realizable policies.
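
To make the two operators concrete, here is a small hedged sketch with one simple choice for each (our choice of instances, not necessarily the paper's): an exponentiated-value improvement step and a trivial projection.

```python
# Hedged sketch of the operator view; the specific operators below are
# illustrative instances chosen for this sketch.
import numpy as np

def improve(pi: np.ndarray, q: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """I: reweight the policy by exp(Q/tau) and renormalize."""
    new = pi * np.exp(q / tau)
    return new / new.sum()

def project(target: np.ndarray) -> np.ndarray:
    """P: identity here, since every distribution is realizable; with a
    restricted policy class it would be an argmin over that class."""
    return target

pi, q = np.array([0.25, 0.25, 0.5]), np.array([1.0, 2.0, 0.0])
for _ in range(10):
    pi = project(improve(pi, q))   # repeated application of P after I
print(pi)                          # mass concentrates on the best action
```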

287

Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases

Somewhat mysteriously, the recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class. In this work, we first present quantitative experiments to demystify these gains.

288

Online MAP Inference of Determinantal Point Processes

In this paper, we provide an efficient approximation algorithm for finding the most likely configuration (MAP) of size $k$ for Determinantal Point Processes (DPP) in the online setting, where the data points arrive in an arbitrary order and the algorithm cannot discard the selected elements from its local memory.

289

Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement

This paper presents a new matching-based framework for semi-supervised video object segmentation (VOS).

290

Inferring learning rules from animal decision-making

Whereas reinforcement learning often focuses on the design of algorithms that enable artificial agents to efficiently learn new tasks, here we develop a modeling framework to directly infer the empirical learning rules that animals use to acquire new behaviors.

291

Input-Aware Dynamic Backdoor Attack

In this work, we propose a novel backdoor attack technique in which the triggers vary from input to input.

292

How hard is to distinguish graphs with graph neural networks?

This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN).

293

Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition

In this paper, we show that $ T $-round switching-constrained OCO with fewer than $ K $ switches has a minimax regret of $ \Theta(\frac{T}{\sqrt{K}}) $.

294

Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks

To partially answer this question, we consider the scenario when the manifold information of the underlying data is available.

295

Cross-Scale Internal Graph Neural Network for Image Super-Resolution

In this paper, we explore the cross-scale patch recurrence property of a natural image, i.e., similar patches tend to recur many times across different scales.

296

Unsupervised Representation Learning by Invariance Propagation

In this paper, we propose Invariance Propagation to focus on learning representations invariant to category-level variations, which are provided by different instances from the same category.

297

Restoring Negative Information in Few-Shot Object Detection

In this paper, we restore the negative information in few-shot object detection by introducing a new negative- and positive-representative based metric learning framework and a new inference scheme with negative and positive representatives.

298

Do Adversarially Robust ImageNet Models Transfer Better?

In this work, we identify another such aspect: we find that adversarially robust models, while less accurate, often perform better than their standard-trained counterparts when used for transfer learning.

299

Robust Correction of Sampling Bias using Cumulative Distribution Functions

We present a new method for handling covariate shift using the empirical cumulative distribution function estimates of the target distribution by a rigorous generalization of a recent idea proposed by Vapnik and Izmailov.

300

Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach

In this paper, we study a personalized variant of federated learning, in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.

301

Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation

In this paper, we propose to build the pixel-level cycle association between source and target pixel pairs and contrastively strengthen their connections to diminish the domain gap and make the features more discriminative.

302

Classification with Valid and Adaptive Coverage

In this paper, we develop specialized versions of these techniques for categorical and unordered response labels that, in addition to providing marginal coverage, are also fully adaptive to complex data distributions, in the sense that they perform favorably in terms of approximate conditional coverage compared to alternative methods.

303

Learning Global Transparent Models consistent with Local Contrastive Explanations

In this work, we explore the question: Can we produce a transparent global model that is simultaneously accurate and consistent with the local (contrastive) explanations of the black-box model?

304

Learning to Approximate a Bregman Divergence

In this paper, we focus on the problem of approximating an arbitrary Bregman divergence from supervision, and we provide a well-principled approach to analyzing such approximations.

305

Diverse Image Captioning with Context-Object Split Latent Spaces

To this end, we introduce a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset.

306

Learning Disentangled Representations of Videos with Missing Data

We present Disentangled Imputed Video autoEncoder (DIVE), a deep generative model that imputes and predicts future video frames in the presence of missing data.

307

Natural Graph Networks

Here we show that instead of equivariance, the more general concept of naturality is sufficient for a graph network to be well-defined, opening up a larger class of graph networks.

308

Continual Learning with Node-Importance based Adaptive Group Sparse Regularization

We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), using two group sparsity-based penalties.

309

Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts

In this work, we propose Learning@home: a novel neural network training paradigm designed to handle large amounts of poorly connected participants.

310

Bidirectional Convolutional Poisson Gamma Dynamical Systems

Incorporating the natural document-sentence-word structure into hierarchical Bayesian modeling, we propose convolutional Poisson gamma dynamical systems (PGDS) that introduce not only word-level probabilistic convolutions, but also sentence-level stochastic temporal transitions.

311

Deep Reinforcement and InfoMax Learning

To test that hypothesis, we introduce an objective based on Deep InfoMax (DIM) which trains the agent to predict the future by maximizing the mutual information between its internal representation of successive timesteps.

312

On ranking via sorting by estimated expected utility

We provide an answer to this question in the form of a structural characterization of ranking losses for which a suitable regression is consistent.

313

Distribution-free binary classification: prediction sets, confidence intervals and calibration

We study three notions of uncertainty quantification—calibration, confidence intervals and prediction sets—for binary classification in the distribution-free setting, that is without making any distributional assumptions on the data.

314

Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow

In this paper, we introduce subset flows, a class of flows that can tractably transform finite volumes and thus allow exact computation of likelihoods for discrete data.

315

Sequence to Multi-Sequence Learning via Conditional Chain Mapping for Mixture Signals

In this work, we focus on one-to-many sequence transduction problems, such as extracting multiple sequential sources from a mixture sequence.

316

Variance reduction for Random Coordinate Descent-Langevin Monte Carlo

We show by a counterexample that blindly applying RCD does not achieve the goal in the most general setting.

317

Language as a Cognitive Tool to Imagine Goals in Curiosity Driven Exploration

We introduce IMAGINE, an intrinsically motivated deep reinforcement learning architecture that models this ability.

318

All Word Embeddings from One Embedding

In this study, to reduce the total number of parameters, the embeddings for all words are represented by transforming a shared embedding.
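
A hedged sketch of the shared-embedding idea (our simplification; the paper's actual transformation differs): each word's vector is obtained by applying a cheap, word-specific perturbation to one shared base embedding, so parameters no longer scale with vocabulary size. The hash-derived mask is an assumption for this sketch.

```python
# Hedged sketch: derive every word embedding from one shared vector via
# a deterministic word-specific mask (a simplification for illustration).
import hashlib
import numpy as np

d = 8
shared = np.random.default_rng(0).normal(size=d)  # the one shared embedding

def word_embedding(word: str) -> np.ndarray:
    seed = int(hashlib.md5(word.encode()).hexdigest(), 16) % (2**32)
    mask = np.random.default_rng(seed).integers(0, 2, size=d)
    return shared * mask    # word-specific transformation of the shared vector

print(word_embedding("cat"), word_embedding("dog"))
```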

319

Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm

We consider the task of sampling with respect to a log concave probability distribution.

320

How to Characterize The Landscape of Overparameterized Convolutional Neural Networks

Specifically, we consider the loss landscape of an overparameterized convolutional neural network (CNN) in the continuous limit, where the numbers of channels/hidden nodes in the hidden layers go to infinity.

321

On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples

In this paper, we describe a geometric technique that determines whether this SDP certificate is exact, meaning whether it provides both a lower-bound on the size of the smallest adversarial perturbation, as well as a globally optimal perturbation that attains the lower-bound.

322

Submodular Meta-Learning

In this paper, we introduce a discrete variant of the Meta-learning framework.

323

Rethinking Pre-training and Self-training

Our study reveals the generality and flexibility of self-training with three additional insights: 1) stronger data augmentation and more labeled data further diminish the value of pre-training, 2) unlike pre-training, self-training is always helpful when using stronger data augmentation, in both low-data and high-data regimes, and 3) in the case that pre-training is helpful, self-training improves upon pre-training.

324

Unsupervised Sound Separation Using Mixture Invariant Training

In this paper, we propose a completely unsupervised method, mixture invariant training (MixIT), that requires only single-channel acoustic mixtures.

325

Adaptive Discretization for Model-Based Reinforcement Learning

We introduce the technique of adaptive discretization to design an efficient model-based episodic reinforcement learning algorithm in large (potentially continuous) state-action spaces.

326

CodeCMR: Cross-Modal Retrieval For Function-Level Binary Source Code Matching

This paper proposes an end-to-end cross-modal retrieval network for binary source code matching, which achieves higher accuracy and requires less expert experience.

327

On Warm-Starting Neural Network Training

In this work, we take a closer look at this empirical phenomenon and try to understand when and how it occurs.

328

DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks

Informed by the KKT conditions, a local search post-processing algorithm is proposed and shown to substantially and universally improve the structural Hamming distance of all tested algorithms, typically by a factor of 2 or more.

329

OOD-MAML: Meta-Learning for Few-Shot Out-of-Distribution Detection and Classification

We propose a few-shot learning method for detecting out-of-distribution (OOD) samples from classes that are unseen during training while classifying samples from seen classes using only a few labeled examples.

330

An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch

In this paper, we show that one existing solution to this transfer problem, grounded action transformation, is closely related to the problem of imitation from observation (IfO): learning behaviors that mimic the observations of behavior demonstrations.

331

Learning About Objects by Learning to Interact with Them

Taking inspiration from infants learning from their environment through play and interaction, we present a computational framework to discover objects and learn their physical properties along this paradigm of Learning from Interaction.

332

Learning discrete distributions with infinite support

We present a novel approach to estimating discrete distributions with (potentially) infinite support in the total variation metric.

333

Dissecting Neural ODEs

In this work we “open the box”, further developing the continuous-depth formulation with the aim of clarifying the influence of several design choices on the underlying dynamics.

334

Teaching a GAN What Not to Learn

In this paper, we approach the supervised GAN problem from a different perspective, one that is motivated by the philosophy of the famous Persian poet Rumi who said, "The art of knowing is knowing what to ignore."

335

Counterfactual Data Augmentation using Locally Factored Dynamics

We propose an approach to inferring these structures given an object-oriented state representation, as well as a novel algorithm for Counterfactual Data Augmentation (CoDA).

336

Rethinking Learnable Tree Filter for Generic Feature Transform

To relax the geometric constraint, we give the analysis by reformulating it as a Markov Random Field and introduce a learnable unary term.

337

Self-Supervised Relational Reasoning for Representation Learning

In this work, we propose a novel self-supervised formulation of relational reasoning that allows a learner to bootstrap a signal from information implicit in unlabeled data.

338

Sufficient dimension reduction for classification using principal optimal transport direction

To address this issue, we propose a novel estimation method of sufficient dimension reduction subspace (SDR subspace) using optimal transport.

339

Fast Epigraphical Projection-based Incremental Algorithms for Wasserstein Distributionally Robust Support Vector Machine

In this paper, we focus on a family of Wasserstein distributionally robust support vector machine (DRSVM) problems and propose two novel epigraphical projection-based incremental algorithms to solve them.

340

Differentially Private Clustering: Tight Approximation Ratios

For several basic clustering problems, including Euclidean DensestBall, 1-Cluster, k-means, and k-median, we give efficient differentially private algorithms that achieve essentially the same approximation ratios as those that can be obtained by any non-private algorithm, while incurring only small additive errors.

341

On the Power of Louvain in the Stochastic Block Model

We provide valuable tools not only for the analysis of Louvain, but also for many other combinatorial algorithms.

342

Fairness with Overlapping Groups; a Probabilistic Perspective

In algorithmically fair prediction problems, a standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously. We reconsider this standard fair classification problem using a probabilistic population analysis, which, in turn, reveals the Bayes-optimal classifier.

343

AttendLight: Universal Attention-Based Reinforcement Learning Model for Traffic Signal Control

We propose AttendLight, an end-to-end Reinforcement Learning (RL) algorithm for the problem of traffic signal control.

344

Searching for Low-Bit Weights in Quantized Neural Networks

Thus, we propose to regard the discrete weights in an arbitrary quantized neural network as searchable variables, and utilize a differentiable method to search for them accurately.

345

Adaptive Reduced Rank Regression

To complement the upper bound, we introduce new techniques for establishing lower bounds on the performance of any algorithm for this problem.

346

From Predictions to Decisions: Using Lookahead Regularization

For this, we introduce look-ahead regularization which, by anticipating user actions, encourages predictive models to also induce actions that improve outcomes.

347

Sequential Bayesian Experimental Design with Variable Cost Structure

We propose and demonstrate an algorithm that accounts for these variable costs in the refinement decision.

348

Predictive inference is free with the jackknife+-after-bootstrap

In this paper, we propose the jackknife+-after-bootstrap (J+aB), a procedure for constructing a predictive interval, which uses only the available bootstrapped samples and their corresponding fitted models, and is therefore "free" in terms of the cost of model fitting.

349

Counterfactual Predictions under Runtime Confounding

We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.

350

Learning Loss for Test-Time Augmentation

This paper proposes a novel instance-level test-time augmentation that efficiently selects suitable transformations for a test input.

351

Balanced Meta-Softmax for Long-Tailed Visual Recognition

In this paper, we show that the Softmax function, though used in most classification tasks, gives a biased gradient estimation under the long-tailed setup.
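
One common way to express such a correction (a hedged sketch; the paper's exact Balanced Meta-Softmax formulation may differ) is to shift each logit by the log class frequency before normalizing:

```python
# Hedged sketch of a frequency-corrected softmax for long-tailed data.
import numpy as np

def balanced_softmax(logits: np.ndarray, class_counts: np.ndarray) -> np.ndarray:
    adjusted = logits + np.log(class_counts)   # prior correction per class
    adjusted -= adjusted.max()                 # numerical stability
    exp = np.exp(adjusted)
    return exp / exp.sum()

probs = balanced_softmax(np.array([2.0, 1.0, 0.5]), np.array([1000, 100, 10]))
print(probs)
```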

352

Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization

This paper presents an IRL framework called Bayesian optimization-IRL (BO-IRL) which identifies multiple solutions that are consistent with the expert demonstrations by efficiently exploring the reward function space.

353

MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning

This paper introduces MDP homomorphic networks for deep reinforcement learning.

354

How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods

We performed a cross-analysis Amazon Mechanical Turk study comparing the popular state-of-the-art explanation methods to empirically determine which are better in explaining model decisions.

355

On the Error Resistance of Hinge-Loss Minimization

In this work, we identify a set of conditions on the data under which such surrogate loss minimization algorithms provably learn the correct classifier.

356

Munchausen Reinforcement Learning

Our core contribution lies in a very simple idea: adding the scaled log-policy to the immediate reward.
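
As a hedged one-line sketch of that idea (the constants and function name below are illustrative, not the paper's tuned settings):

```python
# Hedged sketch: augment the immediate reward with the scaled log-policy.
import math

def munchausen_reward(reward: float, pi_a: float,
                      alpha: float = 0.9, tau: float = 0.03) -> float:
    # log pi(a|s) <= 0, so the bonus implicitly penalizes actions the
    # current policy considers unlikely.
    return reward + alpha * tau * math.log(max(pi_a, 1e-8))

print(munchausen_reward(1.0, 0.5))
```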

357

Object Goal Navigation using Goal-Oriented Semantic Exploration

We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently based on the goal object category.

358

Efficient semidefinite-programming-based inference for binary and multi-class MRFs

In this paper, we propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF by instead exploiting a recently proposed coordinate-descent-based fast semidefinite solver.

359

Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing

With this intuition, we propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost.

360

Semantic Visual Navigation by Watching YouTube Videos

This paper learns and leverages such semantic cues for navigating to objects of interest in novel environments, by simply watching YouTube videos.

361

Heavy-tailed Representations, Text Polarity Classification & Data Augmentation

In this paper, we develop a novel method to learn a heavy-tailed embedding with desirable regularity properties regarding the distributional tails, which allows us to analyze the points far away from the distribution bulk using the framework of multivariate extreme value theory.

362

SuperLoss: A Generic Loss for Robust Curriculum Learning

We propose instead a simple and generic method that can be applied to a variety of losses and tasks without any change in the learning procedure.

363

CogMol: Target-Specific and Selective Drug Design for COVID-19 Using Deep Generative Models

In this study, we propose an end-to-end framework, named CogMol (Controlled Generation of Molecules), for designing new drug-like small molecules targeting novel viral proteins with high affinity and off-target selectivity.

364

Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards

In this work, instead of focusing on good experiences with limited diversity, we propose to learn a trajectory-conditioned policy to follow and expand diverse past trajectories from a memory buffer.

365

Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations

We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks.

366

Improving Sample Complexity Bounds for (Natural) Actor-Critic Algorithms

In contrast, this paper characterizes the convergence rate and sample complexity of AC and NAC under Markovian sampling, with mini-batch data for each iteration, and with actor having general policy class approximation.

367

Learning Differential Equations that are Easy to Solve

We propose a remedy that encourages learned dynamics to be easier to solve.

368

Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses

Specifically, we provide sharp upper and lower bounds for several forms of SGD and full-batch GD on arbitrary Lipschitz nonsmooth convex losses.

369

Influence-Augmented Online Planning for Complex Environments

In this work, we propose influence-augmented online planning, a principled method to transform a factored simulator of the entire environment into a local simulator that samples only the state variables that are most relevant to the observation and reward of the planning agent and captures the incoming influence from the rest of the environment using machine learning methods.

370

PAC-Bayes Learning Bounds for Sample-Dependent Priors

We present a series of new PAC-Bayes learning guarantees for randomized algorithms with sample-dependent priors.

371

Reward-rational (implicit) choice: A unifying formalism for reward learning

Our key observation is that different types of behavior can be interpreted in a single unifying formalism – as a reward-rational choice that the human is making, often implicitly.

372

Probabilistic Time Series Forecasting with Shape and Temporal Diversity

In this paper, we address this problem for non-stationary time series, which is very challenging yet crucially important.

373

Low Distortion Block-Resampling with Spatially Stochastic Networks

We formalize and attack the problem of generating new images from old ones that are as diverse as possible, only allowing them to change without restrictions in certain parts of the image while remaining globally consistent.

374

Continual Deep Learning by Functional Regularisation of Memorable Past

In this paper, we fix this issue by using a new functional-regularisation approach that utilises a few memorable past examples that are crucial for avoiding forgetting.

375

Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning

Here we propose and mathematically analyze a general class of structure-related features, termed Distance Encoding (DE).

376

Fast Fourier Convolution

In this work, we propose a novel convolutional operator dubbed as fast Fourier convolution (FFC), which has the main hallmarks of non-local receptive fields and cross-scale fusion within the convolutional unit.
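
The non-local part of that design rests on a general fact, sketched below under our own naming (this is the underlying principle only, not the paper's exact block): a pointwise filter in the Fourier domain has global spatial support.

```python
# Hedged sketch: pointwise filtering in the frequency domain touches
# every spatial location at once, i.e. a non-local receptive field.
import numpy as np

def spectral_filter(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    X = np.fft.rfft2(x)                      # to frequency domain
    return np.fft.irfft2(X * w, s=x.shape)   # filter, then back

x = np.random.default_rng(0).normal(size=(16, 16))
w = np.ones((16, 9))   # rfft2 of a 16x16 input yields shape (16, 9)
y = spectral_filter(x, w)
```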

377

Unsupervised Learning of Dense Visual Representations

In this paper, we propose View-Agnostic Dense Representation (VADeR) for unsupervised learning of dense representations.

378

Higher-Order Certification For Randomized Smoothing

In this work, we propose a framework to improve the certified safety region for these smoothed classifiers without changing the underlying smoothing scheme.

379

Learning Structured Distributions From Untrusted Batches: Faster and Simpler

In this paper, we find an appealing way to synthesize the techniques of [JO19] and [CLM19] to give the best of both worlds: an algorithm which runs in polynomial time and can exploit structure in the underlying distribution to achieve sublinear sample complexity.

380

Hierarchical Quantized Autoencoders

This leads us to introduce a novel objective for training hierarchical VQ-VAEs.

381

Diversity can be Transferred: Output Diversification for White- and Black-box Attacks

To improve the efficiency of these attacks, we propose Output Diversified Sampling (ODS), a novel sampling strategy that attempts to maximize diversity in the target model’s outputs among the generated samples.

382

POLY-HOOT: Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis

In this paper, we consider Monte-Carlo planning in an environment with continuous state-action spaces, a much less understood problem with important applications in control and robotics.

383

AvE: Assistance via Empowerment

We propose a new paradigm for assistance by instead increasing the human’s ability to control their environment, and formalize this approach by augmenting reinforcement learning with human empowerment.

384

Variational Policy Gradient Method for Reinforcement Learning with General Utilities

In this paper, we consider policy optimization in Markov Decision Problems, where the objective is a general utility function of the state-action occupancy measure, which subsumes several of the aforementioned examples as special cases.

385

Reverse-engineering recurrent neural network solutions to a hierarchical inference task for mice

We study how recurrent neural networks (RNNs) solve a hierarchical inference task involving two latent variables and disparate timescales separated by 1-2 orders of magnitude.

386

Temporal Positive-unlabeled Learning for Biomedical Hypothesis Generation via Risk Estimation

We propose a variational inference model to estimate the positive prior, and incorporate it in the learning of node pair embeddings, which are then used for link prediction.

387

Efficient Low Rank Gaussian Variational Inference for Neural Networks

By using a new form of the reparametrization trick, we derive a computationally efficient algorithm for performing VI with a Gaussian family with a low-rank plus diagonal covariance structure.
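
A minimal sketch of that reparametrization, under our own variable names: a sample $z \sim \mathcal{N}(\mu,\, BB^\top + \mathrm{diag}(d^2))$ is drawn from two independent noise sources, one for the low-rank factor and one for the diagonal.

```python
# Sketch: reparametrized sample from a Gaussian with covariance
# Sigma = B @ B.T + diag(d**2)  (low-rank plus diagonal).
import numpy as np

def sample_low_rank_gaussian(mu, B, d, rng):
    eps_lr = rng.normal(size=B.shape[1])    # low-rank noise (rank k)
    eps_diag = rng.normal(size=mu.shape)    # diagonal noise
    return mu + B @ eps_lr + d * eps_diag   # cov = B B^T + diag(d^2)

rng = np.random.default_rng(0)
mu, B, d = np.zeros(5), rng.normal(size=(5, 2)), np.ones(5)
z = sample_low_rank_gaussian(mu, B, d, rng)
```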

388

Privacy Amplification via Random Check-Ins

In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL) wherein the data is distributed among many devices (clients).

389

Probabilistic Circuits for Variational Inference in Discrete Graphical Models

In this paper, we propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPN), to compute ELBO gradients exactly (without sampling) for a certain class of densities.

390

Your Classifier can Secretly Suffice Multi-Source Domain Adaptation

In this work, we present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.

391

Labelling unlabelled videos from scratch with multi-modal self-supervision

In this work, we a) show that unsupervised labelling of a video dataset does not come for free from strong feature encoders and b) propose a novel clustering method that allows pseudo-labelling of a video dataset without any human annotations, by leveraging the natural correspondence between audio and visual modalities.

392

A Non-Asymptotic Analysis for Stein Variational Gradient Descent

In this paper, we provide a novel finite time analysis for the SVGD algorithm.

393

Robust Meta-learning for Mixed Linear Regression with Small Batches

We introduce a spectral approach that is simultaneously robust under both scenarios.

394

Bayesian Deep Learning and a Probabilistic Perspective of Generalization

We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization, and propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction, without significant overhead.

395

Unsupervised Learning of Object Landmarks via Self-Training Correspondence

This paper addresses the problem of unsupervised discovery of object landmarks.

396

Randomized tests for high-dimensional regression: A more efficient and powerful solution

In this paper, we answer this question in the affirmative by leveraging the random projection techniques, and propose a testing procedure that blends the classical $F$-test with a random projection step.

397

Learning Representations from Audio-Visual Spatial Alignment

We introduce a novel self-supervised pretext task for learning representations from audio-visual content.

398

Generative View Synthesis: From Single-view Semantics to Novel-view Images

We propose to push the envelope further, and introduce Generative View Synthesis (GVS) that can synthesize multiple photorealistic views of a scene given a single semantic map.

399

Towards More Practical Adversarial Attacks on Graph Neural Networks

Therefore, we propose a greedy procedure to correct the importance score that takes into account the diminishing-return pattern.

400

Multi-Task Reinforcement Learning with Soft Modularization

Thus, instead of naively sharing parameters across tasks, we introduce an explicit modularization technique on policy representation to alleviate this optimization issue.

401

Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models

In this paper, we propose a novel framework for computing Shapley values that generalizes recent work that aims to circumvent the independence assumption.

402

On the training dynamics of deep networks with $L_2$ regularization

We study the role of $L_2$ regularization in deep learning, and uncover simple relations between the performance of the model, the $L_2$ coefficient, the learning rate, and the number of training steps.

403

Improved Algorithms for Convex-Concave Minimax Optimization

This paper studies minimax optimization problems $\min_x \max_y f(x, y)$, where $f(x, y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$, and $(L_x, L_{xy}, L_y)$-smooth.

404

Deep Variational Instance Segmentation

In this paper, we propose a novel algorithm that directly utilizes a fully convolutional network (FCN) to predict instance labels.

405

Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence

The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner.

406

Deep Multimodal Fusion by Channel Exchanging

To this end, this paper proposes Channel-Exchanging-Network (CEN), a parameter-free multimodal fusion framework that dynamically exchanges channels between sub-networks of different modalities.

407

Hierarchically Organized Latent Modules for Exploratory Search in Morphogenetic Systems

In this paper, we motivate the need for what we call Meta-diversity search, arguing that there is no single ground-truth notion of interesting diversity, as it strongly depends on the final observer and its motives.

408

AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity

We present an improved method for symbolic regression that seeks to fit data to formulas that are Pareto-optimal, in the sense of having the best accuracy for a given complexity.

409

Delay and Cooperation in Nonstochastic Linear Bandits

This paper offers a nearly optimal algorithm for online linear optimization with delayed bandit feedback.

410

Probabilistic Orientation Estimation with Matrix Fisher Distributions

This paper focuses on estimating probability distributions over the set of 3D rotations (SO(3)) using deep neural networks.

411

Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory and Inhibitory Neurons

Overall, we present a novel normative modeling approach for spiking E-I networks, going beyond the widely-used energy-minimizing networks that violate Dale’s law.

412

Telescoping Density-Ratio Estimation

To resolve this limitation, we introduce a new framework, telescoping density-ratio estimation (TRE), that enables the estimation of ratios between highly dissimilar densities in high-dimensional spaces.

413

Towards Deeper Graph Neural Networks with Differentiable Group Normalization

To bridge the gap, we introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).

414

Stochastic Optimization for Performative Prediction

We initiate the study of stochastic optimization for performative prediction.

415

Learning Differentiable Programs with Admissible Neural Heuristics

We study the problem of learning differentiable functions expressed as programs in a domain-specific language.

416

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method

We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees which go beyond the standard worst-case analysis.

417

Domain Adaptation as a Problem of Inference on Graphical Models

To develop an automated way of domain adaptation with multiple source domains, we propose to use a graphical model as a compact way to encode the change property of the joint distribution, which can be learned from data, and then view domain adaptation as a problem of Bayesian inference on the graphical models.

418

Network size and size of the weights in memorization with two-layers neural networks

In contrast, we propose a new training procedure for ReLU networks, based on complex (as opposed to real) recombination of the neurons, for which we show approximate memorization with both $O\left(\frac{n}{d} \cdot \frac{\log(1/\epsilon)}{\epsilon}\right)$ neurons and nearly-optimal size of the weights.

419

Certifying Strategyproof Auction Networks

We propose ways to explicitly verify strategyproofness under a particular valuation profile using techniques from the neural network verification literature.

420

Continual Learning of Control Primitives : Skill Discovery via Reset-Games

In this work, we show how a single method can allow an agent to acquire skills with minimal supervision while removing the need for resets.

421

HOI Analysis: Integrating and Decomposing Human-Object Interaction

In analogy to Harmonic Analysis, whose goal is to study how to represent the signals with the superposition of basic waves, we propose the HOI Analysis.

422

Strongly local p-norm-cut algorithms for semi-supervised learning and local graph clustering

In this paper, we propose a generalization of the objective function behind these methods involving p-norms.

423

Deep Direct Likelihood Knockoffs

We develop Deep Direct Likelihood Knockoffs (DDLK), which directly minimizes the KL divergence implied by the knockoff swap property.

424

Meta-Neighborhoods

In this work, we step forward in this direction and propose a semi-parametric method, Meta-Neighborhoods, where predictions are made adaptively to the neighborhood of the input.

425

Neural Dynamic Policies for End-to-End Sensorimotor Learning

In this work, we begin to close this gap and embed dynamics structure into deep neural network-based policies by reparameterizing action spaces with differential equations.

426

A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons

Here, we develop a two-step inference strategy that allows us to train robust generalised linear models of interacting neurons, by explicitly separating the effects of correlations in the stimulus from network interactions in each training step.

427

Decision-Making with Auto-Encoding Variational Bayes

Motivated by these theoretical results, we propose learning several approximate proposals for the best model and combining them using multiple importance sampling for decision-making.

428

Attribution Preservation in Network Compression for Reliable Network Interpretation

In this paper, we show that these seemingly unrelated techniques conflict with each other as network compression deforms the produced attributions, which could lead to dire consequences for mission-critical applications.

429

Feature Importance Ranking for Deep Learning

In this paper, we propose a novel dual-net architecture consisting of operator and selector for discovery of an optimal feature subset of a fixed size and ranking the importance of those features in the optimal subset simultaneously.

430

Causal Estimation with Functional Confounders

We study causal inference when the true confounder value can be expressed as a function of the observed data; we call this setting estimation with functional confounders (EFC).

431

Model Inversion Networks for Model-Based Optimization

We propose to address such problems with model inversion networks (MINs), which learn an inverse mapping from scores to inputs.

432

Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks

Aiming to bridge this gap, in this paper we prove generalization bounds for SGD under the assumption that its trajectories can be well-approximated by a Feller process, which defines a rich class of Markov processes that includes several recent SDE representations (both Brownian and heavy-tailed) as special cases.

433

Exact expressions for double descent and implicit regularization via surrogate random design

We provide the first exact non-asymptotic expressions for double descent of the minimum norm linear estimator.

434

Certifying Confidence via Randomized Smoothing

In this work, we propose a method to generate certified radii for the prediction confidence of the smoothed classifier.

435

Learning Physical Constraints with Neural Projections

We propose a new family of neural networks to predict the behaviors of physical systems by learning their underpinning constraints.

436

Robust Optimization for Fairness with Noisy Protected Groups

First, we study the consequences of naively relying on noisy protected group labels: we provide an upper bound on the fairness violations on the true groups $G$ when the fairness criteria are satisfied on the noisy groups $\hat{G}$.

437

Noise-Contrastive Estimation for Multivariate Point Processes

We show how to instead apply a version of noise-contrastive estimation—a general parameter estimation method with a less expensive stochastic objective.

438

A Game-Theoretic Analysis of the Empirical Revenue Maximization Algorithm with Endogenous Sampling

We generalize the definition of an incentive-awareness measure proposed by Lavi et al. (2019) to quantify the reduction of ERM's output price due to a change of $m \geq 1$ out of $N$ input samples, and provide specific convergence rates of this measure to zero as $N$ goes to infinity for different types of input distributions.

439

Neural Path Features and Neural Path Kernel : Understanding the role of gates in deep learning

In this paper, we analytically characterise the role of gates and active sub-networks in deep learning.

440

Multiscale Deep Equilibrium Models

We propose a new class of implicit networks, the multiscale deep equilibrium model (MDEQ), suited to large-scale and highly hierarchical pattern recognition domains.

441

Sparse Graphical Memory for Robust Planning

We introduce Sparse Graphical Memory (SGM), a new data structure that stores states and feasible transitions in a sparse memory.

442

Second Order PAC-Bayesian Bounds for the Weighted Majority Vote

We present a novel analysis of the expected risk of weighted majority vote in multiclass classification.

443

Dirichlet Graph Variational Autoencoder

In this work, we present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.

444

Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction

In the current work, we study Magnetoencephalography (MEG) brain recordings of participants tasked with answering questions about concrete nouns.

445

Counterfactual Vision-and-Language Navigation: Unravelling the Unseen

We propose a new learning strategy that learns both from observations and generated counterfactual environments.

446

Robust Quantization: One Model to Rule Them All

To address this issue, we propose a method that provides intrinsic robustness to the model against a broad range of quantization processes.

447

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming

In this work, we propose a first-order dual SDP algorithm that (1) provides any-time bounds, (2) requires memory only linear in the total number of network activations, and (3) has per-iteration complexity that scales linearly with the complexity of a forward/backward pass through the network.

448

Federated Accelerated Stochastic Gradient Descent

We propose Federated Accelerated Stochastic Gradient Descent (FedAc), a principled acceleration of Federated Averaging (FedAvg, also known as Local SGD) for distributed optimization.

449

Robust Density Estimation under Besov IPM Losses

We study minimax convergence rates of nonparametric density estimation under the Huber contamination model, in which a “contaminated” proportion of the data comes from an unknown outlier distribution.

450

An analytic theory of shallow networks dynamics for hinge loss classification

In this paper we study in detail the training dynamics of a simple type of neural network: a single hidden layer trained to perform a classification task.

451

Fixed-Support Wasserstein Barycenters: Computational Hardness and Fast Algorithm

We study the fixed-support Wasserstein barycenter problem (FS-WBP), which consists in computing the Wasserstein barycenter of $m$ discrete probability measures supported on a finite metric space of size $n$.

452

Learning to Orient Surfaces by Self-supervised Spherical CNNs

In this work, we show the feasibility of learning a robust canonical orientation for surfaces represented as point clouds.

453

Adam with Bandit Sampling for Deep Learning

In this paper, we propose a generalization of Adam, called Adambs, that allows us to also adapt to different training examples based on their importance in the model’s convergence.

454

Parabolic Approximation Line Search for DNNs

Exploiting this parabolic property, we introduce a simple and robust line search approach, which performs loss-shape dependent update steps.

455

Agnostic Learning of a Single Neuron with Gradient Descent

We consider the problem of learning the best-fitting single neuron as measured by the expected square loss $\mathbb{E}_{(x,y)\sim \mathcal{D}}[(\sigma(w^\top x)-y)^2]$ over some unknown joint distribution $\mathcal{D}$ by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples $S\sim \mathcal{D}^n$.
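
For concreteness, a hedged sketch of that setup with synthetic data and a sigmoid activation (the data, target weights, and hyperparameters below are illustrative assumptions):

```python
# Sketch: gradient descent on the empirical square loss of one neuron.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = sigmoid(X @ np.array([1.0, -2.0, 0.5])) + 0.1 * rng.normal(size=200)

w, lr = np.zeros(3), 0.5
for _ in range(500):
    p = sigmoid(X @ w)
    # gradient of the mean squared loss, up to a constant factor of 2
    grad = X.T @ ((p - y) * p * (1 - p)) / len(y)
    w -= lr * grad
print(w)
```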

456

Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits

We propose to answer the above question for these two families by analyzing variants of the Combinatorial Thompson Sampling policy (CTS).

457

Analytic Characterization of the Hessian in Shallow ReLU Models: A Tale of Symmetry

We consider the optimization problem associated with fitting two-layers ReLU networks with respect to the squared loss, where labels are generated by a target network.

458

Generative causal explanations of black-box classifiers

We develop a method for generating causal post-hoc explanations of black-box classifiers based on a learned low-dimensional representation of the data.

459

Sub-sampling for Efficient Non-Parametric Bandit Exploration

In this paper we propose the first multi-armed bandit algorithm based on re-sampling that achieves asymptotically optimal regret simultaneously for different families of arms (namely Bernoulli, Gaussian and Poisson distributions).

460

Learning under Model Misspecification: Applications to Variational and Ensemble methods

In this work, we present a novel analysis of the generalization performance of Bayesian model averaging under model misspecification and i.i.d. data using a new family of second-order PAC-Bayes bounds.

461

Language Through a Prism: A Spectral Approach for Multiscale Language Representations

We propose building models that isolate scale-specific information in deep representations, and develop methods for encouraging models during training to learn more about particular scales of interest.

462

DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles

We propose DVERGE, which isolates the adversarial vulnerability in each sub-model by distilling non-robust features, and diversifies the adversarial vulnerability to induce diverse outputs against a transfer attack.

463

Towards practical differentially private causal graph discovery

In this paper, we present a differentially private causal graph discovery algorithm, Priv-PC, which improves both utility and running time compared to the state-of-the-art.

464

Independent Policy Gradient Methods for Competitive Reinforcement Learning

We obtain global, non-asymptotic convergence guarantees for independent learning algorithms in competitive reinforcement learning settings with two agents (i.e., zero-sum stochastic games).

465

The Value Equivalence Principle for Model-Based Reinforcement Learning

In this paper we argue that the limited representational resources of model-based RL agents are better used to build models that are directly useful for value-based planning.

466

Structured Convolutions for Efficient Neural Network Design

In this work, we tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.

467

Latent World Models For Intrinsically Motivated Exploration

In this work we consider partially observable environments with sparse rewards.

468

Estimating Rank-One Spikes from Heavy-Tailed Noise via Self-Avoiding Walks

In this work, we exhibit an estimator that works for heavy-tailed noise up to the BBP threshold that is optimal even for Gaussian noise.

469

Policy Improvement via Imitation of Multiple Oracles

In this paper, we propose the state-wise maximum of the oracle policies’ values as a natural baseline to resolve conflicting advice from multiple oracles.

470

Training Generative Adversarial Networks by Solving Ordinary Differential Equations

From this perspective, we hypothesise that instabilities in training GANs arise from the integration error in discretising the continuous dynamics.

471

Learning of Discrete Graphical Models with Neural Networks

We introduce NeurISE, a neural net based algorithm for graphical model learning, to tackle this limitation of GRISE.

472

RepPoints v2: Verification Meets Regression for Object Detection

In this paper, we take this philosophy to improve state-of-the-art object detection, specifically by RepPoints.

473

Unfolding the Alternating Optimization for Blind Super Resolution

To address these issues, instead of considering these two steps separately, we adopt an alternating optimization algorithm that can estimate the blur kernel and restore the SR image in a single model.

474

Entrywise convergence of iterative methods for eigenproblems

Here we address the convergence of subspace iteration when distances are measured in the $\ell_{2\to\infty}$ norm and provide deterministic bounds.
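
For reference, the $\ell_{2\to\infty}$ norm is the largest row-wise Euclidean norm, which is what makes these bounds entrywise rather than spectral:

```latex
\[
  \|A\|_{2\to\infty} \;=\; \max_{\|x\|_2 = 1} \|Ax\|_\infty \;=\; \max_i \|A_{i,:}\|_2 .
\]
```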

475

Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views

To address this, we propose \textit{The Multi-View and Multi-Object Network (MulMON)}—a method for learning accurate, object-centric representations of multi-object scenes by leveraging multiple views.

476

A Catalyst Framework for Minimax Optimization

We introduce a generic \emph{two-loop} scheme for smooth minimax optimization with strongly-convex-concave objectives.

477

Self-supervised Co-Training for Video Representation Learning

The objective of this paper is visual-only self-supervised video representation learning.

478

Gradient Estimation with Stochastic Softmax Tricks

Working within the perturbation model framework, we introduce stochastic softmax tricks, which generalize the Gumbel-Softmax trick to combinatorial spaces.
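
For context, a minimal sketch of the Gumbel-Softmax trick that these stochastic softmax tricks generalize (temperature and shapes are illustrative):

```python
import torch

def gumbel_softmax(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Relaxed one-hot sample that stays differentiable w.r.t. the logits."""
    u = torch.rand_like(logits).clamp_min(1e-9)
    gumbels = -torch.log(-torch.log(u))      # Gumbel(0, 1) noise
    return torch.softmax((logits + gumbels) / tau, dim=-1)

logits = torch.randn(4, 3, requires_grad=True)
sample = gumbel_softmax(logits, tau=0.5)
sample.sum().backward()                      # gradients flow back to the logits
```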

479

Meta-Learning Requires Meta-Augmentation

We introduce an information-theoretic framework of meta-augmentation, whereby adding randomness discourages the base learner and model from learning trivial solutions that do not generalize to new tasks.

480

SLIP: Learning to predict in unknown dynamical systems with long-term memory

We present an efficient and practical (polynomial time) algorithm for online prediction in unknown and partially observed linear dynamical systems (LDS) under stochastic noise.

481

Improving GAN Training with Probability Ratio Clipping and Sample Reweighting

To solve this issue, we propose a new variational GAN training framework which enjoys superior training stability.

482

Bayesian Bits: Unifying Quantization and Pruning

We introduce Bayesian Bits, a practical method for joint mixed precision quantization and pruning through gradient based optimization.

483

On Testing of Samplers

The primary contribution of this paper is an affirmative answer to the above challenge: motivated by Barbarik, but using different techniques and analysis, we design Barbarik2, an algorithm to test whether the distribution generated by a sampler is $\varepsilon$-close or $\eta$-far from any target distribution.

484

Gaussian Process Bandit Optimization of the Thermodynamic Variational Objective

This paper introduces a bespoke Gaussian process bandit optimization method for automatically choosing these points.

485

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers

In this work, we present a simple and effective approach to compress large Transformer (Vaswani et al., 2017) based pre-trained models, termed deep self-attention distillation.

486

Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization

In this paper, we bridge this gap by providing a sharp analysis of the epoch-wise stochastic gradient descent ascent method (referred to as Epoch-GDA) for solving strongly convex strongly concave (SCSC) min-max problems, without imposing any additional assumption about smoothness or the function’s structure.

487

Woodbury Transformations for Deep Generative Flows

In this paper, we introduce Woodbury transformations, which achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester’s determinant identity.

488

Graph Contrastive Learning with Augmentations

In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.

489

Gradient Surgery for Multi-Task Learning

In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients.

490

Bayesian Probabilistic Numerical Integration with Tree-Based Models

This paper proposes to tackle this issue with a new Bayesian numerical integration algorithm based on Bayesian Additive Regression Trees (BART) priors, which we call BART-Int.

491

Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel

We study the relationship between the training dynamics of nonlinear deep networks, the geometry of the loss landscape, and the time evolution of a data-dependent NTK.

492

Graph Meta Learning via Local Subgraphs

Here, we introduce G-Meta, a novel meta-learning algorithm for graphs.

493

Stochastic Deep Gaussian Processes over Graphs

In this paper we propose Stochastic Deep Gaussian Processes over Graphs (DGPG), which are deep structure models that learn the mappings between input and output signals in graph domains.

494

Bayesian Causal Structural Learning with Zero-Inflated Poisson Bayesian Networks

To infer causal relationships in zero-inflated count data, we propose a new zero-inflated Poisson Bayesian network (ZIPBN) model.

495

Evaluating Attribution for Graph Neural Networks

In this work we adapt commonly-used attribution methods for GNNs and quantitatively evaluate them using computable ground-truths that are objective and challenging to learn.

496

On Second Order Behaviour in Augmented Neural ODEs

In this work, we consider Second Order Neural ODEs (SONODEs).

497

Neuron Shapley: Discovering the Responsible Neurons

We introduce a new multi-armed bandit algorithm that is able to efficiently detect neurons with the largest Shapley value orders of magnitude faster than existing Shapley value approximation methods.

498

Stochastic Normalizing Flows

Here we propose a generalized and combined approach to sample target densities: Stochastic Normalizing Flows (SNF) – an arbitrary sequence of deterministic invertible functions and stochastic sampling blocks.

499

GPU-Accelerated Primal Learning for Extremely Fast Large-Scale Classification

In this work, we show that using judicious GPU-optimization principles, TRON training time for different losses and feature representations may be drastically reduced.

500

Random Reshuffling is Not Always Better

We give a counterexample to the Operator Inequality of Noncommutative Arithmetic and Geometric Means, a longstanding conjecture that relates to the performance of random reshuffling in learning algorithms (Recht and Ré, "Toward a noncommutative arithmetic-geometric mean inequality: conjectures, case-studies, and consequences," COLT 2012).

501

Model Agnostic Multilevel Explanations

In this paper, we propose a meta-method that, given a typical local explainability method, can build a multilevel explanation tree.

502

NeuMiss networks: differentiable programming for supervised learning with missing values.

In this work, we derive the analytical form of the optimal predictor under a linearity assumption and various missing data mechanisms including Missing at Random (MAR) and self-masking (Missing Not At Random).

503

Revisiting Parameter Sharing for Automatic Neural Channel Number Search

In this paper, we aim at providing a better understanding and exploitation of parameter sharing for CNS.

504

Differentially-Private Federated Linear Bandits

In this paper, we study this in the context of the contextual linear bandit: we consider a collection of agents cooperating to solve a common contextual bandit, while ensuring that their communication remains private.

505

Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning?

We solve this problem via a plug-in solver approach, which builds an empirical model and plans in this empirical model via an arbitrary plug-in solver.

506

Learning Physical Graph Representations from Visual Scenes

To overcome these limitations, we introduce the idea of “Physical Scene Graphs” (PSGs), which represent scenes as hierarchical graphs, with nodes in the hierarchy corresponding intuitively to object parts at different scales, and edges to physical connections between parts.

507

Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking

We propose a probabilistic graphical model built on top of deep neural networks, Deep Graph Pose (DGP), to leverage these useful spatial and temporal constraints, and develop an efficient structured variational approach to perform inference in this model.

508

Meta-learning from Tasks with Heterogeneous Attribute Spaces

We propose a heterogeneous meta-learning method that trains a model on tasks with various attribute spaces, such that it can solve unseen tasks whose attribute spaces are different from the training tasks given a few labeled instances.

509

Estimating decision tree learnability with polylogarithmic sample complexity

We show that top-down decision tree learning heuristics (such as ID3, C4.5, and CART) are amenable to highly efficient \textit{learnability estimation}: for monotone target functions, the error of the decision tree hypothesis constructed by these heuristics can be estimated with \textit{polylogarithmically} many labeled examples, exponentially smaller than the number necessary to run these heuristics, and indeed, exponentially smaller than the information-theoretic minimum required to learn a good decision tree.

510

Sparse Symplectically Integrated Neural Networks

We introduce Sparse Symplectically Integrated Neural Networks (SSINNs), a novel model for learning Hamiltonian dynamical systems from data.

511

Continuous Object Representation Networks: Novel View Synthesis without Target View Supervision

We propose Continuous Object Representation Networks (CORN), a conditional architecture that encodes an input image’s geometry and appearance into a 3D-consistent scene representation.

512

Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence

In this work, we propose a novel, efficient objective function that utilizes the Jensen-Shannon divergence for multiple distributions.

513

Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers

We target the problem of reducing numerical errors of iterative PDE solvers and compare different learning approaches for finding complex correction functions.

514

Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension

In this paper, we establish the first provably efficient RL algorithm with general value function approximation.

515

Predicting Training Time Without Training

We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function.

516

How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions

We propose an interaction attribution and detection framework called Archipelago which addresses these problems and is also scalable in real-world settings.

517

Optimal Adaptive Electrode Selection to Maximize Simultaneously Recorded Neuron Yield

Here, we present an algorithm called classification-based selection (CBS) that optimizes the joint electrode selections for all recording channels so as to maximize isolation quality of detected neurons.

518

Neurosymbolic Reinforcement Learning with Formally Verified Exploration

We present REVEL, a partially neural reinforcement learning (RL) framework for provably safe exploration in continuous state and action spaces.

519

Wavelet Flow: Fast Training of High Resolution Normalizing Flows

This paper introduces Wavelet Flow, a multi-scale, normalizing flow architecture based on wavelets.

520

Multi-task Batch Reinforcement Learning with Metric Learning

To robustify task inference, we propose a novel application of the triplet loss.

521

On 1/n neural representation and robustness

In this work, we investigate the latter by juxtaposing experimental results regarding the covariance spectrum of neural representations in mouse V1 (Stringer et al.) with artificial neural networks.

522

Boundary thickness and robustness in learning models

In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness.

523

Demixed shared component analysis of neural population data from multiple brain areas

Here, inspired by a method developed for a single brain area, we introduce a new technique for demixing variables across multiple brain areas, called demixed shared component analysis (dSCA).

524

Learning Kernel Tests Without Data Splitting

Inspired by the selective inference framework, we propose an approach that enables learning the hyperparameters and testing on the full sample without data splitting.

525

Unsupervised Data Augmentation for Consistency Training

In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
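
The core mechanism can be sketched as a generic consistency loss between predictions on an unlabeled example and an advanced augmentation of it (a sketch in the spirit of the paper; `augment` is a placeholder for something like RandAugment):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    """Match predictions on an unlabeled batch and its augmented version."""
    with torch.no_grad():
        target = F.softmax(model(x_unlabeled), dim=-1)   # fixed target distribution
    log_pred = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```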

526

Subgroup-based Rank-1 Lattice Quasi-Monte Carlo

To address this issue, we propose a simple closed-form rank-1 lattice construction method based on group theory.

527

Minibatch vs Local SGD for Heterogeneous Distributed Learning

We analyze Local SGD (aka parallel or federated SGD) and Minibatch SGD in the heterogeneous distributed setting, where each machine has access to stochastic gradient estimates for a different, machine-specific, convex objective; the goal is to optimize w.r.t. the average objective; and machines can only communicate intermittently.
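
A minimal sketch contrasting the two schemes under heterogeneity (all problem data, step sizes, and counts below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, K, R, lr = 4, 5, 10, 50, 0.05   # machines, dim, local steps, rounds, step size
# Machine-specific quadratics f_m(w) = 0.5 w^T A_m w - b_m^T w (heterogeneous).
A = [np.diag(rng.uniform(0.5, 2.0, d)) for _ in range(M)]
b = [rng.normal(size=d) for _ in range(M)]

def stoch_grad(m, w):
    return A[m] @ w - b[m] + 0.1 * rng.normal(size=d)

def local_sgd():
    w = np.zeros(d)
    for _ in range(R):
        iterates = []
        for m in range(M):
            wm = w.copy()
            for _ in range(K):            # K local steps between communications
                wm -= lr * stoch_grad(m, wm)
            iterates.append(wm)
        w = np.mean(iterates, axis=0)     # communicate: average the local iterates
    return w

def minibatch_sgd():
    w = np.zeros(d)
    for _ in range(R):
        # Same per-round gradient budget, but all M*K gradients taken at w.
        g = np.mean([stoch_grad(m, w) for m in range(M) for _ in range(K)], axis=0)
        w -= lr * g
    return w
```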

528

Multi-task Causal Learning with Gaussian Processes

We propose the first multi-task causal Gaussian process (GP) model, which we call DAG-GP, that allows for information sharing across continuous interventions and across experiments on different variables.

529

Proximity Operator of the Matrix Perspective Function and its Applications

Through this connection, we propose a quadratically convergent Newton algorithm for the root-finding problem. Experiments verify that the evaluation of the proximity operator requires at most 8 Newton steps, taking less than 5s for 2000-by-2000 matrices on a standard laptop.

530

Generative 3D Part Assembly via Dynamic Graph Learning

In this paper, we focus on the pose estimation subproblem from the vision side involving geometric and relational reasoning over the input part geometry.

531

Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention

As such, our work introduces a practical approach for bridging between data-driven and cognitive models and demonstrates a new way to integrate human gaze-guided neural attention into NLP tasks.

532

The Power of Comparisons for Actively Learning Linear Classifiers

While previous results show that active learning performs no better than its supervised alternative for important concept classes such as linear separators, we show that by adding weak distributional assumptions and allowing comparison queries, active learning requires exponentially fewer samples.

533

From Boltzmann Machines to Neural Networks and Back Again

In this work we give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models.

534

Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality

In this paper, we focus on the finite hypothesis case and ask if one can achieve the asymptotic optimality while enjoying bounded regret whenever possible.

535

Pruning neural networks without any data by iteratively conserving synaptic flow

This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory-driven algorithm design.

536

Detecting Interactions from Neural Networks via Topological Analysis

Motivated by this observation, we propose to investigate the interaction detection problem from a novel topological perspective by analyzing the connectivity in neural networks.

537

Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems

In this work, we employ a probabilistic approach to safety evaluation in simulation, where we are concerned with computing the probability of dangerous events.

538

Interpretable and Personalized Apprenticeship Scheduling: Learning Interpretable Scheduling Policies from Heterogeneous User Demonstrations

We propose a personalized and interpretable apprenticeship scheduling algorithm that infers an interpretable representation of all human task demonstrators by extracting decision-making criteria via an inferred, personalized embedding that is non-parametric in the number of demonstrator types.

539

Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes

This paper proposes a continual online model-based reinforcement learning approach that does not require pre-training to solve task-agnostic problems with unknown task boundaries.

540

Benchmarking Deep Learning Interpretability in Time Series Predictions

In this paper, we set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures, including Recurrent Neural Network, Temporal Convolutional Networks, and Transformers in a new benchmark of synthetic time series data.

541

Federated Principal Component Analysis

We present a federated, asynchronous, and $(\varepsilon, \delta)$-differentially private algorithm for PCA in the memory-limited setting.

542

(De)Randomized Smoothing for Certifiable Defense against Patch Attacks

In this paper, we introduce a certifiable defense against patch attacks that guarantees that, for a given image and patch attack size, no patch adversarial examples exist.

543

SMYRF – Efficient Attention using Asymmetric Clustering

We propose a novel type of balanced clustering algorithm to approximate attention.

544

Introducing Routing Uncertainty in Capsule Networks

Rather than performing inefficient local iterative routing between adjacent capsule layers, we propose an alternative global view based on representing the inherent uncertainty in part-object assignment.

545

A Simple and Efficient Smoothing Method for Faster Optimization and Local Exploration

This work proposes a novel smoothing method, called Bend, Mix and Release (BMR), that extends two well-known smooth approximations of the convex optimization literature: randomized smoothing and the Moreau envelope.

546

Hyperparameter Ensembles for Robustness and Uncertainty Quantification

In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings.

547

Neutralizing Self-Selection Bias in Sampling for Sortition

In order to still produce panels whose composition resembles that of the population, we develop a sampling algorithm that restores close-to-equal representation probabilities for all agents while satisfying meaningful demographic quotas.

548

On the Convergence of Smooth Regularized Approximate Value Iteration Schemes

In this work, we analyse these techniques from an error propagation perspective using the approximate dynamic programming framework.

549

Off-Policy Evaluation via the Regularized Lagrangian

In this paper, we unify these estimators as regularized Lagrangians of the same linear program.

550

The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning

To address this, we introduce an experimental setup to evaluate model-based behavior of RL methods, inspired by work from neuroscience on detecting model-based behavior in humans and animals.

551

Neural Power Units

We introduce the Neural Power Unit (NPU) that operates on the full domain of real numbers and is capable of learning arbitrary power functions in a single layer.

552

Towards Scalable Bayesian Learning of Causal DAGs

We present algorithmic techniques to significantly reduce the space and time requirements, which make the use of substantially larger values of K feasible.

553

A Dictionary Approach to Domain-Invariant Learning in Deep Networks

In this paper, we consider domain-invariant deep learning by explicitly modeling domain shifts with only a small amount of domain-specific parameters in a Convolutional Neural Network (CNN).

554

Bootstrapping neural processes

To this end, we propose the Bootstrapping Neural Process (BNP), a novel extension of the NP family using the bootstrap.

555

Large-Scale Adversarial Training for Vision-and-Language Representation Learning

We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning.

556

Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations

We consider ReLU networks with random weights, in which the dimension decreases at each layer.

557

Compositional Visual Generation with Energy Based Models

In this paper we show that energy-based models can exhibit this ability by directly combining probability distributions.

558

Factor Graph Grammars

We propose the use of hyperedge replacement graph grammars for factor graphs, or factor graph grammars (FGGs) for short.

559

Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs

This work proposes an unsupervised learning framework for CO problems on graphs that can provide integral solutions of certified quality.

560

Autoregressive Score Matching

To increase flexibility, we propose autoregressive conditional score models (AR-CSM) where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores), which need not be normalized.

561

Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization

Here, we introduce a new technique for debiasing the local estimates, which leads to both theoretical and empirical improvements in the convergence rate of distributed second order methods.

562

Neural Controlled Differential Equations for Irregular Time Series

Here, we demonstrate how this may be resolved through the well-understood mathematics of \emph{controlled differential equations}.

563

On Efficiency in Hierarchical Reinforcement Learning

In this paper, we discuss the kind of structure in a Markov decision process which gives rise to efficient HRL methods.

564

On Correctness of Automatic Differentiation for Non-Differentiable Functions

This status quo raises a natural question: are autodiff systems correct in any formal sense when they are applied to such non-differentiable functions? In this paper, we provide a positive answer to this question.

565

Probabilistic Linear Solvers for Machine Learning

Unifying earlier work we propose a class of probabilistic linear solvers which jointly infer the matrix, its inverse and the solution from matrix-vector product observations.

566

Dynamic Regret of Policy Optimization in Non-Stationary Environments

We propose two model-free policy optimization algorithms, POWER and POWER++, and establish guarantees for their dynamic regret.

567

Multipole Graph Neural Operator for Parametric Partial Differential Equations

Inspired by the classical multipole methods, we propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.

568

BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images

We present BlockGAN, an image generative model that learns object-aware 3D scene representations directly from unlabelled 2D images.

569

Online Structured Meta-learning

We overcome this limitation by proposing an online structured meta-learning (OSML) framework.

570

Learning Strategic Network Emergence Games

We propose MINE (Multi-agent Inverse models of Network Emergence mechanism), a new learning framework that solves Markov-Perfect network emergence games using multi-agent inverse reinforcement learning.

571

Towards Interpretable Natural Language Understanding with Explanations as Latent Variables

In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.

572

The Mean-Squared Error of Double Q-Learning

In this paper, we establish a theoretical comparison between the asymptotic mean square errors of double Q-learning and Q-learning.

573

What Makes for Good Views for Contrastive Learning?

In this paper, we use theoretical and empirical analysis to better understand the importance of view selection, and argue that we should reduce the mutual information (MI) between views while keeping task-relevant information intact.

574

Denoising Diffusion Probabilistic Models

We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics.
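
The training objective reduces to a simple noise-prediction regression; a minimal sketch of that training step (the linear schedule follows the paper, while `eps_model` is a placeholder for any noise-prediction network):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal level

def ddpm_loss(eps_model, x0):
    """Simplified DDPM objective: predict the noise added at a random step t."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *[1] * (x0.dim() - 1))
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps   # noisy sample at step t
    return ((eps_model(x_t, t) - eps) ** 2).mean()
```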

575

Barking up the right tree: an approach to search over molecule synthesis DAGs

We therefore propose a deep generative model that better represents the real world process, by directly outputting molecule synthesis DAGs.

576

On Uniform Convergence and Low-Norm Interpolation Learning

We consider an underdetermined noisy linear regression model where the minimum-norm interpolating predictor is known to be consistent, and ask: can uniform convergence in a norm ball, or at least (following Nagarajan and Kolter) the subset of a norm ball that the algorithm selects on a typical input set, explain this success?

577

Bandit Samplers for Training Graph Neural Networks

In this paper, we formulate the optimization of the sampling variance as an adversary bandit problem, where the rewards are related to the node embeddings and learned weights, and can vary constantly.

578

Sampling from a k-DPP without looking at all items

In this paper, we develop alpha-DPP, an algorithm which adaptively builds a sufficiently large uniform sample of data that is then used to efficiently generate a smaller set of k items, while ensuring that this set is drawn exactly from the target distribution defined on all n items.

579

Uncovering the Topology of Time-Varying fMRI Data using Cubical Persistence

To address this challenge, we present a novel topological approach that encodes each time point in an fMRI data set as a persistence diagram of topological features, i.e. high-dimensional voids present in the data.

580

Hierarchical Poset Decoding for Compositional Generalization in Language

In this paper, we propose a novel hierarchical poset decoding paradigm for compositional generalization in language.

581

Evaluating and Rewarding Teamwork Using Cooperative Game Abstractions

We introduce a parametric model called cooperative game abstractions (CGAs) for estimating characteristic functions from data.

582

Exchangeable Neural ODE for Set Modeling

In this work we propose a more general formulation to achieve permutation equivariance through ordinary differential equations (ODE).

583

Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Distributions

We show that for samples of discrete distributions, profile entropy is a fundamental measure unifying the concepts of estimation, inference, and compression.

584

CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection

In this paper, we present an end-to-end collaborative aggregation-and-distribution network (CoADNet) to capture both salient and repetitive visual patterns from multiple images.

585

Regularized linear autoencoders recover the principal components, eventually

We show that the inefficiency of learning the optimal representation is not inevitable — we present a simple modification to the gradient descent update that greatly speeds up convergence empirically.

586

Semi-Supervised Partial Label Learning via Confidence-Rated Margin Maximization

To circumvent this difficulty, the problem of semi-supervised partial label learning is investigated in this paper, where unlabeled data is utilized to facilitate model induction along with partial label training examples.

587

GramGAN: Deep 3D Texture Synthesis From 2D Exemplars

We present a novel texture synthesis framework, enabling the generation of infinite, high-quality 3D textures given a 2D exemplar image.

588

UWSOD: Toward Fully-Supervised-Level Capacity Weakly Supervised Object Detection

In this paper, we propose a unified WSOD framework, termed UWSOD, to develop a high-capacity general detection model with only image-level labels, which is self-contained and does not require external modules or additional supervision.

589

Learning Restricted Boltzmann Machines with Sparse Latent Variables

In this paper, we give an algorithm for learning general RBMs with time complexity $\tilde{O}(n^{2^s+1})$, where $s$ is the maximum number of latent variables connected to the MRF neighborhood of an observed variable.

590

Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Focusing on a $\gamma$-discounted MDP with state space S and action space A, we demonstrate that the $ \ell_{\infty} $-based sample complexity of classical asynchronous Q-learning — namely, the number of samples needed to yield an entrywise $\epsilon$-accurate estimate of the Q-function — is at most on the order of $ \frac{1}{ \mu_{\min}(1-\gamma)^5 \epsilon^2 }+ \frac{ t_{\mathsf{mix}} }{ \mu_{\min}(1-\gamma) } $ up to some logarithmic factor, provided that a proper constant learning rate is adopted.
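
For reference, the classical asynchronous update being analyzed, in a minimal tabular sketch (the environment interaction and the constant learning rate are illustrative):

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, gamma=0.9, eta=0.1):
    """One asynchronous Q-learning update: only the visited (s, a) entry changes."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += eta * (target - Q[s, a])
    return Q

Q = np.zeros((5, 2))                      # 5 states, 2 actions (toy sizes)
Q = q_learning_step(Q, s=0, a=1, r=1.0, s_next=3)
```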

591

Curriculum learning for multilevel budgeted combinatorial problems

By framing them in a multi-agent reinforcement learning setting, we devise a value-based method to learn to solve multilevel budgeted combinatorial problems involving two players in a zero-sum game over a graph.

592

FedSplit: an algorithmic framework for fast federated optimization

In order to remedy these issues, we introduce FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure.

593

Estimation and Imputation in Probabilistic Principal Component Analysis with Missing Not At Random Data

We continue this line of research, but extend it to a more general MNAR mechanism, in a more general model of the probabilistic principal component analysis (PPCA), \textit{i.e.}, a low-rank model with random effects.

594

Correlation Robust Influence Maximization

We propose a distributionally robust model for the influence maximization problem.

595

Neuronal Gaussian Process Regression

Here I propose that the brain implements GP regression and present neural networks (NNs) for it.

596

Nonconvex Sparse Graph Learning under Laplacian Constrained Graphical Model

In this paper, we consider the problem of learning a sparse graph from the Laplacian constrained Gaussian graphical model.

597

Synthetic Data Generators — Sequential and Private

We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

598

Uncertainty Quantification for Inferring Hawkes Networks

Aiming towards this, we develop a statistical inference framework to learn causal relationships between nodes from networked data, where the underlying directed graph implies Granger causality.

599

Implicit Distributional Reinforcement Learning

To improve the sample efficiency of policy-gradient based reinforcement learning algorithms, we propose implicit distributional actor-critic (IDAC) that consists of a distributional critic, built on two deep generator networks (DGNs), and a semi-implicit actor (SIA), powered by a flexible policy distribution.

600

Auxiliary Task Reweighting for Minimum-data Learning

In this work, we propose a method to automatically reweight auxiliary tasks in order to reduce the data requirement on the main task.

601

Small Nash Equilibrium Certificates in Very Large Games

In this paper we introduce an approach that shows that it is possible to provide exploitability guarantees in such settings without ever exploring the entire game.

602

Training Linear Finite-State Machines

In this paper, we introduce a method that can train a multi-layer FSM-based network where FSMs are connected to every FSM in the previous and the next layer.

603

Efficient active learning of sparse halfspaces with arbitrary bounded noise

In this work, we substantially improve on it by designing a polynomial time algorithm for active learning of $s$-sparse halfspaces, with a label complexity of $\tilde{O}\big(\frac{s}{(1-2\eta)^4} \mathrm{polylog}(d, \frac{1}{\epsilon}) \big)$.

604

Swapping Autoencoder for Deep Image Manipulation

We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation, rather than random sampling.

605

Self-Supervised Few-Shot Learning on Point Clouds

To combat this problem, we propose two novel self-supervised pre-training tasks that encode a hierarchical partitioning of the point clouds using a cover-tree, where point cloud subsets lie within balls of varying radii at each level of the cover-tree.

606

Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC

In this work, we establish rapid convergence for these algorithms under distance measures more suitable for differential privacy.

607

Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE

To address this problem, we propose a method that integrates key ingredients from latent models and traditional neural encoding models.

608

RL Unplugged: A Collection of Benchmarks for Offline Reinforcement Learning

In this paper, we propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.

609

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

Therefore in this paper, we aim to solve this problem by exploiting the divide-and-conquer paradigm.

610

Interior Point Solving for LP-based prediction+optimisation

Instead we investigate the use of the more principled logarithmic barrier term, as widely used in interior point solvers for linear programming.

611

A simple normative network approximates local non-Hebbian learning in the cortex

Mathematically, we start with a family of Reduced-Rank Regression (RRR) objective functions which include Reduced Rank (minimum) Mean Square Error (RRMSE) and Canonical Correlation Analysis (CCA), and derive novel offline and online optimization algorithms, which we call Bio-RRR.

612

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

Here we present a family of learning rules that does not suffer from any of these problems.

613

Understanding the Role of Training Regimes in Continual Learning

In this work, we depart from the typical approach of altering the learning algorithm to improve stability.

614

Fair regression with Wasserstein barycenters

We study the problem of learning a real-valued function that satisfies the Demographic Parity constraint.

615

Training Stronger Baselines for Learning to Optimize

As research efforts focus on increasingly sophisticated L2O models, we argue for an orthogonal, under-explored theme: improved training techniques for L2O models.

616

Exactly Computing the Local Lipschitz Constant of ReLU Networks

We present a sufficient condition for which backpropagation always returns an element of the generalized Jacobian, and reframe the problem over this broad class of functions.

617

Strictly Batch Imitation Learning by Energy-based Distribution Matching

To address this challenge, we propose a novel technique by energy-based distribution matching (EDM): By identifying parameterizations of the (discriminative) model of a policy with the (generative) energy function for state distributions, EDM yields a simple but effective solution that equivalently minimizes a divergence between the occupancy measure for the demonstrator and a model thereof for the imitator.

618

On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method

In this paper, we analyze several probabilistic properties of the randomized midpoint discretization method, considering both overdamped and underdamped Langevin dynamics.

619

A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for Nonconvex-Concave Min-Max Problems

In this paper, we introduce a “smoothing” scheme which can be combined with GDA to stabilize the oscillation and ensure convergence to a stationary solution.

620

Generating Correct Answers for Progressive Matrices Intelligence Tests

In this work, we focus, instead, on generating a correct answer given the grid, which is a harder task, by definition.

621

HyNet: Learning Local Descriptor with Hybrid Similarity Measure and Triplet Loss

In this paper, we investigate how L2 normalisation affects the back-propagated descriptor gradients during training.

622

Preference learning along multiple criteria: A game-theoretic perspective

In this work, we generalize the notion of a von Neumann winner to the multi-criteria setting by taking inspiration from Blackwell’s approachability.

623

Multi-Plane Program Induction with 3D Box Priors

Unlike prior work on image-based program synthesis, which assumes the image contains a single visible 2D plane, we present Box Program Induction (BPI), which infers a program-like scene representation that simultaneously models repeated structure on multiple 2D planes, the 3D position and orientation of the planes, and camera parameters, all from a single image.

624

Online Neural Connectivity Estimation with Noisy Group Testing

Here, we propose a method based on noisy group testing that drastically increases the efficiency of this process in sparse networks.

625

Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free

Our proposed framework, Once-for-all Adversarial Training (OAT), is built on an innovative model-conditional training framework, with a controlling hyper-parameter as the input.

626

Implicit Neural Representations with Periodic Activation Functions

We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives.
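
A minimal sketch of such a network (this omits the paper's principled weight initialization; the frequency factor and layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation."""
    def __init__(self, in_f, out_f, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

siren = nn.Sequential(SineLayer(2, 64), SineLayer(64, 64), nn.Linear(64, 3))
rgb = siren(torch.rand(1024, 2))   # e.g. map pixel coordinates to colors
```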

627

Rotated Binary Neural Network

In this paper, for the first time, we explore the influence of angular bias on the quantization error and then introduce a Rotated Binary Neural Network (RBNN), which considers the angle alignment between the full-precision weight vector and its binarized version.

628

Community detection in sparse time-evolving graphs with a dynamical Bethe-Hessian

A fast spectral algorithm based on an extension of the Bethe-Hessian matrix is proposed, which benefits from the positive correlation in the class labels and in their temporal evolution and is designed to be applicable to any dynamical graph with a community structure.

629

Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness

This motivates us to study principled approaches to high-quality uncertainty estimation that require only a single deep neural network (DNN).

630

Adaptive Learning of Rank-One Models for Efficient Pairwise Sequence Alignment

In this work, we propose a new approach to pairwise alignment estimation based on two key new ingredients.

631

Hierarchical nucleation in deep neural networks

In this work we study the evolution of the probability density of the ImageNet dataset across the hidden layers in some state-of-the-art DCNs.

632

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

We suggest an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities.
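
The mapping itself is tiny; a sketch of the random Fourier feature embedding (the Gaussian frequency scale is a hypothetical tuning knob):

```python
import numpy as np

def fourier_features(x, B):
    """Embed low-dimensional inputs x via gamma(x) = [cos(2*pi*Bx), sin(2*pi*Bx)]."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = 10.0 * rng.normal(size=(256, 2))       # frequencies; the scale 10.0 is illustrative
feats = fourier_features(rng.uniform(size=(100, 2)), B)   # shape (100, 512)
```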

633

Graph Geometry Interaction Learning

To utilize the strength of both Euclidean and hyperbolic geometries, we develop a novel Geometry Interaction Learning (GIL) method for graphs, a well-suited and efficient alternative for learning abundant geometric properties in graphs.

634

Differentiable Augmentation for Data-Efficient GAN Training

To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples.
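
Sketched below with a single illustrative augmentation and the non-saturating GAN loss (D and G are placeholders; the paper uses a richer set of color/translation/cutout augmentations):

```python
import torch
import torch.nn.functional as F

def diff_augment(x):
    """An illustrative differentiable augmentation: random brightness shift."""
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

def d_loss(D, G, real, z):
    fake = G(z).detach()
    return (F.softplus(-D(diff_augment(real))).mean()
            + F.softplus(D(diff_augment(fake))).mean())

def g_loss(D, G, z):
    # The augmentation is applied to fakes too; gradients flow through it to G.
    return F.softplus(-D(diff_augment(G(z)))).mean()
```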

635

Heuristic Domain Adaptation

In this paper, we address the modeling of domain-invariant and domain-specific information from the heuristic search perspective.

636

Learning Certified Individually Fair Representations

In this work, we introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points.

637

Part-dependent Label Noise: Towards Instance-dependent Label Noise

Motivated by this human cognition, in this paper, we approximate the instance-dependent label noise by exploiting \textit{part-dependent} label noise.

638

Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization

Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.

639

An Improved Analysis of (Variance-Reduced) Policy Gradient and Natural Policy Gradient Methods

In this paper, we revisit and improve the convergence of policy gradient (PG), natural PG (NPG) methods, and their variance-reduced variants, under general smooth policy parametrizations.

640

Geometric Exploration for Online Control

We study the control of an \emph{unknown} linear dynamical system under general convex costs.

641

Automatic Curriculum Learning through Value Disagreement

Inspired by this, we propose setting up an automatic curriculum for goals that the agent needs to solve.

642

MRI Banding Removal via Adversarial Training

In this work, we propose the use of an adversarial loss that penalizes banding structures without requiring any human annotation.

643

The NetHack Learning Environment

Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack.

644

Language and Visual Entity Relationship Graph for Agent Navigation

To capture and utilize the relationships, we propose a novel Language and Visual Entity Relationship Graph for modelling the inter-modal relationships between text and vision, and the intra-modal relationships among visual entities.

645

ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

Here, we present a novel framework for creating class specific FA maps through image-to-image translation.

646

Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks

We study the eigenvalue distributions of the Conjugate Kernel and Neural Tangent Kernel associated to multi-layer feedforward neural networks.

647

No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium

In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in n-player general-sum extensive-form games with perfect recall.

648

Estimating weighted areas under the ROC curve

The results justify learning algorithms which select score functions to maximize the empirical partial area under the curve (pAUC).

649

Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study

We revisit this paradigm in arguably the simplest non-trivial setup, and study the implicit bias of Stochastic Gradient Descent (SGD) in the context of Stochastic Convex Optimization.

650

Generalized Hindsight for Reinforcement Learning

To leverage this insight and efficiently reuse data, we present Generalized Hindsight: an approximate inverse reinforcement learning technique for relabeling behaviors with the right tasks.

651

Critic Regularized Regression

In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR).

652

Boosting Adversarial Training with Hypersphere Embedding

In this work, we advocate incorporating the hypersphere embedding (HE) mechanism into the AT procedure by regularizing the features onto compact manifolds, which constitutes a lightweight yet effective module to blend in the strength of representation learning.
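
One common way to realize this is cosine logits, i.e. normalizing both features and class weights onto the unit sphere (a generic sketch, not necessarily the paper's exact formulation; the scale is illustrative):

```python
import torch
import torch.nn.functional as F

def hypersphere_logits(features, weights, scale=10.0):
    """Cosine logits: features and class-weight vectors live on the unit sphere."""
    f = F.normalize(features, dim=1)   # (B, d)
    w = F.normalize(weights, dim=1)    # (C, d)
    return scale * f @ w.t()           # (B, C)
```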

653

Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs

Motivated by this limitation, we identify a set of key designs—ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations—that boost learning from the graph structure under heterophily.

654

Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows

In this work, we propose a novel type of normalizing flow driven by a differential deformation of the continuous-time Wiener process.

655

Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent

In this work, we show how to achieve low regret for GMSSC in polynomial-time.

656

Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification

In this work, we first develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN’s generative capabilities intact.

657

Detecting Hands and Recognizing Physical Contact in the Wild

To address this problem, we propose a novel convolutional network based on Mask-RCNN that can jointly learn to localize hands and predict their physical contact.

658

On the Theory of Transfer Learning: The Importance of Task Diversity

We provide new statistical guarantees for transfer learning via representation learning, when transfer is achieved by learning a feature representation shared across different tasks.

659

Finite-Time Analysis of Round-Robin Kullback-Leibler Upper Confidence Bounds for Optimal Adaptive Allocation with Multiple Plays and Markovian Rewards

We study an extension of the classic stochastic multi-armed bandit problem which involves multiple plays and Markovian rewards in the rested bandits setting.

660

Neural Star Domain as Primitive Representation

To solve this problem, we propose a novel primitive representation named neural star domain (NSD) that learns primitive shapes in the star domain.

661

Off-Policy Interval Estimation with Lipschitz Value Iteration

In this work, we propose a provably correct method for obtaining interval bounds for off-policy evaluation in a general continuous setting.

662

Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics

Here we accommodate continuous nonlinear dynamics and continuous actions, and impute sensory observations corrupted by unknown noise that is private to the animal.

663

Deep Statistical Solvers

This paper introduces Deep Statistical Solvers (DSS), a new class of trainable solvers for optimization problems, arising e.g., from system simulations.

664

Distributionally Robust Parametric Maximum Likelihood Estimation

To mitigate these issues, we propose a distributionally robust maximum likelihood estimator that minimizes the worst-case expected log-loss uniformly over a parametric Kullback-Leibler ball around a parametric nominal distribution.

665

Secretary and Online Matching Problems with Machine Learned Advice

In particular, we study the following online selection problems: (i) the classical secretary problem, (ii) online bipartite matching and (iii) the graphic matroid secretary problem.

666

Deep Transformation-Invariant Clustering

In contrast, we present an orthogonal approach that does not rely on abstract features but instead learns to predict transformations and performs clustering directly in image space.

667

Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree

In contrast, in this paper we study the overfitting solution that minimizes the L1-norm, which is known as Basis Pursuit (BP) in the compressed sensing literature.

668

Improving Generalization in Reinforcement Learning with Mixture Regularization

In this work, we introduce a simple approach, named mixreg, which trains agents on a mixture of observations from different training environments and imposes linearity constraints on the observation interpolations and the supervision (e.g. associated reward) interpolations.
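
A minimal sketch of the mixing step (the mixup-style Beta mixing weight is an assumption; per the highlight, the same weight is applied to observations and supervision such as rewards):

```python
import numpy as np

def mixreg(obs_a, obs_b, sup_a, sup_b, alpha=0.2, rng=np.random):
    """Mix two training observations and their supervision with one shared weight."""
    lam = rng.beta(alpha, alpha)
    obs = lam * obs_a + (1.0 - lam) * obs_b
    sup = lam * sup_a + (1.0 - lam) * sup_b
    return obs, sup
```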

669

Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework

The PDP distinguishes itself from existing methods by two novel techniques: first, we differentiate through Pontryagin’s Maximum Principle, which allows us to obtain the analytical derivative of a trajectory with respect to tunable parameters within an optimal control system, enabling end-to-end learning of dynamics, policies, and/or control objective functions; and second, we propose an auxiliary control system in the backward pass of the PDP framework, whose output is the analytical derivative of the original system’s trajectory with respect to the parameters, which can be iteratively solved using standard control tools.

670

Learning from Aggregate Observations

In this paper, we extend MIL beyond binary classification to other problems such as multiclass classification and regression.

671

The Devil is in the Detail: A Framework for Macroscopic Prediction via Microscopic Models

In this paper, we propose a principled optimization framework for macroscopic prediction by fitting microscopic models based on conditional stochastic optimization.

672

Subgraph Neural Networks

Here, we introduce SubGNN, a subgraph neural network to learn disentangled subgraph representations.

673

Demystifying Orthogonal Monte Carlo and Beyond

In this paper we shed new light on the theoretical principles behind OMC, applying theory of negatively dependent random variables to obtain several new concentration results.

674

Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms

In this paper, we provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.

675

A Scalable Approach for Privacy-Preserving Collaborative Machine Learning

We propose COPML, a fully-decentralized training framework that achieves scalability and privacy-protection simultaneously.

676

Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search

In this work, we propose Glow-TTS, a flow-based generative model for parallel TTS that does not require any external aligner.

677

Towards Learning Convolutions from Scratch

To find architectures with small description length, we propose beta-LASSO, a simple variant of the LASSO algorithm that, when applied to fully-connected networks for image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for training fully-connected networks on CIFAR-10 (84.50%), CIFAR-100 (57.76%) and SVHN (93.84%), bridging the gap between fully-connected and convolutional networks.

678

Cycle-Contrast for Self-Supervised Video Representation Learning

We present Cycle-Contrastive Learning (CCL), a novel self-supervised method for learning video representation.

679

Posterior Re-calibration for Imbalanced Datasets

In order to deal with the shift in the test label distribution that imbalance causes, we motivate the problem from the perspective of an optimal Bayes classifier and derive a prior rebalancing technique that can be solved through a KL-divergence-based optimization.

680

Novelty Search in Representational Space for Sample Efficient Exploration

We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives.

681

Robust Reinforcement Learning via Adversarial training with Langevin Dynamics

Leveraging the powerful Stochastic Gradient Langevin Dynamics, we present a novel, scalable two-player RL algorithm, which is a sampling variant of the two-player policy gradient method.

682

Adversarial Blocking Bandits

We consider a general adversarial multi-armed blocking bandit setting where each played arm can be blocked (unavailable) for some time periods and the reward per arm is given at each time period adversarially without obeying any distribution.

683

Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice

In particular, we consider the multi-shop ski rental (MSSR) problem, which is a generalization of the classical ski rental problem.

684

Multi-label Contrastive Predictive Coding

To overcome this limitation, we introduce a novel estimator based on a multi-label classification problem, where the critic needs to jointly identify multiple positive samples at the same time.

685

Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud

We propose a local-to-global representation learning algorithm for 3D point cloud data, which is appropriate to handle various geometric transformations, especially rotation, without explicit data augmentation with respect to the transformations.

686

Learning Invariants through Soft Unification

We propose Unification Networks, an end-to-end differentiable neural network approach capable of lifting examples into invariants and using those invariants to solve a given task.

687

One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL

The key insight of this work is that learning diverse behaviors for accomplishing a task can directly lead to behavior that generalizes to varying environments, without needing to perform explicit perturbations during training.

688

Variational Bayesian Monte Carlo with Noisy Likelihoods

In this work, we extend VBMC to deal with noisy log-likelihood evaluations, such as those arising from simulation-based models.

689

Finite-Sample Analysis of Contractive Stochastic Approximation Using Smooth Convex Envelopes

In this paper, we consider an SA involving a contraction mapping with respect to an arbitrary norm, and show its finite-sample error bounds while using different stepsizes.

690

Self-Supervised Generative Adversarial Compression

In this paper, we show that standard model compression techniques, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods.

691

An efficient nonconvex reformulation of stagewise convex optimization problems

We develop a nonconvex reformulation designed to exploit this staged structure.

692

From Finite to Countable-Armed Bandits

We propose a fully adaptive online learning algorithm that achieves O(log n) distribution-dependent expected cumulative regret after any number of plays n, and show that this order of regret is best possible.

693

Adversarial Distributional Training for Robust Deep Learning

In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.

694

Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

Building on this, we propose the Convolutional Neural Process (ConvNP), which endows Neural Processes (NPs) with translation equivariance and extends convolutional conditional NPs to allow for dependencies in the predictive distribution.

695

Theory-Inspired Path-Regularized Differential Network Architecture Search

In this work, we solve this problem by theoretically analyzing the effects of various types of operations, e.g. convolution, skip connection and zero operation, on network optimization.

696

Conic Descent and its Application to Memory-efficient Optimization over Positive Semidefinite Matrices

We present an extension of the conditional gradient method to problems whose feasible sets are convex cones.

697

Learning the Geometry of Wave-Based Imaging

We propose a general physics-based deep learning architecture for wave-based imaging problems.

698

Greedy inference with structure-exploiting lazy maps

We propose a framework for solving high-dimensional Bayesian inference problems using structure-exploiting low-dimensional transport maps or flows.

699

Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning

To this end, we propose Nimble, a DL execution engine that runs GPU tasks in parallel with minimal scheduling overhead.

700

Finding the Homology of Decision Boundaries with Active Learning

In this paper, we propose an active learning algorithm to recover the homology of decision boundaries.

701

Reinforced Molecular Optimization with Neighborhood-Controlled Grammars

Here, we propose MNCE-RL, a graph convolutional policy network for molecular optimization with molecular neighborhood-controlled embedding grammars through reinforcement learning.

702

Natural Policy Gradient Primal-Dual Method for Constrained Markov Decision Processes

Specifically, we propose a new Natural Policy Gradient Primal-Dual (NPG-PD) method for CMDPs which updates the primal variable via natural policy gradient ascent and the dual variable via projected sub-gradient descent.

703

Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability

In this paper, we revisit the problem of distribution-independently learning halfspaces under Massart noise with rate $\eta$.

704

Certified Defense to Image Transformations via Randomized Smoothing

We address this challenge by introducing three different defenses, each with a different guarantee (heuristic, distributional and individual) stemming from the method used to bound the interpolation error.
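
For context, the smoothing construction underlying such certificates classifies many randomly transformed copies of the input and aggregates by majority vote. The sketch below shows only that basic mechanism for rotations; it does not implement the paper's interpolation-error bounds or any of its three guarantees (classifier is a hypothetical label-returning callable):

```python
import numpy as np
from scipy.ndimage import rotate

def smoothed_predict(classifier, image, sigma_deg=10.0, n=100,
                     rng=np.random.default_rng(0)):
    """Majority vote of a base classifier over randomly rotated copies of
    the input: the basic mechanism behind smoothing-based certificates."""
    votes = {}
    for _ in range(n):
        angle = rng.normal(0.0, sigma_deg)        # random rotation in degrees
        transformed = rotate(image, angle, reshape=False)
        label = classifier(transformed)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```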

705

Estimation of Skill Distribution from a Tournament

In this paper, we study the problem of learning the skill distribution of a population of agents from observations of pairwise games in a tournament.

706

Reparameterizing Mirror Descent as Gradient Descent

We present a general framework for casting a mirror descent update as a gradient descent update on a different set of parameters.
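
A classical special case makes the idea concrete: the unnormalized exponentiated-gradient update (mirror descent with an entropic regularizer) on w agrees, to first order in the step size, with plain gradient descent on u under the reparameterization w = u^2. The toy check below illustrates this correspondence; it is not the paper's general construction:

```python
import numpy as np

eta = 0.01
g = np.array([0.5, -1.0, 2.0])     # gradient of the loss with respect to w
w = np.array([0.2, 0.3, 0.5])

# Mirror descent (unnormalized exponentiated gradient) directly on w.
w_md = w * np.exp(-eta * g)

# Gradient descent on u, where w = u**2; the chain rule gives dL/du = 2*u*g.
u = np.sqrt(w)
u_new = u - (eta / 4.0) * (2.0 * u * g)   # step eta/4 matches EG to O(eta^2)
w_gd = u_new ** 2

print(np.max(np.abs(w_md - w_gd)))        # tiny (~1e-5): updates nearly coincide
```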

707

General Control Functions for Causal Effect Estimation from IVs

To construct general control functions and estimate effects, we develop the general control function method (GCFN).

708

Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards

In this paper, we consider stochastic multi-armed bandits (MABs) with heavy-tailed rewards, whose $p$-th moment is bounded by a constant $\nu_p$ for $1 < p \le 2$.

709

Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks

We propose the first algorithm for certifying the robustness of GCNs to topological attacks in the application of graph classification.

710

Zero-Resource Knowledge-Grounded Dialogue Generation

To this end, we propose representing the knowledge that bridges a context and a response and the way that the knowledge is expressed as latent variables, and devise a variational approach that can effectively estimate a generation model from independent dialogue corpora and knowledge corpora.

711

Targeted Adversarial Perturbations for Monocular Depth Prediction

We study the effect of adversarial perturbations on the task of monocular depth prediction.

712

Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties

We propose a novel Gaussian variational family that allows for retaining covariances between latent processes while achieving fast convergence by marginalising out all global latent variables.

713

Offline Imitation Learning with a Misspecified Simulator

In this work, we investigate policy learning in the condition of a few expert demonstrations and a simulator with misspecified dynamics.

714

Multi-Fidelity Bayesian Optimization via Deep Neural Networks

To address this issue, we propose Deep Neural Network Multi-Fidelity Bayesian Optimization (DNN-MFBO) that can flexibly capture all kinds of complicated relationships between the fidelities to improve the objective function estimation and hence the optimization performance.

715

PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals

In this work we propose PlanGAN, a model-based algorithm specifically designed for solving multi-goal tasks in environments with sparse rewards.

716

Bad Global Minima Exist and SGD Can Reach Them

We find that if we do not regularize explicitly, then SGD can easily be made to converge to poorly-generalizing, high-complexity models: all it takes is to first train on a random labeling of the data before switching to properly training with the correct labels.

717

Optimal Prediction of the Number of Unseen Species with Multiplicity

We completely resolve this problem by determining the limit of estimation to be $a \approx (\log n)/\mu$, with both lower and upper bounds matching up to constant factors.

718

Characterizing Optimal Mixed Policies: Where to Intervene and What to Observe

In this paper, we investigate several properties of the class of mixed policies and provide an efficient and effective characterization, including optimality and non-redundancy.

719

Factor Graph Neural Networks

We generalize the GNN into a factor graph neural network (FGNN) providing a simple way to incorporate dependencies among multiple variables.

720

A Closer Look at Accuracy vs. Robustness

With this property in mind, we then prove that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions, and hence, there should be no inherent tradeoff between robustness and accuracy.

721

Curriculum Learning by Dynamic Instance Hardness

By analogy, in this paper, we study the dynamics of a deep neural network’s (DNN) performance on individual samples during its learning process.

722

Spin-Weighted Spherical CNNs

In this paper, we present a new type of spherical CNN that allows anisotropic filters in an efficient way, without ever leaving the spherical domain.

723

Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks

Our aim is to achieve the best of both worlds, and we do so by introducing a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which achieves improved systematic generalization on the task of learning to execute programs using control flow graphs.

724

AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference

In this paper, for fast and accurate secure neural network inference, we propose an automated layer-wise parameter selector, AutoPrivacy, that leverages deep reinforcement learning to automatically determine a set of HE parameters for each linear layer in a HPPNN.

725

Baxter Permutation Process

In this paper, a Bayesian nonparametric (BNP) model for Baxter permutations (BPs), termed the BP process (BPP), is proposed and applied to relational data analysis.

726

Characterizing emergent representations in a space of candidate learning rules for deep networks

Here we present a continuous two-dimensional space of candidate learning rules, parameterized by levels of top-down feedback and Hebbian learning.

727

Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation

To improve the deployment of AutoML on tabular data, we propose FAST-DAD to distill arbitrarily-complex ensemble predictors into individual models like boosted trees, random forests, and deep networks.

728

Adaptive Probing Policies for Shortest Path Routing

Inspired by traffic routing applications, we consider the problem of finding the shortest path from a source $s$ to a destination $t$ in a graph, when the lengths of the edges are unknown.

729

Approximate Heavily-Constrained Learning with Lagrange Multiplier Models

Our proposal is to associate a feature vector with each constraint, and to learn a “multiplier model” that maps each such vector to the corresponding Lagrange multiplier.

730

Faster Randomized Infeasible Interior Point Methods for Tall/Wide Linear Programs

In this paper, we consider infeasible IPMs for the special case where the number of variables is much larger than the number of constraints (i.e., wide), or vice-versa (i.e., tall), by taking the dual.

731

Sliding Window Algorithms for k-Clustering Problems

In this work, we focus on $k$-clustering problems such as $k$-means and $k$-median.

732

AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning

Unlike existing methods, we propose an adaptive sharing approach, called AdaShare, that decides what to share across which tasks to achieve the best recognition accuracy, while taking resource efficiency into account.

733

Approximate Cross-Validation for Structured Models

In the present work, we address (i) by extending ACV to CV schemes with dependence structure between the folds.

734

Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation

We introduce Exemplar VAEs, a family of generative models that bridge the gap between parametric and non-parametric, exemplar based generative models.

735

Debiased Contrastive Learning

Motivated by this observation, we develop a debiased contrastive objective that corrects for the sampling of same-label datapoints, even without knowledge of the true labels.
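
The correction admits a closed form: given an assumed class prior tau_plus (the probability that a sampled "negative" actually shares the anchor's label), the biased negative term is replaced by an estimate of the true-negative term, clipped at its theoretical minimum e^{-1/t}. A PyTorch sketch of that estimator (shapes and variable names are mine):

```python
import math
import torch

def debiased_contrastive_loss(pos_sim, neg_sim, tau_plus=0.1, t=0.5):
    """Debiased InfoNCE-style loss (sketch).

    pos_sim: (B,) similarities to the positive sample.
    neg_sim: (B, N) similarities to N sampled (possibly false) negatives.
    """
    N = neg_sim.shape[1]
    pos = torch.exp(pos_sim / t)
    neg_sum = torch.exp(neg_sim / t).sum(dim=1)
    # Estimate the contribution of *true* negatives and clip it at its
    # theoretical minimum, e^{-1/t}.
    g = (neg_sum / N - tau_plus * pos) / (1.0 - tau_plus)
    g = torch.clamp(g, min=math.exp(-1.0 / t))
    return -torch.log(pos / (pos + N * g)).mean()
```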

736

UCSG-NET: Unsupervised Discovering of Constructive Solid Geometry Tree

On the contrary, we propose a model that extracts a CSG parse tree without any supervision – UCSG-Net.

737

Generalized Boosting

In this work, we specifically focus on one form of aggregation: function composition.

738

COT-GAN: Generating Sequential Data via Causal Optimal Transport

We introduce COT-GAN, an adversarial algorithm to train implicit generative models optimized for producing sequential data.

739

Impossibility Results for Grammar-Compressed Linear Algebra

In this paper we consider lossless compression schemes, and ask if we can run our computations on the compressed data as efficiently as if the original data was that small.

740

Understanding spiking networks through convex optimization

Here we turn these findings around and show that virtually all inhibition-dominated SNNs can be understood through the lens of convex optimization, with network connectivity, timescales, and firing thresholds being intricately linked to the parameters of underlying convex optimization problems.

741

Better Full-Matrix Regret via Parameter-Free Online Learning

We provide online convex optimization algorithms that guarantee improved full-matrix regret bounds.

742

Large-Scale Methods for Distributionally Robust Optimization

We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets.

743

Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring

To mitigate these problems, we present a novel Thompson-sampling-based algorithm, which enables us to exactly sample the target parameter from the posterior distribution.

744

Bandit Linear Control

We present a new and efficient algorithm that, for strongly convex and smooth costs, obtains regret that grows with the square root of the time horizon T.

745

Refactoring Policy for Compositional Generalizability using Self-Supervised Object Proposals

We propose a two-stage framework, which refactorizes a high-reward teacher policy into a generalizable student policy with strong inductive bias.

746

PEP: Parameter Ensembling by Perturbation

We introduce a new approach, Parameter Ensembling by Perturbation (PEP), that constructs an ensemble of parameter values as random perturbations of the optimal parameter set from training by a Gaussian with a single variance parameter.
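
The construction is easy to state in code: draw k parameter vectors from an isotropic Gaussian centered at the trained weights and average the resulting predictive distributions. A minimal PyTorch sketch, where sigma stands in for the single variance parameter that PEP selects by a separate procedure:

```python
import copy
import torch

@torch.no_grad()
def pep_predict(model, x, sigma=0.01, k=10):
    """Average the predictive distributions of k Gaussian perturbations of
    the trained parameters (sketch of PEP)."""
    mean_probs = 0.0
    for _ in range(k):
        perturbed = copy.deepcopy(model)
        for p in perturbed.parameters():
            p.add_(sigma * torch.randn_like(p))   # theta + sigma * epsilon
        mean_probs = mean_probs + torch.softmax(perturbed(x), dim=-1)
    return mean_probs / k
```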

747

Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View

In this paper, we take a step in this direction by providing the first asymptotically precise analysis of linear multiclass classification.

748

Adversarial Example Games

In this work, we provide a theoretical foundation for crafting transferable adversarial examples to entire hypothesis classes.

749

Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts

In particular, we propose a novel joint-training framework to train plain CNN by leveraging the gradients of the ResNet counterpart.

750

Provably Efficient Neural Estimation of Structural Equation Models: An Adversarial Approach

We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using stochastic gradient descent.

751

Security Analysis of Safe and Seldonian Reinforcement Learning Algorithms

We introduce a new measure of security to quantify the susceptibility to perturbations in training data by creating an attacker model that represents a worst-case analysis, and show that a couple of Seldonian RL methods are extremely sensitive to even a few data corruptions.

752

Learning to Play Sequential Games versus Unknown Opponents

We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.

753

Further Analysis of Outlier Detection with Deep Generative Models

In this work, we present a possible explanation for this phenomenon, starting from the observation that a model’s typical set and high-density region may not coincide.

754

Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning

In this paper, we propose a novel model-based reinforcement learning algorithm, called BrIdging Reality and Dream (BIRD).

755

Neural Networks Learning and Memorization with (almost) no Over-Parameterization

In this paper we prove that SGD on depth-two neural networks can memorize samples, learn polynomials with bounded weights, and learn certain kernel spaces, with near-optimal network size, sample complexity, and runtime.

756

Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits

We address the problem of zero-order optimization of a strongly convex function.

757

Towards a Combinatorial Characterization of Bounded-Memory Learning

In this paper we aim to develop combinatorial dimensions that characterize bounded memory learning.

758

Chaos, Extremism and Optimism: Volume Analysis of Learning in Games

We perform volume analysis of Multiplicative Weights Updates (MWU) and its optimistic variant (OMWU) in zero-sum as well as coordination games.

759

On Regret with Multiple Best Arms

Our goal is to design algorithms that can automatically adapt to the unknown hardness of the problem, i.e., the number of best arms.

760

Matrix Completion with Hierarchical Graph Side Information

We consider a matrix completion problem that exploits social or item similarity graphs as side information.

761

Is Long Horizon RL More Difficult Than Short Horizon RL?

Our analysis introduces two ideas: (i) the construction of an $\varepsilon$-net for near-optimal policies whose log-covering number scales only logarithmically with the planning horizon, and (ii) the Online Trajectory Synthesis algorithm, which adaptively evaluates all policies in a given policy class and enjoys a sample complexity that scales logarithmically with the cardinality of the given policy class.

762

Hamiltonian Monte Carlo using an adjoint-differentiated Laplace approximation: Bayesian inference for latent Gaussian models and beyond

To implement this scheme efficiently, we derive a novel adjoint method that propagates the minimal information needed to construct the gradient of the approximate marginal likelihood.

763

Adversarial Learning for Robust Deep Clustering

In this paper, we propose a robust deep clustering method based on adversarial learning.

764

Learning Mutational Semantics

We propose an unsupervised solution based on language models that simultaneously learn continuous latent representations.

765

Learning to Learn Variational Semantic Memory

In this paper, we introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.

766

Myersonian Regression

Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression.

767

Learnability with Indirect Supervision Signals

In this paper, we develop a unified theoretical framework for multi-class classification when the supervision is provided by a variable that contains nonzero mutual information with the gold label.

768

Towards Safe Policy Improvement for Non-Stationary MDPs

We take the first steps towards ensuring safety, with high confidence, for smoothly-varying non-stationary decision problems.

769

Finer Metagenomic Reconstruction via Biodiversity Optimization

Here, we leverage a recently developed notion of biological diversity that simultaneously accounts for organism similarities and retains the optimization strategy underlying compressive-sensing-based approaches.

770

Causal Discovery in Physical Systems from Videos

In particular, our goal is to discover the structural dependencies among environmental and object variables: inferring the type and strength of interactions that have a causal effect on the behavior of the dynamical system.

771

Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data

In this paper, we propose Glyph, an FHE-based technique to quickly and accurately train DNNs on encrypted data by switching between TFHE (Fast Fully Homomorphic Encryption over the Torus) and BGV cryptosystems.

772

Smoothed Analysis of Online and Differentially Private Learning

In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature.

773

Self-Paced Deep Reinforcement Learning

In this paper, we propose an answer by interpreting the curriculum generation as an inference problem, where distributions over tasks are progressively learned to approach the target task.

774

Kalman Filtering Attention for User Behavior Modeling in CTR Prediction

To tackle the two limitations, we propose a novel attention mechanism, termed Kalman Filtering Attention (KFAtt), that considers the weighted pooling in attention as a maximum a posteriori (MAP) estimation.

775

Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples

We address this shortcoming by proposing a novel loss function for DPN to maximize the representation gap between in-domain and OOD examples.

776

Fully Convolutional Mesh Autoencoder using Efficient Spatially Varying Kernels

In this paper, we propose a non-template-specific fully convolutional mesh autoencoder for arbitrary registered mesh data.

777

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Here, we develop GNNGuard, a general defense approach against a variety of training-time attacks that perturb the discrete graph structure.

778

Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction

We propose Geo-PIFu, a method to recover a 3D mesh from a monocular color image of a clothed person.

779

Optimal visual search based on a model of target detectability in natural images

We present a novel approach for approximating the foveated detectability of a known target in natural backgrounds based on biological aspects of the human visual system.

780

Towards Convergence Rate Analysis of Random Forests for Classification

We present the first finite-sample rate O(n^{-1/(8d+2)}) on the convergence of pure random forests for classification, which can be improved to O(n^{-1/(3.87d+2)}) by considering the midpoint splitting mechanism.

781

List-Decodable Mean Estimation via Iterative Multi-Filtering

We study the problem of list-decodable mean estimation for bounded covariance distributions.

782

Exact Recovery of Mangled Clusters with Same-Cluster Queries

We study the cluster recovery problem in the semi-supervised active clustering framework.

783

Steady State Analysis of Episodic Reinforcement Learning

In this paper we prove that unique steady-state distributions pervasively exist in episodic learning tasks, and that the marginal distributions of the system state indeed approach the steady state in essentially all episodic tasks.

784

Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures

Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment (DFA) to neural view synthesis, recommender systems, geometric learning, and natural language processing.

785

Bayesian Optimization for Iterative Learning

In this paper, we present a Bayesian optimization (BO) approach which exploits the iterative structure of learning algorithms for efficient hyperparameter tuning.

786

Minimax Bounds for Generalized Linear Models

We establish a new class of minimax prediction error bounds for generalized linear models.

787

Projection Robust Wasserstein Distance and Riemannian Optimization

Our contribution in this paper is to revisit the original motivation behind WPP/PRW, but take the hard route of showing that, despite its non-convexity and nonsmoothness, and even despite some hardness results proved by Niles-Weed and Rigollet (2019) in a minimax sense, the original formulation for PRW/WPP can be efficiently computed in practice using Riemannian optimization, yielding in relevant cases better behavior than its convex relaxation.

788

CoinDICE: Off-Policy Confidence Interval Estimation

By applying the generalized empirical likelihood method to the resulting Lagrangian, we propose CoinDICE, a novel and efficient algorithm for computing confidence intervals.

789

Simple and Fast Algorithm for Binary Integer and Online Linear Programming

In this paper, we develop a simple and fast online algorithm for solving a class of binary integer linear programs (LPs) arising in the general resource allocation problem.

790

Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction

To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction ($\text{MCR}^2$), an information-theoretic measure that maximizes the coding rate difference between the whole dataset and the sum of each individual class.
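
The measure has a compact log-det form. As a reference, here is a NumPy sketch of the coding rate and the rate-reduction objective for features $Z \in \mathbb{R}^{d \times n}$, following my reading of the published definition (eps is the coding precision; in practice the feature columns are also normalized):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T) for Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Coding rate of the whole dataset minus the weighted sum of the
    per-class coding rates (the MCR^2 objective, per my reading)."""
    _, n = Z.shape
    per_class = sum(
        (labels == c).sum() / n * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels))
    return coding_rate(Z, eps) - per_class
```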

791

Learning Rich Rankings

In this work, we contribute a contextual repeated selection (CRS) model that leverages recent advances in choice modeling to bring a natural multimodality and richness to the rankings space.

792

Color Visual Illusions: A Statistics-based Computational Model

Given this tool, we present a model that supports the approach and explains lightness and color visual illusions in a unified manner.

793

Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever.

794

Universal guarantees for decision tree induction via a higher-order splitting criterion

We propose a simple extension of top-down decision tree learning heuristics such as ID3, C4.5, and CART.

795

Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation

In light of this gap, we develop a novel theoretical framework for attribute obfuscation.

796

A Boolean Task Algebra for Reinforcement Learning

In this work we formalise the logical composition of tasks as a Boolean algebra.

797

Learning with Differentiable Perturbed Optimizers

In order to expand the scope of learning problems that can be solved in an end-to-end fashion, we propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.

798

Optimal Learning from Verified Training Data

To tackle this problem, we present a Stackelberg competition model for least squares regression, in which data is provided by agents who wish to achieve specific predictions for their data.

799

Online Linear Optimization with Many Hints

We study an online linear optimization (OLO) problem in which the learner is provided access to $K$ “hint” vectors in each round prior to making a decision.

800

Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification

We apply dynamical mean-field theory from statistical physics to track the dynamics of the algorithm in the high-dimensional limit via a self-consistent stochastic process.

801

Causal Discovery from Soft Interventions with Unknown Targets: Characterization and Learning

In this paper, we investigate the task of structural learning in non-Markovian systems (i.e., when latent variables affect more than one observable) from a combination of observational and soft experimental data when the interventional targets are unknown.

802

Exploiting the Surrogate Gap in Online Multiclass Classification

We present Gaptron, a randomized first-order algorithm for online multiclass classification.

803

The Pitfalls of Simplicity Bias in Neural Networks

We attempt to reconcile SB and the superior standard generalization of neural networks with the non-robustness observed in practice by introducing piecewise-linear and image-based datasets, which (a) incorporate a precise notion of simplicity, (b) comprise multiple predictive features with varying levels of simplicity, and (c) capture the non-robustness of neural networks trained on real data.

804

Automatically Learning Compact Quality-aware Surrogates for Optimization Problems

To address these shortcomings, we learn a low-dimensional surrogate model of a large optimization problem by representing the feasible space in terms of meta-variables, each of which is a linear combination of the original variables.

805

Empirical Likelihood for Contextual Bandits

We propose an estimator and confidence interval for computing the value of a policy from off-policy data in the contextual bandit setting.

806

Can Q-Learning with Graph Networks Learn a Generalizable Branching Heuristic for a SAT Solver?

We present Graph-Q-SAT, a branching heuristic for a Boolean SAT solver trained with value-based reinforcement learning (RL) using Graph Neural Networks for function approximation.

807

Non-reversible Gaussian processes for identifying latent dynamical structure in neural data

We therefore introduce GPFADS (Gaussian Process Factor Analysis with Dynamical Structure), which models single-trial neural population activity using low-dimensional, non-reversible latent processes.

808

Listening to Sounds of Silence for Speech Denoising

We introduce a deep learning model for speech denoising, a long-standing challenge in audio analysis arising in numerous applications.

809

BoxE: A Box Embedding Model for Knowledge Base Completion

Here, we propose a spatio-translational embedding model, called BoxE, that simultaneously addresses all these limitations.

810

Coherent Hierarchical Multi-Label Classification Networks

In this paper, we propose C-HMCNN(h), a novel approach for HMC problems, which, given a network h for the underlying multi-label classification problem, exploits the hierarchy information in order to produce predictions coherent with the constraint and improve performance.

811

Walsh-Hadamard Variational Inference for Bayesian Deep Learning

Inspired by the literature on kernel methods, and in particular on structured approximations of distributions of random matrices, this paper proposes Walsh-Hadamard Variational Inference (WHVI), which uses Walsh-Hadamard-based factorization strategies to reduce the parameterization and accelerate computations, thus avoiding over-regularization issues with the variational objective.

812

Federated Bayesian Optimization via Thompson Sampling

This paper presents federated Thompson sampling (FTS) which overcomes a number of key challenges of FBO and FL in a principled way: We (a) use random Fourier features to approximate the Gaussian process surrogate model used in BO, which naturally produces the parameters to be exchanged between agents, (b) design FTS based on Thompson sampling, which significantly reduces the number of parameters to be exchanged, and (c) provide a theoretical convergence guarantee that is robust against heterogeneous agents, which is a major challenge in FL and FBO.

813

MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation

We propose the multiON task, which requires navigation to an episode-specific sequence of objects in a realistic environment.

814

Neural Complexity Measures

We propose Neural Complexity (NC), a meta-learning framework for predicting generalization.

815

Optimal Iterative Sketching Methods with the Subsampled Randomized Hadamard Transform

Our technical contributions include a novel formula for the second moment of the inverse of projected matrices.

816

Provably adaptive reinforcement learning in metric spaces

We provide a refined analysis of the algorithm of Sinclair, Banerjee, and Yu (2019) and show that its regret scales with the zooming dimension of the instance.

817

ShapeFlow: Learnable Deformation Flows Among 3D Shapes

We present ShapeFlow, a flow-based model for learning a deformation space for entire classes of 3D shapes with large intra-class variations.

818

Self-Supervised Learning by Cross-Modal Audio-Video Clustering

Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video).
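
Schematically, XDC alternates clustering in one modality with supervised training in the other. The loop below sketches a single such round using scikit-learn k-means; audio_encoder, video_encoder, and train_classifier are hypothetical stand-ins for the real networks and training step, and the audio encoder is assumed to return one feature vector per clip:

```python
from sklearn.cluster import KMeans

def xdc_round(video_clips, audio_clips, video_encoder, audio_encoder,
              train_classifier, k=256):
    """One cross-modal deep-clustering round (sketch): cluster audio
    features, then supervise the video model with the cluster ids."""
    audio_features = [audio_encoder(clip) for clip in audio_clips]
    pseudo_labels = KMeans(n_clusters=k).fit_predict(audio_features)
    # The next round would swap roles: cluster video features and use
    # them as pseudo-labels for the audio model.
    train_classifier(video_encoder, video_clips, pseudo_labels)
```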

819

Optimal Query Complexity of Secure Stochastic Convex Optimization

We study the secure stochastic convex optimization problem: a learner aims to learn the optimal point of a convex function through sequentially querying a (stochastic) gradient oracle, while an adversary aims to free-ride and infer the learner’s learning outcome from observing the learner’s queries.

820

DynaBERT: Dynamic BERT with Adaptive Width and Depth

In this paper, we propose a novel dynamic BERT model (abbreviated as DynaBERT), which can flexibly adjust the size and latency by selecting adaptive width and depth.

821

Generalization Bound of Gradient Descent for Non-Convex Metric Learning

In this paper, we theoretically address this question and prove the agnostic Probably Approximately Correct (PAC) learnability for metric learning algorithms with non-convex objective functions optimized via gradient descent (GD); in particular, our theoretical guarantee takes the iteration number into account.

822

Dynamic Submodular Maximization

In this paper, we propose the first dynamic algorithm for this problem.

823

Inference for Batched Bandits

In this work, we develop methods for inference on data collected in batches using a bandit algorithm.

824

Approximate Cross-Validation with Low-Rank Data in High Dimensions

Guided by this observation, we develop a new algorithm for ACV that is fast and accurate in the presence of ALR data.

825

GANSpace: Discovering Interpretable GAN Controls

This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day.
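
The core recipe is short: sample many latent codes, fit PCA to them (for StyleGAN, to the mapped w codes; for BigGAN, to early-layer activations), and shift a code along a leading component to obtain a semantic edit. A hedged sketch, where sample_latent is a hypothetical function returning one mapped latent code per call:

```python
import numpy as np
from sklearn.decomposition import PCA

def find_edit_directions(sample_latent, n_samples=10000, n_components=20):
    """Fit PCA to sampled (mapped) latent codes; the principal axes serve
    as candidate interpretable edit directions."""
    Z = np.stack([sample_latent() for _ in range(n_samples)])
    return PCA(n_components=n_components).fit(Z)

# Editing: shift a latent code along the i-th principal direction and
# feed the result back through the generator.
# z_edited = z + strength * pca.components_[i]
```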

826

Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization

We derive a novel formulation of q-Expected Hypervolume Improvement (qEHVI), an acquisition function that extends EHVI to the parallel, constrained evaluation setting.

827

Neuron-level Structured Pruning using Polarization Regularizer

To achieve this goal, we propose a new regularizer on scaling factors, namely polarization regularizer.

828

Limits on Testing Structural Changes in Ising Models

We present novel information-theoretic limits on detecting sparse changes in Ising models, a problem that arises in many applications where network changes can occur due to some external stimuli.

829

Field-wise Learning for Multi-field Categorical Data

We propose a new method for learning with multi-field categorical data.

830

Continual Learning in Low-rank Orthogonal Subspaces

We propose to learn tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference.

831

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons.

832

Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms

In this work, we study the proposal, by Steinke and Zakynthinou (2020), to reason about the generalization error of a learning algorithm by introducing a super sample that contains the training sample as a random subset and computing mutual information conditional on the super sample.

833

Learning Deformable Tetrahedral Meshes for 3D Reconstruction

We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.

834

Information theoretic limits of learning a sparse rule

We consider generalized linear models in regimes where the number of nonzero components of the signal and accessible data points are sublinear with respect to the size of the signal.

835

Self-supervised learning through the eyes of a child

In this paper, our goal is precisely to achieve such progress by utilizing modern self-supervised deep learning methods and a recent longitudinal, egocentric video dataset recorded from the perspective of three young children (Sullivan et al., 2020).

836

Unsupervised Semantic Aggregation and Deformable Template Matching for Semi-Supervised Learning

In this paper, we combine both to propose an Unsupervised Semantic Aggregation and Deformable Template Matching (USADTM) framework for SSL, which strives to improve classification performance with few labeled data and reduce the cost of data annotation.

837

A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning

However, instead of focusing on biologically evolved human-like agents, our concern is rather to better understand the learning and operating behaviour of engineered networked systems comprising general-purpose reinforcement learning agents, subject only to nonbiological constraints such as memory, computation and communication bandwidth.

838

What shapes feature representations? Exploring datasets, architectures, and training

We study these questions using synthetic datasets in which the task-relevance of input features can be controlled directly.

839

Optimal Best-arm Identification in Linear Bandits

We study the problem of best-arm identification with fixed confidence in stochastic linear bandits.

840

Data Diversification: A Simple Strategy For Neural Machine Translation

We introduce Data Diversification: a simple but effective strategy to boost neural machine translation (NMT) performance.

841

Interstellar: Searching Recurrent Architecture for Knowledge Graph Embedding

In this work, based on the relational paths, which are composed of a sequence of triplets, we define the Interstellar as a recurrent neural architecture search problem for the short-term and long-term information along the paths.

842

CoSE: Compositional Stroke Embeddings

We present a generative model for stroke-based drawing tasks which is able to model complex free-form structures.

843

Learning Multi-Agent Coordination for Enhancing Target Coverage in Directional Sensor Networks

To realize this, we propose a Hierarchical Target-oriented Multi-Agent Coordination (HiT-MAC), which decomposes the target coverage problem into two-level tasks: targets assignment by a coordinator and tracking assigned targets by executors.

844

Biological credit assignment through dynamic inversion of feedforward networks

Overall, our work introduces an alternative perspective on credit assignment in the brain, and proposes a special role for temporal dynamics and feedback control during learning.

845

Discriminative Sounding Objects Localization via Self-supervised Audiovisual Matching

In this paper, we propose a two-stage learning framework to perform self-supervised class-aware sounding object localization.

846

Learning Multi-Agent Communication through Structured Attentive Reasoning

By developing an explicit architecture that is targeted towards communication, our work aims to open new directions to overcome important challenges in multi-agent cooperation through learned communication.

847

Private Identity Testing for High-Dimensional Distributions

In this work we present novel differentially private identity (goodness-of-fit) testers for natural and widely studied classes of multivariate product distributions: Gaussians in $\mathbb{R}^d$ with known covariance and product distributions over $\{\pm 1\}^d$.

848

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression

We consider the linear model $y = X\beta_{\star} + \varepsilon$ with $X \in \mathbb{R}^{n\times p}$ in the overparameterized regime $p > n$.

849

An Efficient Asynchronous Method for Integrating Evolutionary and Gradient-based Policy Search

In this paper, we introduce an Asynchronous Evolution Strategy-Reinforcement Learning (AES-RL) that maximizes the parallel efficiency of ES and integrates it with policy gradient methods.

850

MetaSDF: Meta-Learning Signed Distance Functions

Here, we formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.

851

Simple and Scalable Sparse k-means Clustering via Feature Ranking

In this paper, we propose a novel framework for sparse k-means clustering that is intuitive, simple to implement, and competitive with state-of-the-art algorithms.

852

Model-based Adversarial Meta-Reinforcement Learning

We propose a minimax objective and optimize it by alternating between learning the dynamics model on a fixed task and finding the adversarial task for the current model: the task for which the policy induced by the model is maximally suboptimal.

853

Graph Policy Network for Transferable Active Learning on Graphs

In this paper, we study active learning for GNNs, i.e., how to efficiently label the nodes on a graph to reduce the annotation cost of training GNNs.

854

Towards a Better Global Loss Landscape of GANs

In this work, we perform a global landscape analysis of the empirical loss of GANs.

855

Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

We propose two weighting schemes and prove that they recover the correct maximal action for any joint action $Q$-values, and therefore for $Q^*$ as well.

856

BanditPAM: Almost Linear Time k-Medoids Clustering via Multi-Armed Bandits

We propose BanditPAM, a randomized algorithm inspired by techniques from multi-armed bandits, that reduces the complexity of each PAM iteration from O(n^2) to O(n log n) and returns the same results with high probability, under assumptions on the data that often hold in practice.

857

UDH: Universal Deep Hiding for Steganography, Watermarking, and Light Field Messaging

Exploiting its property of being universal, we propose universal watermarking as a timely solution to address the concern of the exponentially increasing amount of images/videos.

858

Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders

We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder, while preserving its learned multimodality.

859

An Unbiased Risk Estimator for Learning with Augmented Classes

In this paper we show that, by using unlabeled training data to approximate the potential distribution of augmented classes, an unbiased risk estimator of the testing distribution can be established for the LAC problem under mild assumptions, which paves a way to develop a sound approach with theoretical guarantees.

860

AutoBSS: An Efficient Algorithm for Block Stacking Style Search

The proposed method, AutoBSS, is a novel AutoML algorithm that iteratively refines and clusters Block Stacking Style Codes (BSSC) via Bayesian optimization, and can find an optimal BSS in a few trials without biased evaluation.

861

Pushing the Limits of Narrow Precision Inferencing at Cloud Scale with Microsoft Floating Point

In this paper, we explore the limits of Microsoft Floating Point (MSFP), a new class of datatypes developed for production cloud-scale inferencing on custom hardware.

862

Stochastic Optimization with Laggard Data Pipelines

We provide the first convergence analyses of "data-echoed" extensions of common optimization methods, showing that they exhibit provable improvements over their synchronous counterparts.

863

Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs

In this paper, to learn graph neural networks on heterogeneous graphs, we propose a novel self-supervised auxiliary learning method using meta-paths, which are composite relations of multiple edge types.

864

GPS-Net: Graph-based Photometric Stereo Network

In this paper, we present a Graph-based Photometric Stereo Network, which unifies per-pixel and all-pixel processing to explore both inter-image and intra-image information.

865

Consistent Structural Relation Learning for Zero-Shot Segmentation

In this work, we propose a Consistent Structural Relation Learning (CSRL) approach to constrain the generation of unseen visual features by exploiting the structural relations between seen and unseen categories.

866

Model Selection in Contextual Stochastic Bandit Problems

We study bandit model selection in stochastic environments.

867

Truncated Linear Regression in High Dimensions

In order to deal with both truncation and high-dimensionality at the same time, we develop new techniques that not only generalize the existing ones but, we believe, are also of independent interest.

868

Incorporating Pragmatic Reasoning Communication into Emergent Language

Given that their combination has been explored in linguistics, in this work, we combine computational models of short-term mutual reasoning-based pragmatics with long-term language emergentism.

869

Deep Subspace Clustering with Data Augmentation

We propose a technique to exploit the benefits of data augmentation in DSC algorithms.

870

An Empirical Process Approach to the Union Bound: Practical Algorithms for Combinatorial and Linear Bandits

This paper proposes near-optimal algorithms for the pure-exploration linear bandit problem in the fixed confidence and fixed budget settings.

871

Can Graph Neural Networks Count Substructures?

Inspired by this, we propose to study the expressive power of graph neural networks (GNNs) via their ability to count attributed graph substructures, extending recent works that examine their power in graph isomorphism testing and function approximation.

872

A Bayesian Perspective on Training Speed and Model Selection

We take a Bayesian perspective to illustrate a connection between training speed and the marginal likelihood in linear models.

873

On the Modularity of Hypernetworks

In this paper, we define the property of modularity as the ability to effectively learn a different function for each input instance $I$.

874

Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies

To circumvent this issue, we propose several new doubly robust estimators based on different kernelization approaches.

875

Provably Efficient Neural GTD for Off-Policy Learning

This paper studies a gradient temporal difference (GTD) algorithm using neural network (NN) function approximators to minimize the mean squared Bellman error (MSBE).

876

Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration

In this paper we propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data, where parameter gradients are estimated using a learned sampler that mimics local search.

877

Stable and expressive recurrent vision models

Here, we develop a new learning algorithm, "contractor recurrent back-propagation" (C-RBP), which alleviates these issues by achieving constant O(1) memory-complexity with steps of recurrent processing.

878

Entropic Optimal Transport between Unbalanced Gaussian Measures has a Closed Form

In this paper, we propose to fill the void at the intersection between these two schools of thought in OT by proving that the entropy-regularized optimal transport problem between two Gaussian measures admits a closed form.

879

BRP-NAS: Prediction-based NAS using GCNs

To address this problem, we propose BRP-NAS, an efficient hardware-aware NAS enabled by an accurate performance predictor based on a graph convolutional network (GCN).

880

Deep Shells: Unsupervised Shape Correspondence with Optimal Transport

We propose a novel unsupervised learning approach to 3D shape correspondence that builds a multiscale matching pipeline into a deep neural network.

881

ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding

In this paper, we formulate neural architecture search as a sparse coding problem.

882

Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D

In this paper, we fill this gap by constructing Rel3D: the first large-scale, human-annotated dataset for grounding spatial relations in 3D.

883

Regularizing Black-box Models for Improved Interpretability

Our method, ExpO, is a hybridization of these approaches that regularizes a model for explanation quality at training time.

884

Trust the Model When It Is Confident: Masked Model-based Actor-Critic

In this work, we find that better model usage can make a huge difference.

885

Semi-Supervised Neural Architecture Search

In this paper, we propose SemiNAS, a semi-supervised NAS approach that leverages numerous unlabeled architectures (without evaluation and thus nearly no cost).

886

Consistency Regularization for Certified Robustness of Smoothed Classifiers

We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
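
The regularizer can be read as "make the classifier's predictions agree across Gaussian noise draws." The sketch below adds a KL-based consistency term to the usual cross-entropy on noisy inputs; it illustrates the idea rather than the paper's exact objective (sigma, lam, and the number of draws m are placeholders):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, y, sigma=0.25, m=2, lam=10.0):
    """Cross-entropy on Gaussian-noised inputs plus a KL penalty on the
    disagreement between predictions across noise draws (sketch)."""
    log_probs = [F.log_softmax(model(x + sigma * torch.randn_like(x)), dim=1)
                 for _ in range(m)]
    ce = sum(F.nll_loss(lp, y) for lp in log_probs) / m
    mean_prob = torch.stack([lp.exp() for lp in log_probs]).mean(dim=0)
    consistency = sum(F.kl_div(lp, mean_prob, reduction="batchmean")
                      for lp in log_probs) / m
    return ce + lam * consistency
```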

887

Robust Multi-Agent Reinforcement Learning with Model Uncertainty

In this work, we study the problem of multi-agent reinforcement learning (MARL) with model uncertainty, which is referred to as robust MARL.

888

SIRI: Spatial Relation Induced Network For Spatial Description Resolution

Mimicking humans, who sequentially traverse spatial relationship words and objects with a first-person view to locate their target, we propose a novel Spatial Relation Induced (SIRI) network.

889

Adaptive Shrinkage Estimation for Streaming Graphs

In this work, we consider the fundamental problem of estimating the higher-order dependencies using adaptive sampling.

890

Make One-Shot Video Object Segmentation Efficient Again

To mitigate the inefficiencies of previous fine-tuning approaches, we present efficient One-Shot Video Object Segmentation (e-OSVOS).

891

Depth Uncertainty in Neural Networks

Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks.
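
One concrete way to realize this is to attach a shared output head after every block and treat depth as a random variable with a learned categorical posterior, so a single forward pass yields a depth-marginalized prediction. A minimal PyTorch sketch under that reading (layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class DepthMarginalizedMLP(nn.Module):
    """A single forward pass yields a prediction at every depth; the output
    marginalizes them under a learned categorical posterior over depth."""
    def __init__(self, d_in=32, d_hidden=64, n_classes=10, max_depth=5):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in if i == 0 else d_hidden, d_hidden),
                           nn.ReLU())
             for i in range(max_depth)])
        self.head = nn.Linear(d_hidden, n_classes)      # shared output head
        self.depth_logits = nn.Parameter(torch.zeros(max_depth))

    def forward(self, x):
        per_depth = []
        h = x
        for block in self.blocks:
            h = block(h)
            per_depth.append(torch.softmax(self.head(h), dim=-1))
        q = torch.softmax(self.depth_logits, dim=0)     # posterior over depth
        return sum(qi * p for qi, p in zip(q, per_depth))
```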

892

Non-Euclidean Universal Approximation

We present general conditions describing feature and readout maps that preserve an architecture’s ability to approximate any continuous function uniformly on compact sets.

893

Constraining Variational Inference with Geometric Jensen-Shannon Divergence

We present a regularisation mechanism based on the skew-geometric Jensen-Shannon divergence ($\mathrm{JS}^{G_{\alpha}}$).

894

Gibbs Sampling with People

We formulate both methods from a utility-theory perspective, and show that the new method can be interpreted as ‘Gibbs Sampling with People’ (GSP).

895

HM-ANN: Efficient Billion-Point Nearest Neighbor Search on Heterogeneous Memory

In this work, we present a novel graph-based similarity search algorithm called HM-ANN, which takes both memory and data heterogeneity into consideration and enables billion-scale similarity search on a single node without using compression.

896

FrugalML: How to use ML Prediction APIs more accurately and cheaply

We take a first step towards addressing this challenge by proposing FrugalML, a principled framework that jointly learns the strength and weakness of each API on different data, and performs an efficient optimization to automatically identify the best sequential strategy to adaptively use the available APIs within a budget constraint.

897

Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth

We prove dimension free representation results for neural networks with D ReLU layers under square loss for a class of functions G_D defined in the paper.

898

Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning

We propose a general method for efficient exploration by sharing experience amongst agents.

899

Monotone operator equilibrium networks

In this paper, we develop a new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ).

900

When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and Policy Assessment using Compartmental Gaussian Processes

To this end, this paper develops a Bayesian model for predicting the effects of COVID-19 containment policies in a global context — we treat each country as a distinct data point, and exploit variations of policies across countries to learn country-specific policy effects.

901

Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control

We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control.

902

High-Dimensional Sparse Linear Bandits

We derive a novel O(n^{2/3}) dimension-free minimax regret lower bound for sparse linear bandits in the data-poor regime where the horizon is larger than the ambient dimension and where the feature vectors admit a well-conditioned exploration distribution.

903

Non-Stochastic Control with Bandit Feedback

To overcome this issue, we propose an efficient algorithm for the general setting of bandit convex optimization for loss functions with memory, which may be of independent interest.

904

Generalized Leverage Score Sampling for Neural Networks

In this work, we generalize the results in [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels.

905

An Optimal Elimination Algorithm for Learning a Best Arm

In this paper we propose a new approach for $(\epsilon,\delta)$-PAC learning a best arm.

906

Efficient Projection-free Algorithms for Saddle Point Problems

In this paper, we study projection-free algorithms for convex-strongly-concave saddle point problems with complicated constraints.

907

A mathematical model for automatic differentiation in machine learning

In this work we articulate the relationships between differentiation of programs as implemented in practice, and differentiation of nonsmooth functions.

908

Unsupervised Text Generation by Learning from Search

In this work, we propose TGLS, a novel framework for unsupervised Text Generation by Learning from Search.

909

Learning Compositional Rules via Neural Program Synthesis

In this work, we present a neuro-symbolic model which learns entire rule systems from a small set of examples.

910

Incorporating BERT into Parallel Sequence Decoding with Adapters

In this paper, we propose to address this problem by taking two different BERT models as the encoder and decoder respectively, and fine-tuning them by introducing simple and lightweight adapter modules, which are inserted between BERT layers and tuned on the task-specific dataset.

911

Estimating Fluctuations in Neural Representations of Uncertain Environments

In this paper, we develop a new state-space modeling framework to address two important issues related to remapping.

912

Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation

In this paper, we investigate open compound domain adaptation (OCDA), which deals with mixed and novel situations at the same time, for semantic segmentation.

913

SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm

We present SURF, an algorithm for approximating distributions by piecewise polynomials.

914

Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks

In this work, we reveal that, under specific conditions, NGD with approximate Fisher information achieves the same fast convergence to global minima as exact NGD.

915

General Transportability of Soft Interventions: Completeness Results

In this paper, we extend transportability theory to encompass these more complex types of interventions, which are known as "soft," both relative to the input as well as the target distribution of the analysis.

916

GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop) where the target is a small perturbation of the forward pass.

917

Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing

In this work we propose a graph-based learning framework to train models with provable robustness to adversarial perturbations.

918

SCOP: Scientific Control for Reliable Neural Network Pruning

This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.

919

Provably Consistent Partial-Label Learning

In this paper, we propose the first generation model of candidate label sets, and develop two PLL methods that are guaranteed to be provably consistent, i.e., one is risk-consistent and the other is classifier-consistent.

920

Robust, Accurate Stochastic Optimization for Variational Inference

Motivated by recent theory, we propose a simple and parallel way to improve SGD estimates for variational inference.

921

Discovering conflicting groups in signed networks

In this paper we study the problem of detecting $k$ conflicting groups in a signed network.

922

Learning Some Popular Gaussian Graphical Models without Condition Number Bounds

Here we give the first fixed polynomial-time algorithms for learning attractive GGMs and walk-summable GGMs with a logarithmic number of samples without any such assumptions.

923

Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding

The purpose of this paper is to develop Austen plots, a sensitivity analysis tool to aid such judgments by making it easier to reason about potential bias induced by unobserved confounding.

924

Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions

We consider a covariate shift problem where one has access to several different training datasets for the same learning problem and a small validation set which possibly differs from all the individual training distributions.

925

Understanding Double Descent Requires A Fine-Grained Bias-Variance Decomposition

To enable fine-grained analysis, we describe an interpretable, symmetric decomposition of the variance into terms associated with the randomness from sampling, initialization, and the labels.

926

VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain

In this paper, we fill this gap by proposing novel self- and semi-supervised learning frameworks for tabular data, which we refer to collectively as VIME (Value Imputation and Mask Estimation).

927

The Smoothed Possibility of Social Choice

We develop a framework that leverages the smoothed complexity analysis by Spielman and Teng to circumvent paradoxes and impossibility theorems in social choice, motivated by modern applications of social choice powered by AI and ML.

928

A Decentralized Parallel Algorithm for Training Generative Adversarial Nets

In this paper, we address this difficulty by designing the first gradient-based decentralized parallel algorithm, which allows workers to have multiple rounds of communication in one iteration and to update the discriminator and generator simultaneously; this design makes the proposed decentralized algorithm amenable to convergence analysis.

929

Phase retrieval in high dimensions: Statistical and computational phase transitions

We consider the phase retrieval problem of reconstructing a $n$-dimensional real or complex signal $\mathbf{X}^\star$ from $m$ (possibly noisy) observations $Y_\mu = | \sum_{i=1}^n \Phi_{\mu i} X^{\star}_i/\sqrt{n}|$, for a large class of correlated real and complex random sensing matrices $\mathbf{\Phi}$, in a high-dimensional setting where $m,n\to\infty$ while $\alpha = m/n=\Theta(1)$.

930

Fair Performance Metric Elicitation

Specifically, we propose a novel strategy to elicit group-fair performance metrics for multiclass classification problems with multiple sensitive groups that also includes selecting the trade-off between predictive performance and fairness violation.

931

Hybrid Variance-Reduced SGD Algorithms For Minimax Problems with Nonconvex-Linear Function

We develop a novel and single-loop variance-reduced algorithm to solve a class of stochastic nonconvex-convex minimax problems involving a nonconvex-linear objective function, which has various applications in different fields such as machine learning and robust optimization.

932

Belief-Dependent Macro-Action Discovery in POMDPs using the Value of Information

Here, we present a method for extracting belief-dependent, variable-length macro-actions directly from a low-level POMDP model.

933

Soft Contrastive Learning for Visual Localization

In this paper, we show why such divisions are problematic for learning localization features.

934

Fine-Grained Dynamic Head for Object Detection

To this end, we propose a fine-grained dynamic head to conditionally select a pixel-level combination of FPN features from different scales for each instance, further unlocking the potential of multi-scale feature representation.

935

LoCo: Local Contrastive Representation Learning

In this work, we discover that by overlapping local blocks stacked on top of each other, we effectively increase the decoder depth and allow upper blocks to implicitly send feedback to lower blocks.

936

Modeling and Optimization Trade-off in Meta-learning

We introduce and rigorously define the trade-off between accurate modeling and optimization ease in meta-learning.

937

SnapBoost: A Heterogeneous Boosting Machine

In this work, we study a Heterogeneous Newton Boosting Machine (HNBM) in which the base hypothesis class may vary across boosting iterations.

938

On Adaptive Distance Estimation

We provide a static data structure for distance estimation which supports adaptive queries.

939

Stage-wise Conservative Linear Bandits

For this problem, we present two novel algorithms, stage-wise conservative linear Thompson Sampling (SCLTS) and stage-wise conservative linear UCB (SCLUCB), that respect the baseline constraints and enjoy probabilistic regret bounds of order $\mathcal{O}(\sqrt{T} \log^{3/2}T)$ and $\mathcal{O}(\sqrt{T} \log T)$, respectively.

940

RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces

We present RELATE, a model that learns to generate physically plausible scenes and videos of multiple interacting objects.

941

Metric-Free Individual Fairness in Online Learning

We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly.

942

GreedyFool: Distortion-Aware Sparse Adversarial Attack

In this paper, we propose a novel two-stage distortion-aware greedy-based method dubbed "GreedyFool".

943

VAEM: a Deep Generative Model for Heterogeneous Mixed Type Data

We propose an extension of variational autoencoders (VAEs) called VAEM to handle such heterogeneous data.

944

RetroXpert: Decompose Retrosynthesis Prediction Like A Chemist

In this paper, we devise a novel template-free algorithm for automatic retrosynthetic expansion inspired by how chemists approach retrosynthesis prediction.

945

Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining

We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.

946

Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

In this paper, we introduce a novel model-based approach that interleaves discovering new states from $s_0$ and improving the accuracy of a model estimate that is used to compute goal-conditioned policies.

947

TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning

In this work, we present Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning.

948

RD$^2$: Reward Decomposition with Representation Decomposition

In this work, we propose a set of novel reward decomposition principles by constraining uniqueness and compactness of different state features/representations relevant to different sub-rewards.

949

Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID

To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory.

950

Fairness constraints can help exact inference in structured prediction

We find that, in contrast to the known trade-offs between fairness and model performance, the addition of the fairness constraint improves the probability of exact recovery.

951

Instance-based Generalization in Reinforcement Learning

We propose training a shared belief representation over an ensemble of specialized policies, from which we compute a consensus policy that is used for data collection, disallowing instance-specific exploitation.

952

Smooth And Consistent Probabilistic Regression Trees

We propose here a generalization of regression trees, referred to as Probabilistic Regression (PR) trees, that adapt to the smoothness of the prediction function relating input and output variables while preserving the interpretability of the prediction and being robust to noise.

953

Computing Valid p-value for Optimal Changepoint by Selective Inference using Dynamic Programming

In this paper, we introduce a novel method to perform statistical inference on the significance of the CPs, estimated by a Dynamic Programming (DP)-based optimal CP detection algorithm.

954

Factorized Neural Processes for Neural Processes: K-Shot Prediction of Neural Responses

We overcome this limitation by formulating the problem as $K$-shot prediction to directly infer a neuron’s tuning function from a small set of stimulus-response pairs using a Neural Process.

955

Winning the Lottery with Continuous Sparsification

We revisit fundamental aspects of pruning algorithms, pointing out missing ingredients in previous approaches, and develop a method, Continuous Sparsification, which searches for sparse networks based on a novel approximation of an intractable l0 regularization.
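
To fix ideas, the continuous approximation of $l_0$ can be sketched with a sigmoid gate whose temperature is annealed during training; this is a minimal illustration under our own assumptions, not necessarily the paper's exact parameterization.

```python
import numpy as np

def soft_mask(s, beta):
    """Differentiable surrogate for a binary pruning mask: sigmoid(beta * s)
    approaches a hard 0/1 gate as the temperature beta grows, giving a
    continuous stand-in for the intractable l0 penalty."""
    return 1.0 / (1.0 + np.exp(-beta * s))

# Annealing beta hardens the gate; the l0 cost of keeping weights is then
# approximated by the sum of the soft gates, which is differentiable in s.
```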

956

Adversarial robustness via robust low rank representations

In this work we highlight the benefits of natural low rank representations that often exist for real data such as images, for training neural networks with certified robustness guarantees.

957

Joints in Random Forests

In this paper, we demonstrate that DTs and RFs can naturally be interpreted as generative models, by drawing a connection to Probabilistic Circuits, a prominent class of tractable probabilistic models.

958

Compositional Generalization by Learning Analytical Expressions

Inspired by work in cognition which argues compositionality can be captured by variable slots with symbolic functions, we present a refreshing view that connects a memory-augmented neural model with analytical expressions, to achieve compositional generalization.

959

JAX MD: A Framework for Differentiable Physics

We introduce JAX MD, a software package for performing differentiable physics simulations with a focus on molecular dynamics.

960

An implicit function learning approach for parametric modal regression

In this work, we propose a parametric modal regression algorithm.

961

SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images

In this paper, we address this issue and propose SDF-SRN, an approach that requires only a single view of objects at training time, offering greater utility for real-world scenarios.

962

Coresets for Robust Training of Deep Neural Networks against Noisy Labels

To tackle this challenge, we propose a novel approach with strong theoretical guarantees for robust training of neural networks trained with noisy labels.

963

Adapting to Misspecification in Contextual Bandits

We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits that adapt to unknown model misspecification—both for finite and infinite action settings.

964

Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters

In this paper, we characterize the convergence rate and the computational complexity for ANIL under two representative inner-loop loss geometries, i.e., strong convexity and nonconvexity.

965

MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures

To bridge the gap between the two, we propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data.

966

Learning to solve TV regularised problems with unrolled algorithms

In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.

967

Object-Centric Learning with Slot Attention

In this paper, we present the Slot Attention module, an architectural component that interfaces with perceptual representations such as the output of a convolutional neural network and produces a set of task-dependent abstract representations which we call slots.

968

Improving robustness against common corruptions by covariate shift adaptation

The key insight is that in many scenarios, multiple unlabeled examples of the corruptions are available and can be used for unsupervised online adaptation.

969

Deep Smoothing of the Implied Volatility Surface

We present a neural network (NN) approach to fit and predict implied volatility surfaces (IVSs).

970

Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations

In this work, we advance the WMI framework on both the theoretical and algorithmic side.

971

Provable Online CP/PARAFAC Decomposition of a Structured Tensor via Dictionary Learning

We consider the problem of factorizing a structured 3-way tensor into its constituent Canonical Polyadic (CP) factors.

972

Look-ahead Meta Learning for Continual Learning

In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory.

973

A polynomial-time algorithm for learning nonparametric causal graphs

We establish finite-sample guarantees for a polynomial-time algorithm for learning a nonlinear, nonparametric directed acyclic graphical (DAG) model from data.

974

Sparse Learning with CART

This paper aims to study the statistical properties of regression trees constructed with CART.

975

Proximal Mapping for Deep Regularization

In contrast to prevalent methods that optimize them indirectly through model weights, we propose inserting proximal mapping as a new layer to the deep network, which directly and explicitly produces well regularized hidden layer outputs.

976

Identifying Causal-Effect Inference Failure with Uncertainty-Aware Models

We introduce a practical approach for integrating uncertainty estimation into a class of state-of-the-art neural network methods used for individual-level causal estimates.

977

Hierarchical Granularity Transfer Learning

In this paper, we introduce a new task, named Hierarchical Granularity Transfer Learning (HGTL), to recognize sub-level categories with basic-level annotations and semantic descriptions for hierarchical categories.

978

Deep active inference agents using Monte-Carlo methods

In this paper, we present a neural architecture for building deep active inference agents operating in complex, continuous state-spaces using multiple forms of Monte-Carlo (MC) sampling.

979

Consistent Estimation of Identifiable Nonparametric Mixture Models from Grouped Observations

This work proposes an algorithm that consistently estimates any identifiable mixture model from grouped observations.

980

Manifold structure in graph embeddings

However, this paper shows that existing random graph models, including graphon and other latent position models, predict the data should live near a much lower-dimensional set.

981

Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier with Application to Real-Time Information Filtering on the Web

We propose new algorithms that generalize the learned Bloom filter by using the complete spectrum of the score regions.

982

MCUNet: Tiny Deep Learning on IoT Devices

We propose MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on microcontrollers.

983

In search of robust measures of generalization

Focusing on generalization bounds, this work addresses the question of how to evaluate such bounds empirically.

984

Task-agnostic Exploration in Reinforcement Learning

We present an efficient task-agnostic RL algorithm, UCBZero, that finds $\epsilon$-optimal policies for $N$ arbitrary tasks after at most $\tilde O(\log(N)H^5SA/\epsilon^2)$ exploration episodes.

985

Multi-task Additive Models for Robust Estimation and Automatic Structure Discovery

To tackle this problem, we propose a new class of additive models, called Multi-task Additive Models (MAM), by integrating the mode-induced metric, the structure-based regularizer, and additive hypothesis spaces into a bilevel optimization framework.

986

Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration

We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near optimal policy for any (linear) reward function, which is revealed only once learning has completed.

987

Softmax Deep Double Deterministic Policy Gradients

In this paper, we propose to use the Boltzmann softmax operator for value function estimation in continuous control.
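
Concretely, the Boltzmann softmax operator is a weighted average of action values that interpolates between the mean and the max; below is a minimal sketch in our own notation (the paper applies the operator to value function estimation in continuous control).

```python
import numpy as np

def boltzmann_softmax(q_values, beta):
    """Boltzmann softmax operator: a smooth interpolation between the
    mean (beta -> 0) and the max (beta -> inf) of a set of action values."""
    w = np.exp(beta * (q_values - np.max(q_values)))  # shifted for stability
    w /= w.sum()
    return float(np.dot(w, q_values))

q = np.array([1.0, 2.0, 3.0])
print(boltzmann_softmax(q, beta=0.0))   # 2.0, the mean
print(boltzmann_softmax(q, beta=50.0))  # ~3.0, close to the max
```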

988

Online Decision Based Visual Tracking via Reinforcement Learning

Unlike previous fusion-based methods, we propose a novel ensemble framework, named DTNet, with an online decision mechanism for visual tracking based on hierarchical reinforcement learning.

989

Efficient Marginalization of Discrete and Structured Latent Variables via Sparsity

In this paper, we propose a new training strategy which replaces these estimators by an exact yet efficient marginalization.

990

DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs

Therefore, in this work, we propose a novel deep hierarchical Image-to-Image Translation method, called DeepI2I.

991

Distributional Robustness with IPMs and links to Regularization and GANs

We extend this line of work for the purposes of understanding robustness via regularization by studying uncertainty sets constructed with Integral Probability Metrics (IPMs) – a large family of divergences including the MMD, Total Variation and Wasserstein distances.

992

A shooting formulation of deep learning

To this end, we introduce a shooting formulation which shifts the perspective from parameterizing a network layer-by-layer to parameterizing over optimal networks described only by a set of initial conditions.

993

CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances

In this paper, we propose a simple, yet effective method named contrasting shifted instances (CSI), inspired by the recent success of contrastive learning of visual representations.

994

Learning Implicit Credit Assignment for Cooperative Multi-Agent Reinforcement Learning

We present a multi-agent actor-critic method that aims to implicitly address the credit assignment problem under fully cooperative settings.

995

MATE: Plugging in Model Awareness to Task Embedding for Meta Learning

To allow for better generalization, we propose a novel task representation called model-aware task embedding (MATE) that incorporates not only the data distributions of different tasks, but also the complexity of the tasks through the models used.

996

Restless-UCB, an Efficient and Low-complexity Algorithm for Online Restless Bandits

In Restless-UCB, we present a novel method to construct offline instances, which only requires $O(N)$ time-complexity ($N$ is the number of arms) and is exponentially better than the complexity of existing learning policies.

997

Predictive Information Accelerates Learning in RL

We hypothesize that capturing the predictive information is useful in RL, since the ability to model what will happen next is necessary for success on many tasks.

998

Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization

In this paper, we provide a meta-problem and a duality theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions.

999

High-Fidelity Generative Image Compression

We extensively study how to combine Generative Adversarial Networks and learned compression to obtain a state-of-the-art generative lossy compression system.

1000

A Statistical Mechanics Framework for Task-Agnostic Sample Design in Machine Learning

In this paper, we present a statistical mechanics framework to understand the effect of sampling properties of training data on the generalization gap of machine learning (ML) algorithms.

1001

Counterexample-Guided Learning of Monotonic Neural Networks

We develop a counterexample-guided technique to provably enforce monotonicity constraints at prediction time.

1002

A Novel Approach for Constrained Optimization in Graphical Models

We propose a class of approximate algorithms for solving this problem.

1003

Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology

In this paper, we prove that, for deep networks, a single layer of width N following the input layer suffices to ensure a similar guarantee.

1004

On the Trade-off between Adversarial and Backdoor Robustness

In this paper, we conduct experiments to study whether adversarial robustness and backdoor robustness can affect each other and find a trade-off—by increasing the robustness of a network to adversarial examples, the network becomes more vulnerable to backdoor attacks.

1005

Implicit Graph Neural Networks

To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors.
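
A minimal sketch of the implicit fixed-point idea, assuming a simple update $Z = \tanh(WZA + B)$ with $A$ a normalized adjacency matrix; IGNN's actual parameterization and well-posedness conditions differ in detail.

```python
import numpy as np

def implicit_gnn_forward(W, A, B, n_iter=100, tol=1e-6):
    """Solve the equilibrium equation Z = tanh(W @ Z @ A + B) by naive
    fixed-point iteration; Z collects the implicitly defined per-node
    "state" vectors that predictions are based on."""
    Z = np.zeros_like(B)
    for _ in range(n_iter):
        Z_new = np.tanh(W @ Z @ A + B)
        if np.linalg.norm(Z_new - Z) < tol:
            return Z_new
        Z = Z_new
    return Z
```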

1006

Rethinking Importance Weighting for Deep Learning under Distribution Shift

In this paper, we rethink IW and theoretically show it suffers from a circular dependency: we need not only WE for WC, but also WC for WE where a trained deep classifier is used as the feature extractor (FE).

1007

Guiding Deep Molecular Optimization with Genetic Exploration

In this paper, we propose genetic expert-guided learning (GEGL), a simple yet novel framework for training a deep neural network (DNN) to generate highly-rewarding molecules.

1008

Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks

We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs, which breaks down error backpropagation across two types of inter-neuron and intra-neuron dependencies and leads to improved temporal learning precision.

1009

TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation

In this paper, we explore the temporal semantic structures of sign videos to learn more discriminative features.

1010

Neural Topographic Factor Analysis for fMRI Data

We propose Neural Topographic Factor Analysis (NTFA), a probabilistic factor analysis model that infers embeddings for participants and stimuli.

1011

Neural Architecture Generator Optimization

In this work we 1) are the first to investigate casting NAS as the problem of finding the optimal network generator and 2) propose a new, hierarchical and graph-based search space capable of representing an extremely large variety of network types, yet requiring only a few continuous hyper-parameters.

1012

A Bandit Learning Algorithm and Applications to Auction Design

In this paper, we introduce a new notion of $(\lambda,\mu)$-concave functions and present a bandit learning algorithm that achieves a performance guarantee which is characterized as a function of the concavity parameters $\lambda$ and $\mu$.

1013

MetaPoison: Practical General-purpose Clean-label Data Poisoning

We propose MetaPoison, a first-order method that approximates the bilevel problem via meta-learning and crafts poisons that fool neural networks.

1014

Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation

As our key contribution, we develop a simple, iterative learning algorithm that finds $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}(\frac{1}{\epsilon^{\max(d_1, d_2)+2}})$ when the optimal $Q$-function has low rank $r$ and the discounting factor $\gamma$ is below a certain threshold.

1015

Training Generative Adversarial Networks with Limited Data

We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes.

1016

Deeply Learned Spectral Total Variation Decomposition

In this paper, we present a neural network approximation of a non-linear spectral decomposition.

1017

FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training

In this paper, we explore from an orthogonal direction: how to fractionally squeeze out more training cost savings from the most redundant bit level, progressively along the training trajectory and dynamically per input.

1018

Improving Neural Network Training in Low Dimensional Random Bases

We propose re-drawing the random subspace at each step, which yields significantly better performance.

1019

Safe Reinforcement Learning via Curriculum Induction

This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor that saves the agent from violating constraints during learning.

1020

Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning

We study KL regularization within an approximate value iteration scheme and show that it implicitly averages q-values.
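
A hedged sketch of where the averaging comes from, in our notation: the KL-regularized greedy step has a softmax closed form, and unrolling it over iterations turns the policy into a softmax over the sum of past q-values,

```latex
\pi_{k+1} \;\propto\; \pi_k\, e^{q_k/\lambda}
\quad\Longrightarrow\quad
\pi_{k+1} \;\propto\; \exp\!\Big(\tfrac{1}{\lambda}\sum_{j=0}^{k} q_j\Big)
\;=\; \exp\!\Big(\tfrac{k+1}{\lambda}\,\bar q_k\Big)
```

so the policy effectively acts on the running average $\bar q_k$ of past q-values, which is why estimation errors tend to average out.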

1021

How Robust are the Estimated Effects of Nonpharmaceutical Interventions against COVID-19?

To answer this question, we investigate 2 state-of-the-art NPI effectiveness models and propose 6 variants that make different structural assumptions.

1022

Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses

To this end, we propose a novel model agnostic framework called Actionable Recourse Summaries (AReS) to construct global counterfactual explanations which provide an interpretable and accurate summary of recourses for the entire population.

1023

Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization

We consider a commonly studied supervised classification of a synthetic dataset whose labels are generated by feeding a one-layer non-linear neural network with random iid inputs.

1024

Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method

We consider the classical setting of optimizing a nonsmooth Lipschitz continuous convex function over a convex constraint set, when having access to a (stochastic) first-order oracle (FO) for the function and a projection oracle (PO) for the constraint set.

1025

PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks

In this paper, we propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for GNNs.

1026

Few-Cost Salient Object Detection with Adversarial-Paced Learning

To address this problem, this paper proposes to learn the effective salient object detection model based on the manual annotation on a few training images only, thus dramatically alleviating human labor in training models.

1027

Minimax Estimation of Conditional Moment Models

We introduce a min-max criterion function, under which the estimation problem can be thought of as solving a zero-sum game between a modeler who is optimizing over the hypothesis space of the target model and an adversary who identifies violating moments over a test function space.

1028

Causal Imitation Learning With Unobserved Confounders

In this paper, we relax this assumption and study imitation learning when sensory inputs of the learner and the expert differ.

1029

Your GAN is Secretly an Energy-based Model and You Should Use Discriminator Driven Latent Sampling

We show that the sum of the implicit generator log-density $\log p_g$ of a GAN with the logit score of the discriminator defines an energy function which yields the true data density when the generator is imperfect but the discriminator is optimal, thus making it possible to improve on the typical generator (with implicit density $p_g$).
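
In symbols, writing $d(x)$ for the discriminator logit, the highlight's energy function reads (our notation):

```latex
E(x) \;=\; -\log p_g(x) \;-\; d(x),
\qquad
p(x) \;\propto\; e^{-E(x)} \;=\; p_g(x)\, e^{d(x)}
```

Sampling from this corrected density can then be carried out in the generator's latent space, which is the discriminator-driven latent sampling of the title.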

1030

Learning Black-Box Attackers with Transferable Priors and Query Feedback

By combining transferability-based and query-based black-box attacks, we propose a surprisingly simple baseline approach (named SimBA++) using the surrogate model, which significantly outperforms several state-of-the-art methods.

1031

Locally Differentially Private (Contextual) Bandits Learning

We study locally differentially private (LDP) bandits learning in this paper.

1032

Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax

We propose a modular and more flexible family of reparameterizable distributions where Gaussian noise is transformed into a one-hot approximation through an invertible function.

1033

Kernel Based Progressive Distillation for Adder Neural Networks

In this paper, we present a novel method for further improving the performance of ANNs without increasing the trainable parameters via a progressive kernel based knowledge distillation (PKKD) method.

1034

Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization

We propose to remove the burden of the policy optimization steps by leveraging a novel discriminator formulation.

1035

Agree to Disagree: Adaptive Ensemble Knowledge Distillation in Gradient Space

In this paper, we examine the diversity of teacher models in the gradient space and regard the ensemble knowledge distillation as a multi-objective optimization problem so that we can determine a better optimization direction for the training of student network.

1036

The Wasserstein Proximal Gradient Algorithm

In this work, we propose a Forward Backward (FB) discretization scheme that can tackle the case where the objective function is the sum of a smooth and a nonsmooth geodesically convex term.

1037

Universally Quantized Neural Compression

We demonstrate that a uniform noise channel can also be implemented at test time using universal quantization (Ziv, 1985).
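
The classical construction is short enough to state: with a dither shared by encoder and decoder, rounding becomes distributionally identical to adding uniform noise. A minimal sketch with our own naming:

```python
import numpy as np

def universal_quantize(x, rng):
    """Universal quantization (Ziv, 1985): with a dither u shared by the
    encoder and decoder, round(x - u) + u has exactly the distribution of
    x + U(-1/2, 1/2), i.e., it realizes a uniform noise channel at test
    time while only the integers round(x - u) need to be transmitted."""
    u = rng.uniform(-0.5, 0.5, size=np.shape(x))  # shared dither
    return np.round(x - u) + u

rng = np.random.default_rng(0)
print(universal_quantize(np.array([0.3, 1.7]), rng))
```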

1038

Temporal Variability in Implicit Online Learning

In this work, we shed light on this behavior by carrying out a careful regret analysis.

1039

Investigating Gender Bias in Language Models Using Causal Mediation Analysis

We propose a methodology grounded in the theory of causal mediation analysis for interpreting which parts of a model are causally implicated in its behavior.

1040

Off-Policy Imitation Learning from Observations

In this work, we propose a sample-efficient LfO approach which enables off-policy optimization in a principled manner.

1041

Escaping Saddle-Point Faster under Interpolation-like Conditions

In this paper, we show that under over-parametrization several standard stochastic optimization algorithms escape saddle-points and converge to local-minimizers much faster.

1042

Matérn Gaussian Processes on Riemannian Manifolds

In this work, we propose techniques for computing the kernels of these processes on compact Riemannian manifolds via spectral theory of the Laplace-Beltrami operator in a fully constructive manner, thereby allowing them to be trained via standard scalable techniques such as inducing point methods.

1043

Improved Techniques for Training Score-Based Generative Models

We provide a new theoretical analysis of learning and sampling from score models in high dimensional spaces, explaining existing failure modes and motivating new solutions that generalize across datasets.

1044

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler.

1045

A Maximum-Entropy Approach to Off-Policy Evaluation in Average-Reward MDPs

In a more general setting, when the feature dynamics are approximately linear and for arbitrary rewards, we propose a new approach for estimating stationary distributions with function approximation.

1046

Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients

This paper presents Enzyme, a high-performance automatic differentiation (AD) compiler plugin for the LLVM compiler framework capable of synthesizing gradients of statically analyzable programs expressed in the LLVM intermediate representation (IR).

1047

Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?

In this work, we find empirically that pre-training architecture representations using only neural architectures without their accuracies as labels improves the downstream architecture search efficiency.

1048

Value-driven Hindsight Modelling

We develop an approach for representation learning in RL that sits in between these two extremes: we propose to learn what to model in a way that can directly help value prediction.

1049

Dynamic Regret of Convex and Smooth Functions

Specifically, we propose novel online algorithms that are capable of leveraging smoothness and replace the dependence on $T$ in the dynamic regret by problem-dependent quantities: the variation in gradients of loss functions, the cumulative loss of the comparator sequence, and the minimum of the previous two terms.

1050

On Convergence of Nearest Neighbor Classifiers over Feature Transformations

This leads to an emerging gap between our theoretical understanding of kNN and its practical applications. In this paper, we take a first step towards bridging this gap.

1051

Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments

We then present a (randomized) algorithm for reviewer assignment that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair.

1052

Contrastive learning of global and local features for medical image segmentation with limited annotations

In this work, we propose strategies for extending the contrastive learning framework for segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues.

1053

Self-Supervised Graph Transformer on Large-Scale Molecular Data

To address them both, we propose a novel framework, GROVER, which stands for Graph Representation frOm self-superVised mEssage passing tRansformer.

1054

Generative Neurosymbolic Machines

In this paper, we propose Generative Neurosymbolic Machines, a generative model that combines the benefits of distributed and symbolic representations to support both structured representations of symbolic components and density-based generation.

1055

How many samples is a good initial point worth in Low-rank Matrix Recovery?

In this paper, we quantify the relationship between the quality of the initial guess and the corresponding reduction in data requirements.

1056

CSER: Communication-efficient SGD with Error Reset

We propose a novel SGD variant: Communication-efficient SGD with Error Reset (CSER).

1057

Efficient estimation of neural tuning during naturalistic behavior

We develop efficient procedures for parameter learning by optimizing a generalized cross-validation score and infer marginal confidence bounds for the contribution of each feature to neural responses.

1058

High-recall causal discovery for autocorrelated time series with latent confounders

We present a new method for linear and nonlinear, lagged and contemporaneous constraint-based causal discovery from observational time series in the presence of latent confounders.

1059

Forget About the LiDAR: Self-Supervised Depth Estimators with MED Probability Volumes

We present extensive experimental results on the KITTI, CityScapes, and Make3D datasets to verify our method’s effectiveness.

1060

Joint Contrastive Learning with Infinite Possibilities

This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.

1061

Robust Gaussian Covariance Estimation in Nearly-Matrix Multiplication Time

In this paper, we demonstrate a novel algorithm that achieves the same statistical guarantees, but which runs in time $\widetilde{O} (T(N, d) \log \kappa)$.

1062

Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models

Instead, we propose an inference-agnostic adversarial training framework which produces an infinitely-large ensemble of graphical models (AGMs).

1063

GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators

To this end, we propose Gradient-sanitized Wasserstein Generative Adversarial Networks (GS-WGAN), which allows releasing a sanitized form of the sensitive data with rigorous privacy guarantees.

1064

SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows

In this paper, we introduce SurVAE Flows: A modular framework of composable transformations that encompasses VAEs and normalizing flows.

1065

Learning Causal Effects via Weighted Empirical Risk Minimization

In this paper, we develop a learning framework that marries two families of methods, benefiting from the generality of the causal identification theory and the effectiveness of the estimators produced based on the principle of ERM.

1066

Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes

We introduce a new scalable approximation for Gaussian processes with provable guarantees which holds simultaneously over its entire parameter space.

1067

Incorporating Interpretable Output Constraints in Bayesian Neural Networks

We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks.

1068

Multi-Stage Influence Function

In this paper, we develop a multi-stage influence function score to track predictions from a finetuned model all the way back to the pretraining data.

1069

Probabilistic Fair Clustering

In this paper, we generalize this by assuming imperfect knowledge of group membership through probabilistic assignments, and present algorithms in this more general setting with approximation ratio guarantees.

1070

Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty

In this paper, we introduce stochastic segmentation networks (SSNs), an efficient probabilistic method for modelling aleatoric uncertainty with any image segmentation network architecture.

1071

ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA

We consider the identifiability theory of probabilistic models and establish sufficient conditions under which the representations learnt by a very broad family of conditional energy-based models are unique in function space, up to a simple transformation.

1072

Testing Determinantal Point Processes

In this paper, we investigate DPPs from a new perspective: property testing of distributions.

1073

CogLTX: Applying BERT to Long Texts

Founded on the cognitive theory stemming from Baddeley, our CogLTX framework identifies key sentences by training a judge model, concatenates them for reasoning and enables multi-step reasoning via rehearsal and decay.

1074

f-GAIL: Learning f-Divergence for Generative Adversarial Imitation Learning

In this work, we propose f-GAIL – a new generative adversarial imitation learning model – that automatically learns a discrepancy measure from the f-divergence family as well as a policy capable of producing expert-like behaviors.

1075

Non-parametric Models for Non-negative Functions

In this paper we provide the first model for non-negative functions which benefits from the same good properties as linear models.

1076

Uncertainty Aware Semi-Supervised Learning on Graph Data

In this work, we propose a multi-source uncertainty framework using a GNN that reflects various types of predictive uncertainties in both deep learning and belief/evidence theory domains for node classification predictions.

1077

ConvBERT: Improving BERT with Span-based Dynamic Convolution

We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies.

1078

Practical No-box Adversarial Attacks against DNNs

We propose three mechanisms for training with a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective.

1079

Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model

We investigate the sample efficiency of reinforcement learning in a $\gamma$-discounted infinite-horizon Markov decision process (MDP) with state space S and action space A, assuming access to a generative model.

1080

Walking in the Shadow: A New Perspective on Descent Directions for Constrained Minimization

In this work, we attempt to demystify the impact of movement in these directions towards attaining constrained minimizers.

1081

Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks

We propose a new method for this estimation problem combining sampling and analytic approximation steps.

1082

Reward Propagation Using Graph Convolutional Networks

We propose a new framework for learning potential functions by leveraging ideas from graph representation learning.

1083

LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration

Our main contribution is LoopReg, an end-to-end learning framework to register a corpus of scans to a common 3D human model.

1084

Fully Dynamic Algorithm for Constrained Submodular Optimization

We study this classic problem in the fully dynamic setting, where elements can be both inserted and removed.

1085

Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation

In this paper, we resolve these issues by deriving a computationally-efficient dual form of the robust OT optimization that is amenable to modern deep learning applications.

1086

Autofocused oracles for model-based design

In particular, we (i) formalize the data-driven design problem as a non-zero-sum game, (ii) develop a principled strategy for retraining the regression model as the design algorithm proceeds—what we refer to as autofocusing, and (iii) demonstrate the promise of autofocusing empirically.

1087

Debiasing Averaged Stochastic Gradient Descent to handle missing values

We propose an averaged stochastic gradient algorithm handling missing values in linear models.

1088

Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning

In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning, that learns a multi-headed dynamics model for dynamics generalization.

1089

CompRess: Self-Supervised Learning by Compressing Representations

In this work, instead of designing a new pseudo task for self-supervised learning, we develop a model compression method to compress an already learned, deep self-supervised model (teacher) to a smaller one (student).

1090

Sample complexity and effective dimension for regression on manifolds

Manifold models arise in a wide variety of modern machine learning problems, and our goal is to help understand the effectiveness of various implicit and explicit dimensionality-reduction methods that exploit manifold structure.

1091

The phase diagram of approximation rates for deep neural networks

We explore the phase diagram of approximation rates for deep neural networks and prove several new theoretical results.

1092

Timeseries Anomaly Detection using Temporal Hierarchical One-Class Network

In this paper, we propose the Temporal Hierarchical One-Class (THOC) network, a temporal one-class classification model for timeseries anomaly detection.

1093

EcoLight: Intersection Control in Developing Regions Under Extreme Budget and Network Constraints

This paper presents EcoLight intersection control for developing regions, where budget is constrained and network connectivity is very poor.

1094

Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN

Inspired by the theory that visual features are hierarchically represented in cortex, we propose to break the complex visual signals into multi-level components and decode each component separately.

1095

Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design

We propose Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments.

1096

A Spectral Energy Distance for Parallel Speech Synthesis

Here, we propose a new learning method that allows us to train highly parallel models of speech, without requiring access to an analytical likelihood function.

1097

Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations

Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models.

1098

Learning from Positive and Unlabeled Data with Arbitrary Positive Shift

Our key insight is that only the negative class’s distribution need be fixed.

1099

Deep Energy-based Modeling of Discrete-Time Physics

In this study, we propose a deep energy-based physical model that admits a specific differential geometric structure.

1100

Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning

In this paper, we consider in particular how to characterise visual groupings discovered automatically by deep neural networks, starting with state-of-the-art clustering methods.

1101

Self-Learning Transformations for Improving Gaze and Head Redirection

In this paper we propose a novel generative model for images of faces that is capable of producing high-quality images under fine-grained control over eye gaze and head orientation angles.

1102

Language-Conditioned Imitation Learning for Robot Manipulation Tasks

Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning.

1103

POMDPs in Continuous Time and Discrete Spaces

In this paper, we give a mathematical description of a continuous-time partially observable Markov decision process (POMDP).

1104

Exemplar Guided Active Learning

We describe an active learning approach that (1) explicitly searches for rare classes by leveraging the contextual embedding spaces provided by modern language models, and (2) incorporates a stopping rule that ignores classes once we prove that they occur below our target threshold with high probability.

1105

Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps

To this end, we propose in this work a novel, end-to-end Grasp Proposal Network (GPNet) to predict a diverse set of 6-DOF grasps for an unseen object observed from a single and unknown camera view.

1106

Node Embeddings and Exact Low-Rank Representations of Complex Networks

In this work we show that the results of Seshadhri et al. are intimately connected to the model they use rather than the low-dimensional structure of complex networks.

1107

Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications

In this paper, we deepen the analysis of the continuous-time Fictitious Play learning algorithm by considering various finite-state Mean Field Game settings (finite horizon, $\gamma$-discounted), allowing in particular for the introduction of an additional common noise.

1108

Steering Distortions to Preserve Classes and Neighbors in Supervised Dimensionality Reduction

The supervised mapping method introduced in the present paper, called ClassNeRV, proposes an original stress function that takes class annotation into account and evaluates embedding quality both in terms of false neighbors and missed neighbors.

1109

On Infinite-Width Hypernetworks

In this work, we study wide over-parameterized hypernetworks.

1110

Interferobot: aligning an optical interferometer by a reinforcement learning agent

Here we train an RL agent to align a Mach-Zehnder interferometer, which is an essential part of many optical experiments, based on images of interference fringes acquired by a monocular camera.

1111

Program Synthesis with Pragmatic Communication

This work introduces a new inductive bias derived by modeling the program synthesis task as rational communication, drawing insights from recursive reasoning models of pragmatics.

1112

Principal Neighbourhood Aggregation for Graph Nets

Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator).

1113

Reliable Graph Neural Networks via Robust Aggregation

We propose a robust aggregation function motivated by the field of robust statistics.

1114

Instance Selection for GANs

In this work we propose a novel approach to improve sample quality: altering the training dataset via instance selection before model training has taken place.

1115

Linear Disentangled Representations and Unsupervised Action Estimation

In this work we empirically show that linear disentangled representations are not present in standard VAE models and that they instead require altering the loss landscape to induce them.

1116

Video Frame Interpolation without Temporal Priors

In this work, we solve the video frame interpolation problem in a general situation, where input frames can be acquired under uncertain exposure (and interval) time.

1117

Learning compositional functions via multiplicative weight updates

This paper proves that multiplicative weight updates satisfy a descent lemma tailored to compositional functions.
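
To fix ideas, a generic multiplicative update looks as follows; this is a sketch of the update family only and is not claimed to be the paper's exact optimizer.

```python
import numpy as np

def multiplicative_update(w, grad, lr=0.01):
    """Generic multiplicative weight update: each weight is rescaled by a
    factor close to one, so the effective step is proportional to the
    weight's own magnitude (to first order, w changes by -lr * |w| * grad)."""
    return w * np.exp(-lr * np.sign(w) * grad)
```

Because updates are relative rather than absolute, the perturbation to each layer is controlled in proportion to its scale, the kind of perturbation that composes well through deep products of layers.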

1118

Sample Complexity of Uniform Convergence for Multicalibration

In this work, we address the multicalibration error and decouple it from the prediction error.

1119

Differentiable Neural Architecture Search in Equivalent Space with Exploration Enhancement

In contrast, this paper utilizes a variational graph autoencoder to injectively transform the discrete architecture space into an equivalent continuous latent space, resolving the incongruence.

1120

The interplay between randomness and structure during learning in RNNs

We show how the low-dimensional task structure leads to low-rank changes to connectivity, and how random initial connectivity facilitates learning.

1121

A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks

In this paper, we provide a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit a “kernel-like” behavior.

1122

Instance-wise Feature Grouping

In this paper, we formally define two types of redundancies using information theory: Representation and Relevant redundancies.

1123

Robust Disentanglement of a Few Factors at a Time

Building on top of this observation, we introduce the recursive rPU-VAE approach.

1124

PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning

This work introduces the Policy Cover Guided Policy Gradient (PC-PG) algorithm, which provably balances the exploration vs. exploitation tradeoff using an ensemble of learned policies (the policy cover).

1125

Group Contextual Encoding for 3D Point Clouds

In this work, we extend the contextual encoding layer that was originally designed for 2D tasks to 3D point cloud scenarios.

1126

Latent Bandits Revisited

In this work, we propose general algorithms for latent bandits, based on both upper confidence bounds and Thompson sampling.

1127

Is normalization indispensable for training deep neural network?

In this paper, we study what would happen when normalization layers are removed from the network, and show how to train deep neural networks without normalization layers and without performance degradation.

1128

Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions

We study the dynamics of optimization and the generalization properties of one-hidden layer neural networks with quadratic activation function in the overparametrized regime where the layer width m is larger than the input dimension d.

1129

Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks

In this work, we aim to learn general post-hoc calibration functions that can preserve the top-k predictions of any deep network.

1130

Linear Time Sinkhorn Divergences using Positive Features

We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
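
Why this yields linear-time iterations (a sketch assuming regularization $\varepsilon = 1$, so that the Gibbs kernel factorizes as $K_{ij} = \langle\varphi(x_i),\varphi(y_j)\rangle$): Sinkhorn only needs matrix-vector products with $K$, and the factorization performs them without ever forming the $n \times m$ matrix.

```python
import numpy as np

def sinkhorn_positive_features(phi_x, phi_y, a, b, n_iter=100):
    """Sinkhorn iterations with a factorized Gibbs kernel. With epsilon = 1,
    exp(-c(x, y)) = <phi(x), phi(y)>, so K = phi_x @ phi_y.T has rank r and
    each K @ v or K.T @ u costs O((n + m) r) instead of O(n m).
    a, b are the source/target marginals; phi_x, phi_y are (n, r), (m, r)."""
    u, v = np.ones(len(a)), np.ones(len(b))
    for _ in range(n_iter):
        u = a / (phi_x @ (phi_y.T @ v))  # K @ v without forming K
        v = b / (phi_y @ (phi_x.T @ u))  # K.T @ u without forming K
    return u, v  # transport plan: diag(u) @ K @ diag(v)
```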

1131

VarGrad: A Low-Variance Gradient Estimator for Variational Inference

We analyse the properties of an unbiased gradient estimator of the ELBO for variational inference, based on the score function method with leave-one-out control variates.

1132

A Convolutional Auto-Encoder for Haplotype Assembly and Viral Quasispecies Reconstruction

This paper proposes a read clustering method based on a convolutional auto-encoder designed to first project sequenced fragments to a low-dimensional space and then estimate the probability of the read origin using learned embedded features.

1133

Promoting Stochasticity for Expressive Policies via a Simple and Efficient Regularization Method

To tackle this problem, we propose a novel regularization method that is compatible with a broad range of expressive policy architectures.

1134

Adversarial Counterfactual Learning and Evaluation for Recommender System

We propose a principled solution by introducing a minimax empirical risk formulation.

1135

Memory-Efficient Learning of Stable Linear Dynamical Systems for Prediction and Control

We propose a novel algorithm for learning stable LDSs.

1136

Evolving Normalization-Activation Layers

Normalization layers and activation functions are fundamental components in deep networks and typically co-locate with each other. Here we propose to design them using an automated approach.

1137

ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training

To mitigate these issues, we propose a new compression technique, Scalable Sparsified Gradient Compression (ScaleCom), that (i) leverages similarity in the gradient distribution amongst learners to provide a commutative compressor and keep the communication cost constant in the number of workers, and (ii) includes a low-pass filter in local gradient accumulations to mitigate the impact of large-batch training and significantly improve scalability.

1138

RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder

This paper presents an attention-based decoder module, similar to that in the Transformer (Vaswani et al., 2017), to bridge other representations into a typical object detector built on a single representation format, in an end-to-end fashion.

1139

Efficient Learning of Discrete Graphical Models

In this work, we provide the first sample-efficient method based on the Interaction Screening framework that allows one to provably learn fully general discrete factor models with node-specific discrete alphabets and multi-body interactions, specified in an arbitrary basis.

1140

Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals

We study the fundamental problems of agnostically learning halfspaces and ReLUs under Gaussian marginals.

1141

Neurosymbolic Transformers for Multi-Agent Communication

We propose a novel algorithm that synthesizes a control policy that combines a programmatic communication policy used to generate the communication graph with a transformer policy network used to choose actions.

1142

Fairness in Streaming Submodular Maximization: Algorithms and Hardness

In this work we address the question: Is it possible to create fair summaries for massive datasets?

1143

Smoothed Geometry for Robust Attribution

To mitigate these attacks in practice, we propose an inexpensive regularization method that promotes these conditions in DNNs, as well as a stochastic smoothing technique that does not require re-training.

1144

Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms

We prove that if NPCs use a dissimilarity measure induced by a seminorm, the hypothesis margin is a tight lower bound on the size of adversarial attacks and can be calculated in constant time—this provides the first adversarial robustness certificate calculable in reasonable time.
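
A minimal sketch of this certificate for the Euclidean case (the paper covers arbitrary seminorms): the hypothesis margin is half the gap between the distance to the nearest wrong-class prototype and the nearest correct-class one.

```python
import numpy as np

def hypothesis_margin(x, prototypes, labels, y):
    # Half the gap between the nearest wrong-class and the nearest
    # correct-class prototype; a positive value lower-bounds the
    # size of any successful adversarial perturbation.
    d = np.linalg.norm(prototypes - x, axis=1)
    d_correct = d[labels == y].min()
    d_wrong = d[labels != y].min()
    return 0.5 * (d_wrong - d_correct)
```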

1145

Multi-agent active perception with prediction rewards

In this paper, we model multi-agent active perception as a decentralized partially observable Markov decision process (Dec-POMDP) with a convex centralized prediction reward.

1146

A Local Temporal Difference Code for Distributional Reinforcement Learning

Here, we introduce the Laplace code: a local temporal difference code for distributional reinforcement learning that is representationally powerful and computationally straightforward.

1147

Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions

Here, we develop a quantum algorithm for sampling from this optimized distribution over features, in runtime O(D) that is linear in the dimension D of the input data.

1148

CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations

We propose CaSPR, a method to learn object-centric Canonical Spatiotemporal Point Cloud Representations of dynamically moving or evolving objects.

1149

Deep Automodulators

We introduce a new category of generative autoencoders called automodulators.

1150

Convolutional Tensor-Train LSTM for Spatio-Temporal Learning

In this paper, we propose a higher-order convolutional LSTM model that can efficiently learn these correlations, along with a succinct representation of the history.

1151

The Potts-Ising model for discrete multivariate data

We introduce a variation on the Potts model that allows for general categorical marginals and Ising-type multivariate dependence.

1152

Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech

In this work we construct interpretable multi-timescale representations by forcing individual units in an LSTM LM to integrate information over specific temporal scales.

1153

Group-Fair Online Allocation in Continuous Time

In order to address these applications, we consider a continuous-time online learning problem with fairness considerations, and present a novel framework based on continuous-time utility maximization.

1154

Decentralized TD Tracking with Linear Function Approximation and its Finite-Time Analysis

The present contribution deals with decentralized policy evaluation in multi-agent Markov decision processes using temporal-difference (TD) methods with linear function approximation for scalability.

1155

Understanding Gradient Clipping in Private SGD: A Geometric Perspective

We first demonstrate how gradient clipping can prevent SGD from converging to a stationary point. We then provide a theoretical analysis on private SGD with gradient clipping.

1156

O(n) Connections are Expressive Enough: Universal Approximability of Sparse Transformers

In this paper, we address these questions and provide a unifying framework that captures existing sparse attention models.

1157

Identifying signal and noise structure in neural population activity with Gaussian process factor models

To learn the parameters of our model, we introduce a Fourier-domain black box variational inference method that quickly identifies smooth latent structure.

1158

Equivariant Networks for Hierarchical Structures

More generally, we show that any equivariant map for the hierarchy has this form.

1159

MinMax Methods for Optimal Transport and Beyond: Regularization, Approximation and Numerics

We study MinMax solution methods for a general class of optimization problems related to (and including) optimal transport.

1160

A Discrete Variational Recurrent Topic Model without the Reparametrization Trick

We show how to learn a neural topic model with discrete random variables—one that explicitly models each word’s assigned topic—using neural variational inference that does not rely on stochastic backpropagation to handle the discrete variables.

1161

Transferable Graph Optimizers for ML Compilers

To address these limitations, we propose an end-to-end, transferable deep reinforcement learning method for computational graph optimization (GO), based on a scalable sequential attention mechanism over an inductive graph neural network.

1162

Learning with Operator-valued Kernels in Reproducing Kernel Krein Spaces

In this work, we consider operator-valued kernels which might not be necessarily positive definite.

1163

Learning Bounds for Risk-sensitive Learning

In this paper, we propose to study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents (OCE): our general scheme can handle various known risks, e.g., the entropic risk, mean-variance, and conditional value-at-risk, as special cases.

1164

Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints

We introduce a series of challenging chaotic and extended-body systems, including systems with $N$-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, to push the limits of current approaches.

1165

Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency

Here we introduce trial-by-trial error consistency, a quantitative analysis for measuring whether two decision making systems systematically make errors on the same inputs.

1166

Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations

To address such a challenge, focusing on the episodic setting where the action-value functions are represented by a kernel function or over-parametrized neural network, we propose the first provable RL algorithm with both polynomial runtime and sample complexity, without additional assumptions on the data-generating model.

1167

Constant-Expansion Suffices for Compressed Sensing with Generative Priors

Our main contribution is to break this strong expansivity assumption, showing that \emph{constant} expansivity suffices for efficient recovery algorithms and is, moreover, information-theoretically necessary.

1168

RANet: Region Attention Network for Semantic Segmentation

In this paper, we introduce the \emph{Region Attention Network} (RANet), a novel attention network for modeling the relationship between object regions.

1169

A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent

This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the dimension of feature space $N$ are all large and comparable.

1170

Learning sparse codes from compressed representations with biologically plausible local wiring constraints

The main contribution of this paper is to leverage recent results on structured random matrices to propose a theoretical neuroscience model of randomized projections for communication between cortical areas that is consistent with the local wiring constraints observed in neuroanatomy.

1171

Self-Imitation Learning via Generalized Lower Bound Q-learning

In this work, we propose an n-step lower bound which generalizes the original return-based lower-bound Q-learning, and introduce a new family of self-imitation learning algorithms.

1172

Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity

We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5}\cdot 2^{\log^*|G|}$, which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a $d^2$ factor.

1173

Directional Pruning of Deep Neural Networks

In the light of the fact that the stochastic gradient descent (SGD) often finds a flat minimum valley in the training loss, we propose a novel directional pruning method which searches for a sparse minimizer in or close to that flat region.

1174

Smoothly Bounding User Contributions in Differential Privacy

For a better trade-off between utility and privacy guarantee, we propose a method which smoothly bounds user contributions by setting appropriate weights on data points and apply it to estimating the mean/quantiles, linear regression, and empirical risk minimization.

1175

Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping

In this work, we propose a method based on progressive layer dropping that speeds up the training of Transformer-based language models, not through excessive hardware resources but through efficiency gains from changes to the model architecture and training technique.

1176

Online Planning with Lookahead Policies

In this work, we devise a multi-step greedy RTDP algorithm, which we call $h$-RTDP, that replaces the 1-step greedy policy with an $h$-step lookahead policy.

1177

Learning Deep Attribution Priors Based On Prior Knowledge

Here, we propose the deep attribution prior (DAPr) framework to exploit such information to overcome the limitations of attribution methods.

1178

Using noise to probe recurrent neural network structure and prune synapses

Here we suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant.

1179

NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity

Hence, we propose an efficient parameter decomposition method and the concept of flow indication embedding, which are key missing components that enable density estimation from a single neural network.

1180

Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge

We train CNNs designed based on ResNet-56 and ResNet-110 using three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-IID variants.

1181

Neural FFTs for Universal Texture Image Synthesis

In this work, inspired by the repetitive nature of texture patterns, we find that texture synthesis can be viewed as (local) \textit{upsampling} in the Fast Fourier Transform (FFT) domain.

1182

Graph Cross Networks with Vertex Infomax Pooling

We propose a novel graph cross network (GXN) to achieve comprehensive feature learning from multiple scales of a graph.

1183

Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms

We study and provide instance-optimal algorithms in differential privacy by extending and approximating the inverse sensitivity mechanism.

1184

Calibration of Shared Equilibria in General Sum Partially Observable Markov Games

This paper aims at i) formally understanding equilibria reached by such agents, and ii) matching emergent phenomena of such equilibria to real-world targets.

1185

MOPO: Model-based Offline Policy Optimization

In this paper, we observe that an existing model-based RL algorithm on its own already produces significant gains in the offline setting, as compared to model-free approaches, despite not being designed for this setting.

1186

Building powerful and equivariant graph neural networks with structural message-passing

We address this problem and propose a powerful and equivariant message-passing framework based on two ideas: first, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.

1187

Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning

In this paper, we propose a practical optimistic exploration algorithm (H-UCRL).

1188

Practical Low-Rank Communication Compression in Decentralized Deep Learning

We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors.

1189

Mutual exclusivity as a challenge for deep neural networks

In this paper, we investigate whether or not vanilla neural architectures have an ME bias, demonstrating that they lack this learning assumption.

1190

3D Shape Reconstruction from Vision and Touch

In this paper, we study this problem and present an effective chart-based approach to multi-modal shape understanding which encourages a similar fusion of vision and touch information.

1191

GradAug: A New Regularization Method for Deep Neural Networks

We propose a new regularization method to alleviate over-fitting in deep neural networks.

1192

An Equivalence between Loss Functions and Non-Uniform Sampling in Experience Replay

We show that any loss function evaluated with non-uniformly sampled data can be transformed into another uniformly sampled loss function with the same expected gradient.
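
To make the stated equivalence concrete, here is a hedged sketch of the underlying identity (names and shapes are illustrative): weighting a uniformly sampled example $i$ by $N p_i$ reproduces, in expectation, the gradient obtained by sampling example $i$ with probability $p_i$.

```python
import torch

def uniform_equivalent_loss(per_example_loss, sample_probs, n_total):
    # per_example_loss: losses of a *uniformly* sampled batch
    # sample_probs:     the non-uniform probabilities p_i those
    #                   examples would have under the original scheme
    # Identity: E_{i~p}[grad L_i] = E_{i~Uniform}[grad(N * p_i * L_i)],
    # so the reweighted uniform loss has the same expected gradient.
    weights = n_total * sample_probs
    return (weights * per_example_loss).mean()
```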

1193

Learning Utilities and Equilibria in Non-Truthful Auctions

We give an almost matching (up to polylog factors) lower bound on the sample complexity of learning utilities.

1194

Rational neural networks

We consider neural networks with rational activation functions.

1195

DISK: Learning local features with policy gradient

We introduce DISK (DIScrete Keypoints), a novel method that overcomes these obstacles by leveraging principles from Reinforcement Learning (RL), optimizing end-to-end for a high number of correct feature matches.

1196

Transfer Learning via $\ell_1$ Regularization

We propose a method for transferring knowledge from a source domain to a target domain via $\ell_1$ regularization in high dimension.
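
A minimal sketch of one natural instantiation of this idea (illustrative, not necessarily the paper's exact estimator): fit only the deviation from the source coefficients under an $\ell_1$ penalty, so the target model stays sparsely different from the source model.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_transfer(X, y, beta_source, lam=0.1):
    # Fit the *difference* from the source coefficients with an
    # l1 penalty; the transferred knowledge is beta_source, and
    # only a sparse correction is learned on the target data.
    residual = y - X @ beta_source
    delta = Lasso(alpha=lam).fit(X, residual).coef_
    return beta_source + delta
```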

1197

GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network

We propose GOCor, a fully differentiable dense matching module, acting as a direct replacement to the feature correlation layer.

1198

Deep Inverse Q-learning with Constraints

In this work, we introduce a novel class of algorithms that only needs to solve the MDP underlying the demonstrated behavior once to recover the expert policy.

1199

Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities

In this paper, we propose {\em optimistic dual extrapolation (OptDE)}, a method that only performs {\em one} gradient evaluation per iteration.

1200

Prediction with Corrupted Expert Advice

We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption.

1201

Human Parsing Based Texture Transfer from Single Image to 3D Human via Cross-View Consistency

This paper proposes a human parsing based texture transfer model via cross-view consistency learning to generate the texture of 3D human body from a single image.

1202

Knowledge Augmented Deep Neural Networks for Joint Facial Expression and Action Unit Recognition

This paper proposes to systematically capture their dependencies and incorporate them into a deep learning framework for joint facial expression recognition and action unit detection.

1203

Point process models for sequence detection in high-dimensional neural spike trains

We address each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a small number of marked events in continuous time.

1204

Adversarial Attacks on Linear Contextual Bandits

In this paper, we study several attack scenarios and show that a malicious agent can force a linear contextual bandit algorithm to pull any desired arm $T - o(T)$ times over a horizon of $T$ steps, while applying adversarial modifications to either rewards or contexts with a cumulative cost that grows only logarithmically, as $O(\log T)$.

1205

Meta-Consolidation for Continual Learning

In this work, we present a novel methodology for continual learning called MERLIN: Meta-Consolidation for Continual Learning.

1206

Organizing recurrent network dynamics by task-computation to enable continual learning

Here, we develop a novel learning rule designed to minimize interference between sequentially learned tasks in recurrent networks.

1207

Lifelong Policy Gradient Learning of Factored Policies for Faster Training Without Forgetting

We provide a novel method for lifelong policy gradient learning that trains lifelong function approximators directly via policy gradients, allowing the agent to benefit from accumulated knowledge throughout the entire training process.

1208

Kernel Methods Through the Roof: Handling Billions of Points Efficiently

Towards this end, we designed a preconditioned gradient solver for kernel methods exploiting both GPU acceleration and parallelization with multiple GPUs, implementing out-of-core variants of common linear algebra operations to guarantee optimal hardware utilization.

1209

Spike and slab variational Bayes for high dimensional logistic regression

We study a mean-field spike and slab VB approximation of widely used Bayesian model selection priors in sparse high-dimensional logistic regression.

1210

Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness

In this paper, we propose a novel and effective regularization term for adversarial data augmentation.

1211

Fast geometric learning with symbolic matrices

We present an extension for standard machine learning frameworks that provides comprehensive support for this abstraction on CPUs and GPUs: our toolbox combines a versatile, transparent user interface with fast runtimes and low memory usage.

1212

MESA: Boost Ensemble Imbalanced Learning with MEta-SAmpler

In this paper, we introduce a novel ensemble IL framework named MESA.

1213

CoinPress: Practical Private Mean and Covariance Estimation

We present simple differentially private estimators for the parameters of multivariate sub-Gaussian data that are accurate at small sample sizes.

1214

Planning with General Objective Functions: Going Beyond Total Rewards

In this paper, based on techniques in sketching algorithms, we propose a novel planning algorithm in deterministic systems which deals with a large class of objective functions of the form $f(r_1, r_2, \ldots, r_H)$ that are of interest to practical applications.

1215

Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks

Here, we propose to augment conventional GCNs with geometric scattering transforms and residual convolutions.

1216

KFC: A Scalable Approximation Algorithm for $k$-center Fair Clustering

In this paper, we study the problem of fair clustering on the $k$-center objective.

1217

Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms

To address this question, we introduce a gradient-based online algorithm, Receding Horizon Inexact Gradient (RHIG), and analyze its performance by dynamic regrets in terms of the temporal variation of the environment and the prediction errors.

1218

Learning the Linear Quadratic Regulator from Nonlinear Observations

We introduce a new algorithm, RichID, which learns a near-optimal policy for the RichLQR with sample complexity scaling only with the dimension of the latent state space and the capacity of the decoder function class.

1219

Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate

The current paper highlights other ways in which the behavior of normalized nets departs from traditional viewpoints, and then initiates a formal framework for studying their mathematics via a suitable adaptation of the conventional framework, namely modeling the SGD-induced training trajectory via a suitable stochastic differential equation (SDE) with a noise term that captures gradient noise.

1220

Scalable Graph Neural Networks via Bidirectional Propagation

In this paper, we present GBP, a scalable GNN that utilizes a localized bidirectional propagation process from both the feature vector and the training/testing nodes.

1221

Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning

To alleviate this issue, we formulate a convex optimization problem to softly refine the pseudo-labels generated from the biased model, and develop a simple algorithm, named Distribution Aligning Refinery of Pseudo-label (DARP) that solves it provably and efficiently.

1222

Assisted Learning: A Framework for Multi-Organization Learning

In this work, we introduce the Assisted Learning framework for organizations to assist each other in supervised learning tasks without revealing any organization’s algorithm, data, or even task.

1223

The Strong Screening Rule for SLOPE

We develop a screening rule for SLOPE by examining its subdifferential and show that this rule is a generalization of the strong rule for the lasso.

1224

STLnet: Signal Temporal Logic Enforced Multivariate Recurrent Neural Networks

In this paper, we develop a new temporal logic-based learning framework, STLnet, which guides the RNN learning process with auxiliary knowledge of model properties, and produces a more robust model for improved future predictions.

1225

Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks

This paper proposes Election Coding, a coding-theoretic framework to guarantee Byzantine-robustness for distributed learning algorithms based on signed stochastic gradient descent (SignSGD) that minimizes the worker-master communication load.

1226

Reducing Adversarially Robust Learning to Non-Robust PAC Learning

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e. the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner.

1227

Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples

We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm that materially improves results with no increase in computational cost.
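
In sketch form, the modification touches only the generator step: rank the batch by discriminator score and keep the top $k$ samples (the non-saturating loss and the variable names below are assumed for illustration):

```python
import torch

# One generator update with top-k sample selection (sketch):
z = torch.randn(batch_size, z_dim)               # batch_size, z_dim assumed
d_scores = discriminator(generator(z)).view(-1)  # critic scores for fakes
topk_scores, _ = torch.topk(d_scores, k)         # the "one line": drop bad samples
g_loss = -topk_scores.mean()                     # loss computed on top-k only
g_loss.backward()
```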

1228

Black-Box Optimization with Local Generative Surrogates

We propose a novel method for gradient-based optimization of black-box simulators using differentiable local surrogate models.

1229

Efficient Generation of Structured Objects with Constrained Adversarial Networks

As a remedy, we propose Constrained Adversarial Networks (CANs), an extension of GANs in which the constraints are embedded into the model during training.

1230

Hard Example Generation by Texture Synthesis for Cross-domain Shape Similarity Learning

In the paper, we identify the source of the poor performance and propose a practical solution to this problem.

1231

Recovery of sparse linear classifiers from mixture of responses

We study the hitherto unstudied problem of upper-bounding the query complexity of recovering all the hyperplanes, especially in the case when the hyperplanes are sparse.

1232

Efficient Distance Approximation for Structured High-Dimensional Distributions via Learning

Specifically, we present algorithms for the following problems (where $d_{TV}$ is the total variation distance): given sample access to two Bayesian networks $P_1$ and $P_2$ over known directed acyclic graphs $G_1$ and $G_2$ having $n$ nodes and bounded in-degree, approximate $d_{TV}(P_1, P_2)$ to within additive error $\varepsilon$ using $\mathrm{poly}(n, 1/\varepsilon)$ samples and time.

1233

A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints

In this paper, we consider an online optimization problem in which the reward functions are DR-submodular, and in addition to maximizing the total reward, the sequence of decisions must satisfy some convex constraints on average.

1234

Learning Sparse Prototypes for Text Generation

In this paper, we propose a novel generative model that automatically learns a sparse prototype support set that, nonetheless, achieves strong language modeling performance.

1235

Implicit Rank-Minimizing Autoencoder

In this work, the rank of the covariance matrix of the codes is implicitly minimized by relying on the fact that gradient descent learning in multi-layer linear networks leads to minimum-rank solutions.
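
In sketch form, the construction simply inserts a chain of bias-free linear layers between encoder and decoder; gradient descent on such deep linear chains is implicitly biased toward low-rank maps. Width and depth below are illustrative:

```python
import torch.nn as nn

class ImplicitRankMinimizingAE(nn.Module):
    def __init__(self, encoder, decoder, code_dim=128, depth=4):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        # Extra linear layers: no bias, no nonlinearity. Training
        # implicitly shrinks the rank of the code's covariance.
        self.chain = nn.Sequential(
            *[nn.Linear(code_dim, code_dim, bias=False) for _ in range(depth)])

    def forward(self, x):
        return self.decoder(self.chain(self.encoder(x)))
```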

1236

Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning

In this paper, we propose a deep reinforcement learning (DRL) based framework to efficiently perform runtime channel pruning on convolutional neural networks (CNNs).

1237

Task-Oriented Feature Distillation

In this paper, we propose a novel distillation method named task-oriented feature distillation (TOFD) where the transformation is convolutional layers that are trained in a data-driven manner by task loss.

1238

Entropic Causal Inference: Identifiability and Finite Sample Results

In this paper, we prove a variant of their conjecture. Namely, we show that for almost all causal models where the exogenous variable has entropy that does not scale with the number of states of the observed variables, the causal direction is identifiable from observational data.

1239

Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

In this paper we show that inverse RL is a principled mechanism for reusing experience across tasks.

1240

Variance-Reduced Off-Policy TDC Learning: Non-Asymptotic Convergence Analysis

In this work, we develop a variance reduction scheme for the two time-scale TDC algorithm in the off-policy setting and analyze its non-asymptotic convergence rate over both i.i.d.\ and Markovian samples.

1241

AdaTune: Adaptive Tensor Program Compilation Made Efficient

In this paper, we present a new method, called AdaTune, that significantly reduces the optimization time of tensor programs for high-performance deep learning inference.

1242

When Do Neural Networks Outperform Kernel Methods?

Building on these results, we present the spiked covariates model that can capture in a unified framework both behaviors observed in earlier works.

1243

STEER : Simple Temporal Regularization For Neural ODE

In this paper we propose a new regularization technique: randomly sampling the end time of the ODE during training.
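
A hedged sketch of the technique: instead of always integrating to a fixed terminal time $T$, draw the end time uniformly from an interval around $T$ at every training step (the `odeint` call from torchdiffeq is an assumed interface):

```python
import torch
# from torchdiffeq import odeint   # assumed ODE solver

def sampled_end_time(T=1.0, b=0.5):
    # Uniform sample from (T - b, T + b), redrawn every step.
    return T + b * (2.0 * torch.rand(()) - 1.0)

t1 = sampled_end_time()
# z1 = odeint(ode_func, z0, torch.stack([torch.zeros(()), t1]))
```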

1244

A Variational Approach for Learning from Positive and Unlabeled Data

In this paper, we introduce a variational principle for PU learning that allows us to quantitatively evaluate the modeling error of the Bayesian classifier directly from given data.

1245

Efficient Clustering Based On A Unified View Of $K$-means And Ratio-cut

Firstly, a unified framework of k-means and ratio-cut is revisited, and a novel and efficient clustering algorithm is then proposed based on this framework.

1246

Recurrent Switching Dynamical Systems Models for Multiple Interacting Neural Populations

To tackle this challenge, we develop recurrent switching linear dynamical systems models for multiple populations.

1247

Coresets via Bilevel Optimization for Continual Learning and Streaming

In this work, we propose a novel coreset construction via cardinality-constrained bilevel optimization.

1248

Generalized Independent Noise Condition for Estimating Latent Variable Causal Graphs

To this end, in this paper, we consider Linear, Non-Gaussian Latent variable Models (LiNGLaMs), in which latent confounders are also causally related, and propose a Generalized Independent Noise (GIN) condition to estimate such latent variable graphs.

1249

Understanding and Exploring the Network with Stochastic Architectures

In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide the first systematic investigation of it as a stand-alone problem.

1250

All-or-nothing statistical and computational phase transitions in sparse spiked matrix estimation

We prove explicit low-dimensional variational formulas for the asymptotic mutual information between the spike and the observed noisy matrix and analyze the approximate message passing algorithm in the sparse regime.

1251

Deep Evidential Regression

In this paper, we propose a novel method for training non-Bayesian NNs to estimate a continuous target as well as its associated evidence in order to learn both aleatoric and epistemic uncertainty.

1252

Analytical Probability Distributions and Exact Expectation-Maximization for Deep Generative Networks

We exploit the Continuous Piecewise Affine property of modern DGNs to derive their posterior and marginal distributions as well as the latter’s first two moments.

1253

Bayesian Pseudocoresets

We address both of these issues with a single unified solution, Bayesian pseudocoresets, a small weighted collection of synthetic "pseudodata", along with a variational optimization method to select both pseudodata and weights.

1254

See, Hear, Explore: Curiosity via Audio-Visual Association

In this paper, we introduce an alternative form of curiosity that rewards novel associations between different senses.

1255

Adversarial Training is a Form of Data-dependent Operator Norm Regularization

We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks.

1256

A Biologically Plausible Neural Network for Slow Feature Analysis

In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation.

1257

Learning Feature Sparse Principal Subspace

This paper presents new algorithms to solve the feature-sparsity constrained PCA problem (FSPCA), which performs feature selection and PCA simultaneously.

1258

Online Adaptation for Consistent Mesh Reconstruction in the Wild

This paper presents an algorithm to reconstruct temporally consistent 3D meshes of deformable object instances from videos in the wild.

1259

Online learning with dynamics: A minimax perspective

We consider the problem of online learning with dynamics, where a learner interacts with a stateful environment over multiple rounds.

1260

Learning to Select Best Forecast Tasks for Clinical Outcome Prediction

To address this challenge, we propose a method to automatically select from a large set of auxiliary tasks which yield a representation most useful to the target task.

1261

Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping

In this paper, we propose a new accelerated stochastic first-order method called clipped-SSTM for smooth convex stochastic optimization with heavy-tailed noise in stochastic gradients, and derive the first high-probability complexity bounds for this method, closing the gap in the theory of stochastic optimization with heavy-tailed noise.

1262

Adaptive Experimental Design with Temporal Interference: A Maximum Likelihood Approach

Remarkably, in our setting, using a novel application of classical martingale analysis of Markov chains via Poisson’s equation, we characterize efficient designs via a succinct convex optimization problem.

1263

From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering

In this work, we provide the first continuous relaxation of Dasgupta’s discrete optimization problem with provable quality guarantees.

1264

The Autoencoding Variational Autoencoder

Does a Variational AutoEncoder (VAE) consistently encode typical samples generated from its decoder? This paper shows that the perhaps surprising answer to this question is "No"; a (nominally trained) VAE does not necessarily amortize inference for typical samples that it is capable of generating.
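
A hedged sketch of the diagnostic implied by this finding, with `encoder`, `decoder`, and `latent_dim` as assumed interfaces: decode a prior sample, re-encode it, decode again, and measure the gap.

```python
import torch

z = torch.randn(64, latent_dim)      # latent_dim: assumed
x = decoder(z)                       # a "typical" generated sample
z_hat = encoder(x)                   # assumed to return the posterior mean
x_hat = decoder(z_hat)
gap = (x - x_hat).pow(2).mean()      # large gap: inference is not amortized
```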

1265

A Fair Classifier Using Kernel Density Estimation

In this work, we develop a kernel density estimation trick to quantify fairness measures that capture the degree of the irrelevancy.

1266

A Randomized Algorithm to Reduce the Support of Discrete Measures

We give a simple geometric characterization of barycenters via negative cones and derive a randomized algorithm that computes this new measure by “greedy geometric sampling”.

1267

Distributionally Robust Federated Averaging

In this paper, we study communication efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling.

1268

Sharp uniform convergence bounds through empirical centralization

We introduce the use of empirical centralization to derive novel practical, probabilistic, sample-dependent bounds on the Supremum Deviation (SD) of empirical means of functions in a family from their expectations.

1269

COBE: Contextualized Object Embeddings from Narrated Instructional Video

Instead of relying on manually-labeled data for this task, we propose a new framework for learning Contextualized OBject Embeddings (COBE) from automatically-transcribed narrations of instructional videos.

1270

Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control

In this paper, we present a Knowledge Transfer based Multi-task Deep Reinforcement Learning framework (KTM-DRL) for continuous control, which enables a single DRL agent to achieve expert-level performance in multiple different tasks by learning from task-specific teachers.

1271

Finite Versus Infinite Neural Networks: an Empirical Study

We perform a careful, thorough, and large scale empirical study of the correspondence between wide neural networks and kernel methods.

1272

Supermasks in Superposition

We present the Supermasks in Superposition (SupSup) model, capable of sequentially learning thousands of tasks without catastrophic forgetting.

1273

Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors

In this work, we study an alternative prior where the low-rank component is in the range of a trained generative network.

1274

Almost Optimal Model-Free Reinforcement Learning via Reference-Advantage Decomposition

We propose a model-free algorithm, UCB-ADVANTAGE, and prove that it achieves $\tilde{O}(\sqrt{H^2 SAT})$ regret, where $T = KH$ and $K$ is the number of episodes to play.

1275

Learning to Incentivize Other Learning Agents

Observing that humans often provide incentives to influence others’ behavior, we propose to equip each RL agent in a multi-agent environment with the ability to give rewards directly to other agents, using a learned incentive function.

1276

Displacement-Invariant Matching Cost Learning for Accurate Optical Flow Estimation

This paper proposes a novel solution that is able to bypass the requirement of building a 5D feature volume while still allowing the network to learn suitable matching costs from data.

1277

Distributionally Robust Local Non-parametric Conditional Estimation

To alleviate these issues, we propose a new distributionally robust estimator that generates non-parametric local estimates by minimizing the worst-case conditional expected loss over all adversarial distributions in a Wasserstein ambiguity set.

1278

Robust Multi-Object Matching via Iterative Reweighting of the Graph Connection Laplacian

We propose an efficient and robust iterative solution to the multi-object matching problem.

1279

Meta-Gradient Reinforcement Learning with an Objective Discovered Online

In this work, we propose an algorithm based on meta-gradient descent that discovers its own objective, flexibly parameterised by a deep neural network, solely from interactive experience with its environment.

1280

Learning Strategy-Aware Linear Classifiers

We address the question of repeatedly learning linear classifiers against agents who are \emph{strategically} trying to \emph{game} the deployed classifiers, and we use the \emph{Stackelberg regret} to measure the performance of our algorithms.

1281

Upper Confidence Primal-Dual Reinforcement Learning for CMDP with Adversarial Loss

In this work, we propose a new \emph{upper confidence primal-dual} algorithm, which only requires the trajectories sampled from the transition model.

1282

Calibrating Deep Neural Networks using Focal Loss

We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to justify the empirically excellent performance of focal loss.
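
For reference, a minimal PyTorch sketch of the focal loss that the analysis centers on, with the usual focusing parameter $\gamma$ (it reduces to cross-entropy at $\gamma = 0$):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Down-weights confident (easy) examples by (1 - p_t)^gamma,
    # which the paper links to better-calibrated networks.
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```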

1283

Optimizing Mode Connectivity via Neuron Alignment

We propose a more general framework to investigate the effect of symmetry on landscape connectivity by accounting for the weight permutations of the networks being connected.

1284

Information Theoretic Regret Bounds for Online Nonlinear Control

This work studies the problem of sequential control in an unknown, nonlinear dynamical system, where we model the underlying system dynamics as an unknown function in a known Reproducing Kernel Hilbert Space.

1285

A kernel test for quasi-independence

In this paper, we propose a nonparametric statistical test of quasi-independence.

1286

First Order Constrained Optimization in Policy Space

We propose a novel approach called First Order Constrained Optimization in Policy Space (FOCOPS) which maximizes an agent’s overall reward while ensuring the agent satisfies a set of cost constraints.

1287

Learning Augmented Energy Minimization via Speed Scaling

Inspired by recent work on learning-augmented online algorithms, we propose an algorithm which incorporates predictions in a black-box manner and outperforms any online algorithm if the accuracy is high, yet maintains provable guarantees if the prediction is very inaccurate.

1288

Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning

In this work we measure fairness according to demographic parity.

1289

Deep Rao-Blackwellised Particle Filters for Time Series Forecasting

We propose a Monte Carlo objective that leverages the conditional linearity by computing the corresponding conditional expectations in closed-form and a suitable proposal distribution that is factorised similarly to the optimal proposal distribution.

1290

Why are Adaptive Methods Good for Attention Models?

In this paper, we provide empirical and theoretical evidence that a heavy-tailed distribution of the noise in stochastic gradients is one cause of SGD’s poor performance.

1291

Neural Sparse Representation for Image Restoration

Inspired by the robustness and efficiency of sparse representation in sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.

1292

Boosting First-Order Methods by Shifting Objective: New Schemes with Faster Worst-Case Rates

We propose a new methodology to design first-order methods for unconstrained strongly convex problems.

1293

Robust Sequence Submodular Maximization

In this paper, we study a new problem of robust sequence submodular maximization with cardinality constraints.

1294

Certified Monotonic Neural Networks

In this work, we propose to certify the monotonicity of the general piece-wise linear neural networks by solving a mixed integer linear programming problem.

1295

System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina

Here, we present a computational model of temporal processing in the inner retina, including inhibitory feedback circuits and realistic synaptic release mechanisms.

1296

Efficient Algorithms for Device Placement of DNN Graph Operators

In this paper, we identify and isolate the structured optimization problem at the core of device placement of DNN operators, for both inference and training, especially in modern pipelined settings.

1297

Active Invariant Causal Prediction: Experiment Selection through Stability

In this work we propose a new active learning (i.e. experiment selection) framework (A-ICP) based on Invariant Causal Prediction (ICP) (Peters et al. 2016).

1298

BOSS: Bayesian Optimization over String Spaces

This article develops a Bayesian optimization (BO) method which acts directly over raw strings, proposing the first uses of string kernels and genetic algorithms within BO loops.

1299

Model Interpretability through the lens of Computational Complexity

We make a step towards such a theory by studying whether folklore interpretability claims have a correlate in terms of computational complexity theory.

1300

Markovian Score Climbing: Variational Inference with KL(p||q)

This paper develops a simple algorithm for reliably minimizing the inclusive KL using stochastic gradients with vanishing bias.

1301

Improved Analysis of Clipping Algorithms for Non-convex Optimization

In this paper, we bridge the gap by presenting a general framework to study the clipping algorithms, which also takes momentum methods into consideration.

1302

Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs

We develop a new approach to obtaining high probability regret bounds for online learning with bandit feedback against an adaptive adversary.

1303

A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection

We propose average Localisation-Recall-Precision (aLRP), a unified, bounded, balanced and ranking-based loss function for both classification and localisation tasks in object detection.

1304

StratLearner: Learning a Strategy for Misinformation Prevention in Social Networks

In this paper, we consider such a setting and study the misinformation prevention problem.

1305

A Unified Switching System Perspective and Convergence Analysis of Q-Learning Algorithms

This paper develops a novel and unified framework to analyze the convergence of a large family of Q-learning algorithms from the switching system perspective.

1306

Kernel Alignment Risk Estimator: Risk Prediction from Training Data

We study the risk (i.e. generalization error) of Kernel Ridge Regression (KRR) for a kernel $K$ with ridge $\lambda > 0$ and i.i.d. observations.

1307

Calibrating CNNs for Lifelong Learning

We present an approach for lifelong/continual learning of convolutional neural networks (CNN) that does not suffer from the problem of catastrophic forgetting when moving from one task to the other.

1308

Online Convex Optimization Over Erdos-Renyi Random Networks

The work studies how node-to-node communications over an Erdős-Rényi random network influence distributed online convex optimization, which is vital in solving large-scale machine learning in antagonistic or changing environments.

1309

Robustness of Bayesian Neural Networks to Gradient-Based Attacks

In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs).

1310

Parametric Instance Classification for Unsupervised Visual Feature Learning

This paper presents parametric instance classification (PIC) for unsupervised visual feature learning.

1311

Sparse Weight Activation Training

In this work, we propose a novel CNN training algorithm called Sparse Weight Activation Training (SWAT).

1312

Collapsing Bandits and Their Application to Public Health Intervention

We propose and study Collapsing Bandits, a new restless multi-armed bandit (RMAB) setting in which each arm follows a binary-state Markovian process with a special structure: when an arm is played, the state is fully observed, thus "collapsing" any uncertainty, but when an arm is passive, no observation is made, thus allowing uncertainty to evolve.

1313

Neural Sparse Voxel Fields

In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering.

1314

A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding

We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems, and whose architectures are derived from an optimization algorithm.

1315

The Discrete Gaussian for Differential Privacy

With these shortcomings in mind, we introduce and analyze the discrete Gaussian in the context of differential privacy.
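
For intuition only, a naive sampler sketch (the paper develops an exact and efficient sampler): normalize $\exp(-x^2/2\sigma^2)$ over a truncated integer support.

```python
import numpy as np

def sample_discrete_gaussian(sigma, size=1, tails=12):
    # Truncated inverse-CDF sketch; the mass beyond ~12 sigma
    # is negligible, so the truncation error is tiny.
    half = int(np.ceil(tails * sigma)) + 1
    support = np.arange(-half, half + 1)
    w = np.exp(-support.astype(float) ** 2 / (2.0 * sigma ** 2))
    return np.random.choice(support, size=size, p=w / w.sum())
```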

1316

Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing

We develop two methods for the following fundamental statistical task: given an $\varepsilon$-corrupted set of $n$ samples from a $d$-dimensional sub-Gaussian distribution, return an approximate top eigenvector of the covariance matrix.

1317

Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes

In this work, we build on this framework and propose a simple and efficient algorithm for adaptive importance sampling for finite-sum optimization and sampling with decreasing step-sizes.

1318

Learning efficient task-dependent representations with synaptic plasticity

Here we construct a stochastic recurrent neural circuit model that can learn efficient, task-specific sensory codes using a novel form of reward-modulated Hebbian synaptic plasticity.

1319

A Contour Stochastic Gradient Langevin Dynamics Algorithm for Simulations of Multi-modal Distributions

We propose an adaptively weighted stochastic gradient Langevin dynamics algorithm, called contour stochastic gradient Langevin dynamics (CSGLD), for Bayesian learning in big data statistics.

1320

Error Bounds of Imitating Policies and Environments

In this paper, we first analyze the value gap between the expert policy and imitated policies under two imitation methods, behavioral cloning and generative adversarial imitation.

1321

Disentangling Human Error from Ground Truth in Segmentation of Medical Images

In this work, we present a method for jointly learning, from purely noisy observations alone, the reliability of individual annotators and the true segmentation label distributions, using two coupled CNNs.

1322

Consequences of Misaligned AI

The contributions of our paper are as follows: 1) we propose a novel model of an incomplete principal-agent problem from artificial intelligence; 2) we provide necessary and sufficient conditions under which indefinitely optimizing for any incomplete proxy objective leads to arbitrarily low overall utility; and 3) we show how modifying the setup to allow reward functions that reference the full state, or allowing the principal to update the proxy objective over time, can lead to higher utility solutions.

1323

Promoting Coordination through Policy Regularization in Multi-Agent Deep Reinforcement Learning

We propose two policy regularization methods: TeamReg, which is based on inter-agent action predictability, and CoachReg, which relies on synchronized behavior selection.

1324

Emergent Reciprocity and Team Formation from Randomized Uncertain Social Preferences

In this work, we show evidence of emergent direct reciprocity, indirect reciprocity and reputation, and team formation when training agents with randomized uncertain social preferences (RUSP), a novel environment augmentation that expands the distribution of environments agents play in.

1325

Hitting the High Notes: Subset Selection for Maximizing Expected Order Statistics

We consider the fundamental problem of selecting $k$ out of $n$ random variables in a way that the expected highest or second-highest value is maximized.

1326

Towards Scale-Invariant Graph-related Problem Solving by Iterative Homogeneous GNNs

Taking the perspective of synthesizing graph theory programs, we propose several extensions to address the issue.

1327

Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses

In this work, we consider OCO for relative Lipschitz and relative strongly convex functions.

1328

The Lottery Ticket Hypothesis for Pre-trained BERT Networks

In this work, we combine these observations to assess whether such trainable, transferrable subnetworks exist in pre-trained BERT models.

1329

Label-Aware Neural Tangent Kernel: Toward Better Generalization and Local Elasticity

In this paper, we introduce a novel approach from the perspective of \emph{label-awareness} to reduce this gap for the NTK.

1330

Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples

We present a transductive learning algorithm that takes as input training examples from a distribution P and arbitrary (unlabeled) test examples, possibly chosen by an adversary.

1331

AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image.

1332

Few-shot Image Generation with Elastic Weight Consolidation

Crucially, we regularize the changes of the weights during this adaptation, in order to best preserve the information of the source dataset, while fitting the target.

1333

On the Expressiveness of Approximate Inference in Bayesian Neural Networks

We study the quality of common variational methods in approximating the Bayesian predictive distribution.

1334

Non-Crossing Quantile Regression for Distributional Reinforcement Learning

To address these issues, we introduce a general DRL framework by using non-crossing quantile regression to ensure the monotonicity constraint within each sampled batch, which can be incorporated with any well-known DRL algorithm.
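
One standard way to hard-wire the non-crossing constraint into the quantile head, sketched below (not necessarily the paper's exact parameterization): predict a base quantile plus positive increments and cumulatively sum them.

```python
import torch
import torch.nn as nn

class MonotoneQuantileHead(nn.Module):
    def __init__(self, hidden, n_quantiles):
        super().__init__()
        self.base = nn.Linear(hidden, 1)
        self.increments = nn.Linear(hidden, n_quantiles - 1)

    def forward(self, h):
        # Softplus keeps every increment positive, so the
        # cumulative sum yields non-crossing quantiles.
        deltas = nn.functional.softplus(self.increments(h))
        q0 = self.base(h)
        return torch.cat([q0, q0 + deltas.cumsum(-1)], dim=-1)
```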

1335

Dark Experience for General Continual Learning: a Strong, Simple Baseline

We address it through mixing rehearsal with knowledge distillation and regularization; our simple baseline, Dark Experience Replay, matches the network’s logits sampled throughout the optimization trajectory, thus promoting consistency with its past.
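
In sketch form, the Dark Experience Replay objective adds to the current-task loss an MSE between the network's present logits on buffered inputs and the logits stored when those inputs were first seen (`alpha` weights the replay term; buffer management is omitted):

```python
import torch
import torch.nn.functional as F

def der_loss(model, x, y, buf_x, buf_logits, alpha=0.5):
    task = F.cross_entropy(model(x), y)          # current task
    replay = F.mse_loss(model(buf_x), buf_logits)  # match past logits
    return task + alpha * replay
```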

1336

Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping

In this paper, we consider the problem of adaptively utilizing a given shaping reward function.

1337

Neural encoding with visual attention

Using concurrent eye-tracking and functional Magnetic Resonance Imaging (fMRI) recordings from a large cohort of human subjects watching movies, we first demonstrate that leveraging gaze information, in the form of attentional masking, can significantly improve brain response prediction accuracy in a neural encoding model.

1338

On the linearity of large non-linear models: when and why the tangent kernel is constant

The goal of this work is to shed light on the remarkable phenomenon of "transition to linearity" of certain neural networks as their width approaches infinity.

1339

PLLay: Efficient Topological Layer based on Persistent Landscapes

In this work, we show differentiability with respect to layer inputs, for a general persistent homology with arbitrary filtration.

1340

Decentralized Langevin Dynamics for Bayesian Learning

Motivated by decentralized approaches to machine learning, we propose a collaborative Bayesian learning algorithm taking the form of decentralized Langevin dynamics in a non-convex setting.

1341

Shared Space Transfer Learning for analyzing multi-site fMRI data

This paper proposes the Shared Space Transfer Learning (SSTL) as a novel transfer learning (TL) approach that can functionally align homogeneous multi-site fMRI datasets, and so improve the prediction performance in every site.

1342

The Diversified Ensemble Neural Network

In this paper, we propose a principled ensemble technique by constructing the so-called diversified ensemble layer to combine multiple networks as individual modules.

1343

Inductive Quantum Embedding

We start by reformulating the original QE problem to allow for the induction. On the way, we also underscore some interesting analytic and geometric properties of the solution and leverage them to design a faster training scheme.

1344

Variational Bayesian Unlearning

This paper studies the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased.

1345

Batched Coarse Ranking in Multi-Armed Bandits

We study both the fixed budget and fixed confidence variants in MAB, and propose algorithms and prove impossibility results which together give almost tight tradeoffs between the total number of arms pulls and the number of policy changes.

1346

Understanding and Improving Fast Adversarial Training

Based on this observation, we propose a new regularization method, GradAlign, that prevents catastrophic overfitting by explicitly maximizing the gradient alignment inside the perturbation set and improves the quality of the FGSM solution.
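
A hedged sketch of the GradAlign term: penalize cosine misalignment between input gradients at the clean point and at a random point in the $\ell_\infty$ ball; the returned penalty would be added to the FGSM training loss with some weight.

```python
import torch
import torch.nn.functional as F

def grad_align_penalty(model, loss_fn, x, y, eps):
    def input_grad(inp):
        inp = inp.clone().requires_grad_(True)
        loss = loss_fn(model(inp), y)
        return torch.autograd.grad(loss, inp, create_graph=True)[0]

    g_clean = input_grad(x)
    g_rand = input_grad(x + eps * (2 * torch.rand_like(x) - 1))
    cos = F.cosine_similarity(g_clean.flatten(1), g_rand.flatten(1), dim=1)
    return (1.0 - cos).mean()  # 0 when gradients are perfectly aligned
```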

1347

Coded Sequential Matrix Multiplication For Straggler Mitigation

In this work, we consider a sequence of $J$ matrix multiplication jobs which needs to be distributed by a master across multiple worker nodes.

1348

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

A range of FL backdoor attacks have been introduced in the literature, but also methods to defend against them, and it is currently an open question whether FL systems can be tailored to be robust against backdoors. In this work, we provide evidence to the contrary.

1349

Certifiably Adversarially Robust Detection of Out-of-Distribution Data

In this paper, we are aiming for certifiable worst case guarantees for OOD detection by enforcing not only low confidence at the OOD point but also in an $l_\infty$-ball around it.

1350

Domain Generalization via Entropy Regularization

To ensure the conditional invariance of learned features, we propose an entropy regularization term that measures the dependency between the learned features and the class labels.

1351

Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels

Following the recognition that meta-learning is implementing learning in a multi-level model, we present a Bayesian treatment for the meta-learning inner loop through the use of deep kernels.

1352

Skeleton-bridged Point Completion: From Global Inference to Local Adjustment

To this end, we propose a skeleton-bridged point completion network (SK-PCN) for shape completion.

1353

Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding

As an alternative, we propose a novel method, Relative Entropy Coding (REC), that can directly encode the latent representation with codelength close to the relative entropy for single images, supported by our empirical results obtained on the Cifar10, ImageNet32 and Kodak datasets.

1354

Improved Guarantees for k-means++ and k-means++ Parallel

In this paper, we study k-means++ and k-means||, the two most popular algorithms for the classic k-means clustering problem.

1355

Sparse Spectrum Warped Input Measures for Nonstationary Kernel Learning

We establish a general form of explicit, input-dependent, measure-valued warpings for learning nonstationary kernels.

1356

An Efficient Adversarial Attack for Tree Ensembles

We study the problem of efficient adversarial attacks on tree based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs).

1357

Learning Continuous System Dynamics from Irregularly-Sampled Partial Observations

To tackle the above challenge, we present LG-ODE, a latent ordinary differential equation generative model for modeling multi-agent dynamic system with known graph structure.

1358

Online Bayesian Persuasion

In this paper, we relax this assumption through an online learning framework in which the sender faces a receiver with unknown type.

1359

Robust Pre-Training by Adversarial Contrastive Learning

Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness. In this work, we improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations.

1360

Random Walk Graph Neural Networks

In this paper, we propose a more intuitive and transparent architecture for graph-structured data, so-called Random Walk Graph Neural Network (RWNN).

1361

Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling

To overcome this failure, we investigate a double stepsize extragradient algorithm where the exploration step evolves at a more aggressive time-scale compared to the update step.

1362

Fast and Accurate $k$-means++ via Rejection Sampling

In this paper, we present such a near linear time algorithm for $k$-means++ seeding.
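
For context, the procedure being accelerated is the classic D² seeding of k-means++, which needs a full pass over the data for every new center. A minimal numpy sketch of that baseline (the paper's near-linear-time rejection-sampling variant is not shown):

```python
import numpy as np

def kmeans_pp_seeding(X, k, rng=None):
    """Classic k-means++ seeding: each new center is sampled with
    probability proportional to the squared distance to the nearest
    center chosen so far. Cost: one pass over all n points per center."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                 # first center: uniform
    d2 = np.sum((X - centers[0]) ** 2, axis=1)     # squared distances
    for _ in range(k - 1):
        idx = rng.choice(n, p=d2 / d2.sum())       # D^2 sampling
        centers.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.stack(centers)

X = np.random.default_rng(1).normal(size=(500, 2))
print(kmeans_pp_seeding(X, k=3))
```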

1363

Variational Amodal Object Completion

In this paper, we propose a variational generative framework for amodal completion, referred to as AMODAL-VAE, which does not require any amodal labels at training time, as it is able to utilize widely available object instance masks.

1364

When Counterpoint Meets Chinese Folk Melodies

In this paper, we propose a reinforcement learning-based system, named FolkDuet, towards online countermelody generation for Chinese folk melodies.

1365

Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces

To this end, we propose a novel BO algorithm which expands (and shifts) the search space over iterations based on controlling the expansion rate through a \emph{hyperharmonic series}.

1366

Universal Domain Adaptation through Self Supervision

We propose a more universally applicable domain adaptation approach that can handle arbitrary category shift, called Domain Adaptive Neighborhood Clustering via Entropy optimization (DANCE).

1367

Patch2Self: Denoising Diffusion MRI with Self-Supervised Learning

We introduce a self-supervised learning method for denoising DWI data, Patch2Self, which uses the entire volume to learn a full-rank locally linear denoiser for that volume.

1368

Stochastic Normalization

In this paper, we take an alternative approach by refactoring the widely used Batch Normalization (BN) module to mitigate over-fitting.

1369

Constrained episodic reinforcement learning in concave-convex and knapsack settings

We propose an algorithm for tabular episodic reinforcement learning with constraints.

1370

On Learning Ising Models under Huber's Contamination Model

In such a setup, we aim to design statistically optimal estimators in a high-dimensional scaling in which the number of nodes p, the number of edges k and the maximal node degree d are allowed to increase to infinity as a function of the sample size n.

1371

Cross-validation Confidence Intervals for Test Error

This work develops central limit theorems for cross-validation and consistent estimators of the asymptotic variance under weak stability conditions on the learning algorithm.

1372

DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation

In this work, we propose a novel hierarchical generative network, called DeepSVG, for complex SVG icons generation and interpolation.

1373

Bayesian Attention Modules

In this paper, we propose a scalable stochastic version of attention that is easy to implement and optimize.

1374

Robustness Analysis of Non-Convex Stochastic Gradient Descent using Biased Expectations

This work proposes a novel analysis of stochastic gradient descent (SGD) for non-convex and smooth optimization.

1375

SoftFlow: Probabilistic Framework for Normalizing Flow on Manifolds

In this paper, we propose SoftFlow, a probabilistic framework for training normalizing flows on manifolds.

1376

A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network

Here, we present an alternative approach that uses meta-learning to discover plausible synaptic plasticity rules.

1377

Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough

This paper provides one answer to this question by proposing a greedy optimization based pruning method.

1378

Path Integral Based Convolution and Pooling for Graph Neural Networks

Borrowing ideas from physics, we propose a path integral based graph neural network (PAN) for classification and regression tasks on graphs.

1379

Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks

In this paper, we tackle this problem by building on a modification of the generative adversarial networks (GANs) framework.

1380

Latent Dynamic Factor Analysis of High-Dimensional Neural Recordings

We designed and implemented a novel method, Latent Dynamic Factor Analysis of High-dimensional time series (LDFA-H), which combines (a) a new approach to estimating the covariance structure among high-dimensional time series (for the observed variables) and (b) a new extension of probabilistic CCA to dynamic time series (for the latent variables).

1381

Conditioning and Processing: Techniques to Improve Information-Theoretic Generalization Bounds

In this paper, a probabilistic graphical representation of this approach is adopted and two general techniques to improve the bounds are introduced, namely conditioning and processing.

1382

Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning

Inspired by the original one hundred BPs, we propose a new benchmark Bongard-LOGO for human-level concept learning and reasoning.

1383

GAN Memory with No Forgetting

Motivated by that, we propose a GAN memory for lifelong learning, which is capable of remembering a stream of datasets via generative processes, with \emph{no} forgetting.

1384

Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games

In this work, we aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.

1385

Gaussian Gated Linear Networks

We propose the Gaussian Gated Linear Network (G-GLN), an extension to the recently proposed GLN family of deep neural networks.

1386

Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding

To cope with this problem, we propose a novel Meta Transformed Network Embedding framework (MetaTNE), which consists of three modules: (1) A \emph{structural module} provides each node a latent representation according to the graph structure.

1387

Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning

We propose Continual-MAML, an online extension of the popular MAML algorithm as a strong baseline for this scenario.

1388

Convex optimization based on global lower second-order models

In this work, we present new second-order algorithms for composite convex optimization, called Contracting-domain Newton methods.

1389

Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition

Analyzing such a regularizer and deriving a particular self-bounding regret guarantee is our key technical contribution and might be of independent interest.

1390

Relative gradient optimization of the Jacobian term in unsupervised deep learning

Deep density models have been widely used for this task, but their maximum likelihood based training requires estimating the log-determinant of the Jacobian and is computationally expensive, thus imposing a trade-off between computation and expressive power. In this work, we propose a new approach for exact training of such neural networks.

1391

Self-Supervised Visual Representation Learning from Hierarchical Grouping

We create a framework for bootstrapping visual representation learning from a primitive visual grouping capability.

1392

Optimal Variance Control of the Score-Function Gradient Estimator for Importance-Weighted Bounds

This paper introduces novel results for the score-function gradient estimator of the importance-weighted variational bound (IWAE).

1393

Explicit Regularisation in Gaussian Noise Injections

We study the regularisation induced in neural networks by Gaussian noise injections (GNIs).

1394

Numerically Solving Parametric Families of High-Dimensional Kolmogorov Partial Differential Equations via Deep Learning

We present a deep learning algorithm for the numerical solution of parametric families of high-dimensional linear Kolmogorov partial differential equations (PDEs).

1395

Finite-Time Analysis for Double Q-learning

In this paper, we provide the first non-asymptotic (i.e., finite-time) analysis for double Q-learning.

1396

Learning to Detect Objects with a 1 Megapixel Event Camera

The main reasons for this performance gap are: the lower spatial resolution of event sensors, compared to frame cameras; the lack of large-scale training datasets; the absence of well established deep learning architectures for event-based processing. In this paper, we address all these problems in the context of an event-based object detection task.

1397

End-to-End Learning and Intervention in Games

In this paper, we provide a unified framework for learning and intervention in games.

1398

Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms

Instead, we propose an algorithm based on experience replay–a popular reinforcement learning technique–that achieves a significantly better error rate.

1399

Predictive coding in balanced neural networks with noise, chaos and delays

To discover such principles, we introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated unlike in previous balanced network models, and we develop a mean-field theory of coding accuracy.

1400

Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs

We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.

1401

On the Equivalence between Online and Private Learnability beyond Binary Classification

We investigate whether this equivalence extends to multi-class classification and regression.

1402

AViD Dataset: Anonymized Videos from Diverse Countries

We introduce a new public video dataset for action recognition: Anonymized Videos from Diverse countries (AViD).

1403

Probably Approximately Correct Constrained Learning

To tackle these problems, we develop a generalization theory of constrained learning based on the probably approximately correct (PAC) learning framework.

1404

RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning

In this paper we take a systematic look at continual learning of LSTM-based models for image captioning.

1405

Decisions, Counterfactual Explanations and Strategic Behavior

In this paper, our goal is to find policies and counterfactual explanations that are optimal in terms of utility in such a strategic setting.

1406

Hierarchical Patch VAE-GAN: Generating Diverse Videos from a Single Sample

We introduce a novel patch-based variational autoencoder (VAE) which allows for a much greater diversity in generation.

1407

A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization

In this paper, we study a new model consisting of a general convex or nonconvex objectives and a variety of continuous nonconvex sparsity-inducing constraints.

1408

Reservoir Computing meets Recurrent Kernels and Structured Transforms

Our contributions are threefold: a) We rigorously establish the recurrent kernel limit of Reservoir Computing and prove its convergence. b) We test our models on chaotic time series prediction, a classic but challenging benchmark in Reservoir Computing, and show how the Recurrent Kernel is competitive and computationally efficient when the number of data points remains moderate. c) When the number of samples is too large, we leverage the success of structured Random Features for kernel approximation by introducing Structured Reservoir Computing.
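
For readers unfamiliar with the setup: in Reservoir Computing the recurrent weights are drawn at random and kept frozen, and only a linear readout on the reservoir states is ever trained. A minimal numpy sketch of such a reservoir (an echo-state-style update; sizes and the spectral radius are illustrative):

```python
import numpy as np

def run_reservoir(inputs, n_res=100, spectral_radius=0.9, seed=0):
    """Drive a fixed random reservoir with an input sequence (T, d_in)
    and return its states (T, n_res); a linear readout trained on these
    states is the only learned component."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
    W_in = rng.normal(size=(n_res, inputs.shape[1]))
    h, states = np.zeros(n_res), []
    for x in inputs:
        h = np.tanh(W @ h + W_in @ x)   # recurrent weights stay frozen
        states.append(h)
    return np.stack(states)
```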

1409

Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection

To address the above issues, we propose a Comprehensive Attention Self-Distillation (CASD) training approach for WSOD.

1410

Linear Dynamical Systems as a Core Computational Primitive

Running nonlinear RNNs for T steps takes O(T) time. Our construction, called LDStack, approximately runs them in O(log T) parallel time, and obtains arbitrarily low error via repetition.
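
The O(log T) parallel time quoted above rests on a classic observation: a linear recurrence $h_t = a_t h_{t-1} + b_t$ composes associatively, so it can be evaluated with a parallel prefix scan. Below is a minimal numpy sketch of that idea (a Hillis–Steele scan, simulated sequentially here); how the paper approximates nonlinear RNNs with stacks of such linear systems is the actual contribution and is not shown.

```python
import numpy as np

def sequential_scan(a, b):
    """Reference O(T) evaluation of h_t = a_t * h_{t-1} + b_t, with h_{-1} = 0."""
    h = np.empty_like(b)
    h[0] = b[0]
    for t in range(1, len(b)):
        h[t] = a[t] * h[t - 1] + b[t]
    return h

def combine(left, right):
    """Compose two affine maps x -> a*x + b; this operation is associative."""
    (la, lb), (ra, rb) = left, right
    return (ra * la, ra * lb + rb)

def parallel_scan(a, b):
    """Hillis-Steele inclusive scan: O(log T) rounds with T parallel workers
    (the rounds are simulated sequentially in this sketch)."""
    pairs = list(zip(a, b))
    n, step = len(pairs), 1
    while step < n:
        pairs = [pairs[i] if i < step else combine(pairs[i - step], pairs[i])
                 for i in range(n)]
        step *= 2
    return np.array([b_comp for _, b_comp in pairs])

a = np.array([0.0, 0.5, 0.9, 0.3])
b = np.array([1.0, 2.0, 0.5, 1.5])
assert np.allclose(sequential_scan(a, b), parallel_scan(a, b))
```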

1411

Ratio Trace Formulation of Wasserstein Discriminant Analysis

We reformulate the Wasserstein Discriminant Analysis (WDA) as a ratio trace problem and present an eigensolver-based algorithm to compute the discriminative subspace of WDA.

1412

PAC-Bayes Analysis Beyond the Usual Bounds

Specifically, we present a basic PAC-Bayes inequality for stochastic kernels, from which one may derive extensions of various known PAC-Bayes bounds as well as novel bounds.

1413

Few-shot Visual Reasoning with Meta-Analogical Contrastive Learning

In this work, we propose to solve such a few-shot (or low-shot) abstract visual reasoning problem by resorting to \emph{analogical reasoning}, which is a unique human ability to identify structural or relational similarity between two sets.

1414

MPNet: Masked and Permuted Pre-training for Language Understanding

In this paper, we propose MPNet, a novel pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations.

1415

Reinforcement Learning with Feedback Graphs

We study RL in the tabular MDP setting where the agent receives additional observations per step in the form of transitions samples.

1416

Zap Q-Learning With Nonlinear Function Approximation

This paper introduces a new framework for analysis of a more general class of recursive algorithms known as stochastic approximation.

1417

Lipschitz-Certifiable Training with a Tight Outer Bound

In this study, we propose a fast and scalable certifiable training algorithm based on Lipschitz analysis and interval arithmetic.

1418

Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint

We present a simple randomized greedy algorithm that achieves a $5.83$ approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms.

1419

Conformal Symplectic and Relativistic Optimization

Here we study structure-preserving discretizations for a certain class of dissipative (conformal) Hamiltonian systems, allowing us to analyze the symplectic structure of both Nesterov and heavy ball, besides providing several new insights into these methods.

1420

Bayes Consistency vs. H-Consistency: The Interplay between Surrogate Loss Functions and the Scoring Function Class

However, follow-up work has suggested this framework can be of limited value when studying H-consistency; in particular, concerns have been raised that even when the data comes from an underlying linear model, minimizing certain convex calibrated surrogates over linear scoring functions fails to recover the true model (Long and Servedio, 2013). In this paper, we investigate this apparent conundrum.

1421

Inverting Gradients – How easy is it to break privacy in federated learning?

However, by exploiting a magnitude-invariant loss along with optimization strategies based on adversarial attacks, we show that it is actually possible to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks.

1422

Dynamic allocation of limited memory resources in reinforcement learning

In this article, we propose a dynamical framework to maximize expected reward under constraints of limited resources, which we implement with a cost function that penalizes precise representations of action-values in memory, each of which may vary in its precision.

1423

CryptoNAS: Private Inference on a ReLU Budget

This paper makes the observation that existing models are ill-suited for PI and proposes a novel NAS method, named CryptoNAS, for finding and tailoring models to the needs of PI.

1424

A Stochastic Path Integral Differential EstimatoR Expectation Maximization Algorithm

This paper introduces a novel EM algorithm, called {\tt SPIDER-EM}, for inference from a training set of size $n$, $n \gg 1$.

1425

CHIP: A Hawkes Process Model for Continuous-time Networks with Scalable and Consistent Estimation

We propose the Community Hawkes Independent Pairs (CHIP) generative model for such networks.

1426

SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection

In this paper, we present a method for accelerating and structuring self-attentions: Sparse Adaptive Connection (SAC).

1427

Design Space for Graph Neural Networks

Our approach features three key innovations: (1) A general GNN design space; (2) a GNN task space with a similarity metric, so that for a given novel task/dataset, we can quickly identify/transfer the best performing architecture; (3) an efficient and effective design space evaluation method which allows insights to be distilled from a huge number of model-task combinations.

1428

HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis

In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis.

1429

Unbalanced Sobolev Descent

We introduce Unbalanced Sobolev Descent (USD), a particle descent algorithm for transporting a high dimensional source distribution to a target distribution that does not necessarily have the same mass.

1430

Identifying Mislabeled Data using the Area Under the Margin Ranking

This paper introduces a new method to identify such samples and mitigate their impact when training neural networks.

1431

Combining Deep Reinforcement Learning and Search for Imperfect-Information Games

This paper presents ReBeL, a general framework for self-play reinforcement learning and search that provably converges to a Nash equilibrium in any two-player zero-sum game.

1432

High-Throughput Synchronous Deep RL

To combine the advantages of both methods we propose High-Throughput Synchronous Deep Reinforcement Learning (HTS-RL).

1433

Contrastive Learning with Adversarial Examples

This paper addresses the problem, by introducing a new family of adversarial examples for contrastive learning and using these examples to define a new adversarial training algorithm for SSL, denoted as CLAE.

1434

Mixed Hamiltonian Monte Carlo for Mixed Discrete and Continuous Variables

In this paper, we propose mixed HMC (M-HMC) as a general framework to address this limitation.

1435

Adversarial Sparse Transformer for Time Series Forecasting

To solve these issues, in this paper, we propose a new time series forecasting model — Adversarial Sparse Transformer (AST), based on Generative Adversarial Networks (GANs).

1436

The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks

In this work, we show that these common perceptions can be completely false in the early phase of learning.

1437

CLEARER: Multi-Scale Neural Architecture Search for Image Restoration

Different from the existing labor-intensive handcrafted architecture design paradigms, we present a novel method, termed as multi-sCaLe nEural ARchitecture sEarch for image Restoration (CLEARER), which is a specifically designed neural architecture search (NAS) for image restoration.

1438

Hierarchical Gaussian Process Priors for Bayesian Neural Network Weights

To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for network weights based on unit embeddings that can flexibly encode correlated weight structures, and (ii) input-dependent versions of these weight priors that can provide convenient ways to regularize the function space through the use of kernels defined on contextual inputs.

1439

Compositional Explanations of Neurons

We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts that closely approximate neuron behavior.

1440

Calibrated Reliable Regression using Maximum Mean Discrepancy

In this paper, we are concerned with getting well-calibrated predictions in regression tasks.

1441

Directional convergence and alignment in deep learning

In this paper, we show that although the minimizers of cross-entropy and related classification losses are off at infinity, network weights learned by gradient flow converge in direction, with an immediate corollary that network predictions, training errors, and the margin distribution also converge.

1442

Functional Regularization for Representation Learning: A Unified Theoretical Perspective

We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of (Balcan and Blum, 2010) to allow learnable regularization functions.

1443

Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits

In this work, we introduce the first provably efficient PBT-style algorithm, Population-Based Bandits (PB2).

1444

Understanding Global Feature Contributions With Additive Importance Measures

We introduce two notions of predictive power (model-based and universal) and formalize this approach with a framework of additive importance measures, which unifies numerous methods in the literature.

1445

Online Non-Convex Optimization with Imperfect Feedback

We consider the problem of online learning with non-convex losses. In terms of feedback, we assume that the learner observes – or otherwise constructs – an inexact model for the loss function encountered at each stage, and we propose a mixed-strategy learning policy based on dual averaging.

1446

Co-Tuning for Transfer Learning

To \textit{fully} transfer pre-trained models, we propose a two-step framework named \textbf{Co-Tuning}: (i) learn the relationship between source categories and target categories from the pre-trained model and calibrated predictions; (ii) target labels (one-hot labels), as well as source labels (probabilistic labels) translated by the category relationship, collaboratively supervise the fine-tuning process.
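
A minimal sketch of what step (ii)'s collaborative supervision could look like, assuming step (i) has produced a matrix `relationship` whose row c is a distribution over source categories for target category c; both the matrix here and the weighting `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def co_tuning_loss(target_logits, source_logits, y_target, relationship, lam=1.0):
    """Cross-entropy on one-hot target labels plus cross-entropy against
    probabilistic source labels translated through the category relationship."""
    n = len(y_target)
    ce_target = -log_softmax(target_logits)[np.arange(n), y_target].mean()
    soft_source = relationship[y_target]            # translated soft labels
    ce_source = -(soft_source * log_softmax(source_logits)).sum(axis=1).mean()
    return ce_target + lam * ce_source
```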

1447

Multifaceted Uncertainty Estimation for Label-Efficient Deep Learning

We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data.

1448

Continuous Surface Embeddings

In this work, we focus on the task of learning and representing dense correspondences in deformable object categories.

1449

Succinct and Robust Multi-Agent Communication With Temporal Message Control

In this paper, we present \textit{Temporal Message Control} (TMC), a simple yet effective approach for achieving succinct and robust communication in MARL.

1450

Big Bird: Transformers for Longer Sequences

To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear.
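
As a rough illustration of how sparsity removes the quadratic dependency: if each query attends only to a local window, a few global tokens, and a few random positions, the number of attended pairs grows linearly with sequence length. The sketch below builds such a boolean mask; the actual implementation is block-sparse and considerably more efficient:

```python
import numpy as np

def sparse_attention_mask(seq_len, window=3, n_global=2, n_random=2, seed=0):
    """mask[i, j] = True means query i may attend to key j.
    Per query: O(window + n_global + n_random) keys, so O(seq_len) total."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        mask[i, max(0, i - window):i + window + 1] = True     # sliding window
        mask[i, rng.choice(seq_len, size=n_random)] = True    # random links
    mask[:, :n_global] = True   # every token attends to the global tokens
    mask[:n_global, :] = True   # global tokens attend everywhere
    return mask
```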

1451

Neural Execution Engines: Learning to Execute Subroutines

To address the issue, we propose a learned conditional masking mechanism, which enables the model to strongly generalize far outside of its training range with near-perfect accuracy on a variety of algorithms.

1452

Random Reshuffling: Simple Analysis with Vast Improvements

We argue through theory and experiments that the new variance type gives an additional justification of the superior performance of RR. To go beyond strong convexity, we present several results for non-strongly convex and non-convex objectives.

1453

Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors

In this work we propose a framework for visual prediction and planning that is able to overcome both of these limitations.

1454

Statistical Optimal Transport posed as Learning Kernel Embedding

This work takes the novel approach of posing statistical OT as that of learning the transport plan’s kernel mean embedding from sample based estimates of marginal embeddings.

1455

Dual-Resolution Correspondence Networks

In this work, we introduce Dual-Resolution Correspondence Networks (DualRC-Net), to obtain pixel-wise correspondences in a coarse-to-fine manner.

1456

Advances in Black-Box VI: Normalizing Flows, Importance Weighting, and Optimization

In this paper, we postulate that black-box VI is best addressed through a careful combination of numerous algorithmic components.

1457

f-Divergence Variational Inference

This paper introduces the f-divergence variational inference (f-VI) that generalizes variational inference to all f-divergences.

1458

Unfolding recurrence by Green's functions for optimized reservoir computing

The purpose of this work is to present a solvable recurrent network model that links to feed-forward networks.

1459

The Dilemma of TriHard Loss and an Element-Weighted TriHard Loss for Person Re-Identification

Several methods to alleviate the dilemma are designed and tested. In addition, an element-weighted TriHard loss is proposed to selectively enlarge the distance between those elements of the feature vectors that represent the differing characteristics between anchors and hard negative samples.

1460

Disentangling by Subspace Diffusion

We present a novel nonparametric algorithm for symmetry-based disentangling of data manifolds, the Geometric Manifold Component Estimator (GEOMANCER).

1461

Towards Neural Programming Interfaces

We recast the problem of controlling natural language generation as that of learning to interface with a pretrained language model, just as Application Programming Interfaces (APIs) control the behavior of programs by altering hyperparameters.

1462

Discovering Symbolic Models from Deep Learning with Inductive Biases

We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases.

1463

Real World Games Look Like Spinning Tops

This paper investigates the geometrical properties of real world games (e.g. Tic-Tac-Toe, Go, StarCraft II).

1464

Cooperative Heterogeneous Deep Reinforcement Learning

In this work, we present a Cooperative Heterogeneous Deep Reinforcement Learning (CHDRL) framework that can learn a policy by integrating the advantages of heterogeneous agents.

1465

Mitigating Forgetting in Online Continual Learning via Instance-Aware Parameterization

To mitigate this, we leverage the concept of “instance awareness” in the neural network, where each data instance is classified by a path in the network searched by the controller from a meta-graph.

1466

ImpatientCapsAndRuns: Approximately Optimal Algorithm Configuration from an Infinite Pool

Inspired by this idea, we introduce ImpatientCapsAndRuns, which quickly discards less promising configurations, significantly speeding up the search procedure compared to previous algorithms with theoretical guarantees, while still achieving optimal runtime up to logarithmic factors under mild assumptions.

1467

Dense Correspondences between Human Bodies via Learning Transformation Synchronization on Graphs

We introduce an approach for establishing dense correspondences between partial scans of human models and a complete template model.

1468

Reasoning about Uncertainties in Discrete-Time Dynamical Systems using Polynomial Forms.

In this paper, we propose polynomial forms to represent distributions of state variables over time for discrete-time stochastic dynamical systems.

1469

Applications of Common Entropy for Causal Inference

To efficiently compute common entropy, we propose an iterative algorithm that can be used to discover the trade-off between the entropy of the latent variable and the conditional mutual information of the observed variables.

1470

SGD with shuffling: optimal rates without component convexity and large epoch requirements

Specifically, depending on how the indices of the finite-sum are shuffled, we consider the RandomShuffle (shuffle at the beginning of each epoch) and SingleShuffle (shuffle only once) algorithms.
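
The two schemes differ only in where the permutation is drawn, as this sketch makes explicit (`grads` is a list of per-component gradient functions of the finite sum):

```python
import random

def shuffled_sgd(grads, x0, lr, epochs, scheme="RandomShuffle", seed=0):
    """SGD over a finite sum with shuffled (rather than i.i.d.) indices."""
    rng = random.Random(seed)
    order = list(range(len(grads)))
    if scheme == "SingleShuffle":
        rng.shuffle(order)              # one permutation, reused every epoch
    x = x0
    for _ in range(epochs):
        if scheme == "RandomShuffle":
            rng.shuffle(order)          # fresh permutation at each epoch
        for i in order:
            x = x - lr * grads[i](x)
    return x
```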

1471

Unsupervised Joint k-node Graph Representations with Compositional Energy-Based Models

We propose MHM-GNN, an inductive unsupervised graph representation approach that combines joint k-node representations with energy-based models (hypergraph Markov networks) and GNNs.

1472

Neural Manifold Ordinary Differential Equations

In this paper, we study normalizing flows on manifolds.

1473

CO-Optimal Transport

To circumvent this limitation, we propose a novel OT problem, named COOT for CO-Optimal Transport, that simultaneously optimizes two transport maps between both samples and features, contrary to other approaches that either discard the individual features by focusing on pairwise distances between samples or need to model explicitly the relations between them.

1474

Continuous Meta-Learning without Tasks

In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with unsegmented time series data.

1475

A mathematical theory of cooperative communication

Through a connection to the theory of optimal transport, we establish a mathematical framework for cooperative communication.

1476

Penalized Langevin dynamics with vanishing penalty for smooth and log-concave targets

We study the problem of sampling from a probability distribution on $\mathbb R^p$ defined via a convex and smooth potential function.

1477

Learning Invariances in Neural Networks from Training Data

We show how to learn invariances by parameterizing a distribution over augmentations and optimizing the training loss simultaneously with respect to the network parameters and augmentation parameters.

1478

A Finite-Time Analysis of Two Time-Scale Actor-Critic Methods

In this work, we provide a non-asymptotic analysis for two time-scale actor-critic methods under non-i.i.d. setting.

1479

Pruning Filter in Filter

To combine the strengths of both methods, we propose to prune the filter in the filter.

1480

Learning to Mutate with Hypergradient Guided Population

In this study, we propose a hyperparameter mutation (HPM) algorithm to explicitly consider a learnable trade-off between using global and local search, where we adopt a population of student models to simultaneously explore the hyperparameter space guided by hypergradient and leverage a teacher model to mutate the underperforming students by exploiting the top ones.

1481

A convex optimization formulation for multivariate regression

In this article, we propose a convex optimization formulation for high-dimensional multivariate linear regression under a general error covariance structure.

1482

Online Meta-Critic Learning for Off-Policy Actor-Critic Methods

In this paper, we introduce a flexible and augmented meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning.

1483

The All-or-Nothing Phenomenon in Sparse Tensor PCA

We study the statistical problem of estimating a rank-one sparse tensor corrupted by additive gaussian noise, a Gaussian additive model also known as sparse tensor PCA.

1484

Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis

In this work, we propose SED, a neural program generation framework that incorporates synthesis, execution, and debugging stages.

1485

ARMA Nets: Expanding Receptive Field for Dense Prediction

In this work, we propose to replace any traditional convolutional layer with an autoregressive moving-average (ARMA) layer, a novel module with an adjustable receptive field controlled by the learnable autoregressive coefficients.

1486

Diversity-Guided Multi-Objective Bayesian Optimization With Batch Evaluations

We propose a novel multi-objective Bayesian optimization algorithm that iteratively selects the best batch of samples to be evaluated in parallel.

1487

SOLOv2: Dynamic and Fast Instance Segmentation

In this work, we design a simple, direct, and fast framework for instance segmentation with strong performance.

1488

Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization

This paper shows that with a {\em double over-parameterization} for both the low-rank matrix and sparse corruption, gradient descent with {\em discrepant learning rates} provably recovers the underlying matrix even without prior knowledge of either the rank of the matrix or the sparsity of the corruption.

1489

Axioms for Learning from Pairwise Comparisons

We show that a large class of random utility models (including the Thurstone–Mosteller Model), when estimated using the MLE, satisfy a Pareto efficiency condition.

1490

Continuous Regularized Wasserstein Barycenters

Leveraging a new dual formulation for the regularized Wasserstein barycenter problem, we introduce a stochastic algorithm that constructs a continuous approximation of the barycenter.

1491

Spectral Temporal Graph Neural Network for Multivariate Time-series Forecasting

In this paper, we propose Spectral Temporal Graph Neural Network (StemGNN) to further improve the accuracy of multivariate time-series forecasting.

1492

Online Multitask Learning with Long-Term Memory

We provide an algorithm that predicts on each trial in time linear in the number of hypotheses when the hypothesis class is finite.

1493

Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies

In this paper, we propose a novel Proxy-based deep Graph Metric Learning (ProxyGML) approach from the perspective of graph classification, which uses fewer proxies yet achieves better comprehensive performance.

1494

Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting

In this paper, we argue that learning node-specific patterns is essential for traffic forecasting, while a pre-defined graph is avoidable.

1495

On Reward-Free Reinforcement Learning with Linear Function Approximation

In this work, we give both positive and negative results for reward-free RL with linear function approximation.

1496

Robustness of Community Detection to Random Geometric Perturbations

We consider the stochastic block model where connection between vertices is perturbed by some latent (and unobserved) random geometric graph.

1497

Learning outside the Black-Box: The pursuit of interpretable models

This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.

1498

Breaking Reversibility Accelerates Langevin Dynamics for Non-Convex Optimization

We study two variants that are based on non-reversible Langevin diffusions: the underdamped Langevin dynamics (ULD) and the Langevin dynamics with a non-symmetric drift (NLD).

1499

Robust large-margin learning in hyperbolic space

In this paper, we present, to our knowledge, the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space.

1500

Replica-Exchange Nosé-Hoover Dynamics for Bayesian Learning on Large Datasets

In this paper, we present a new practical method for Bayesian learning that can rapidly draw representative samples from complex posterior distributions with multiple isolated modes in the presence of mini-batch noise.

1501

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach

The goal of our work is to produce networks which both perform well at few-shot classification tasks and are simultaneously robust to adversarial examples.

1502

Neural Anisotropy Directions

In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers.

1503

Digraph Inception Convolutional Networks

In this paper, we theoretically extend spectral-based graph convolution to digraphs and derive a simplified form using personalized PageRank.

1504

PAC-Bayesian Bound for the Conditional Value at Risk

This paper presents a generalization bound for learning algorithms that minimize the $\textsc{CVaR}$ of the empirical loss.

1505

Stochastic Stein Discrepancies

To address this deficiency, we show that stochastic Stein discrepancies (SSDs) based on subsampled approximations of the Stein operator inherit the convergence control properties of standard SDs with probability 1.

1506

On the Role of Sparsity and DAG Constraints for Learning Linear DAGs

In this paper, we study the asymptotic role of the sparsity and DAG constraints for learning DAG models in the linear Gaussian and non-Gaussian cases, and investigate their usefulness in the finite sample regime.

1507

Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search

To alleviate this problem, we present a simple yet effective architecture distillation method.

1508

Fair Multiple Decision Making Through Soft Interventions

In this paper, we propose an approach that learns multiple classifiers and achieves fairness for all of them simultaneously, by treating each decision model as a soft intervention and inferring the post-intervention distributions to formulate the loss function as well as the fairness constraints.

1509

Representation Learning for Integrating Multi-domain Outcomes to Optimize Individualized Treatment

To address these challenges, we propose an integrated learning framework that can simultaneously learn patients’ underlying mental states and recommend optimal treatments for each individual.

1510

Learning to Play No-Press Diplomacy with Best Response Policy Iteration

We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves.

1511

Inverse Learning of Symmetries

We propose to learn the symmetry transformation with a model consisting of two latent subspaces, where the first subspace captures the target and the second subspace the remaining invariant information.

1512

DiffGCN: Graph Convolutional Networks via Differential Operators and Algebraic Multigrid Pooling

In this work we propose novel approaches for graph convolution, pooling and unpooling, inspired from finite differences and algebraic multigrid frameworks.

1513

Distributed Newton Can Communicate Less and Resist Byzantine Workers

We propose an iterative approximate Newton-type algorithm, where the worker machines communicate \emph{only once} per iteration with the central machine.

1514

Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees

In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.

1515

Effective Diversity in Population Based Reinforcement Learning

In this paper, we introduce an approach to optimize all members of a population simultaneously.

1516

Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data

We propose a novel unsupervised generative model that learns to disentangle object identity from other low-level aspects in class-imbalanced data.

1517

Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces

We show how to combine these techniques to yield a reinforcement learning algorithm that approximates a policy gradient by finding trajectories that optimize a random objective.

1518

Hybrid Models for Learning to Branch

In this work, we ask two key questions. First, in a more realistic setting where only a CPU is available, is the GNN model still competitive? Second, can we devise an alternate computationally inexpensive model that retains the predictive power of the GNN architecture?

1519

WoodFisher: Efficient Second-Order Approximation for Neural Network Compression

Our work considers this question, examines the accuracy of existing approaches, and proposes a method called WoodFisher to compute a faithful and efficient estimate of the inverse Hessian.

1520

Bi-level Score Matching for Learning Energy-based Latent Variable Models

This paper presents a bi-level score matching (BiSM) method to learn EBLVMs with general structures by reformulating SM as a bi-level optimization problem.

1521

Counterfactual Contrastive Learning for Weakly-Supervised Vision-Language Grounding

In this paper, we propose a novel Counterfactual Contrastive Learning (CCL) to develop sufficient contrastive training between counterfactual positive and negative results, which are based on robust and destructive counterfactual transformations.

1522

Decision trees as partitioning machines to characterize their generalization properties

We introduce the notion of partitioning function, and we relate it to the growth function and to the VC dimension.

1523

Learning to Prove Theorems by Learning to Generate Theorems

To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.

1524

3D Self-Supervised Methods for Medical Imaging

In this work, we leverage these techniques, and we propose 3D versions for five different self-supervised methods, in the form of proxy tasks.

1525

Bayesian filtering unifies adaptive and non-adaptive neural network optimization methods

We formulate the problem of neural network optimization as Bayesian filtering, where the observations are backpropagated gradients.

1526

Worst-Case Analysis for Randomly Collected Data

We introduce a framework for statistical estimation that leverages knowledge of how samples are collected but makes no distributional assumptions on the data values.

1527

Truthful Data Acquisition via Peer Prediction

We consider the problem of purchasing data for machine learning or statistical estimation.

1528

Learning Robust Decision Policies from Observational Data

In this paper, we develop a method for learning policies that reduce tails of the cost distribution at a specified level and, moreover, provide a statistically valid bound on the cost of each decision.

1529

Byzantine Resilient Distributed Multi-Task Learning

In this paper, we present an approach for Byzantine resilient distributed multi-task learning.

1530

Reinforcement Learning in Factored MDPs: Oracle-Efficient Algorithms and Tighter Regret Bounds for the Non-Episodic Setting

We propose two near-optimal and oracle-efficient algorithms for FMDPs.

1531

Improving model calibration with accuracy versus uncertainty optimization

We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.

1532

The Convolution Exponential and Generalized Sylvester Flows

This paper introduces a new method to build linear flows, by taking the exponential of a linear transformation.
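
What makes exponentials of linear maps attractive for flows: exp(W) is invertible for any W, its inverse is exp(-W), and the log-determinant of the Jacobian is just the trace, since det(exp(W)) = exp(tr(W)). A dense-matrix sketch of these identities (the paper applies the construction to convolutions, which this sketch does not cover):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(4, 4))
x = rng.normal(size=4)

z = expm(W) @ x                      # forward pass of the linear flow
x_rec = expm(-W) @ z                 # exact inverse
logdet = np.trace(W)                 # log|det Jacobian|, for free

assert np.allclose(x, x_rec)
assert np.allclose(np.log(np.linalg.det(expm(W))), logdet)
```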

1533

An Improved Analysis of Stochastic Gradient Descent with Momentum

In this work, we show that SGDM converges as fast as SGD for smooth objectives under both strongly convex and nonconvex settings.

1534

Precise expressions for random projections: Low-rank approximation and randomized Newton

We exploit recent developments in the spectral analysis of random matrices to develop novel techniques that provide provably accurate expressions for the expected value of random projection matrices obtained via sketching.

1535

The MAGICAL Benchmark for Robust Imitation

This paper presents the MAGICAL benchmark suite, which permits systematic evaluation of generalisation by quantifying robustness to different kinds of distribution shift that an IL algorithm is likely to encounter in practice.

1536

X-CAL: Explicit Calibration for Survival Analysis

We develop explicit calibration (X-CAL), which turns D-CALIBRATION into a differentiable objective that can be used in survival modeling alongside maximum likelihood estimation and other objectives.

1537

Decentralized Accelerated Proximal Gradient Descent

In this paper, we study the decentralized composite optimization problem with a non-smooth regularization term.

1538

Making Non-Stochastic Control (Almost) as Easy as Stochastic

In this paper, we show that the same regret rate (against a suitable benchmark) is attainable even in the considerably more general non-stochastic control model, where the system is driven by \emph{arbitrary adversarial} noise \citep{agarwal2019online}.

1539

BERT Loses Patience: Fast and Robust Inference with Early Exit

In this paper, we propose Patience-based Early Exit, a straightforward yet effective inference method that can be used as a plug-and-play technique to simultaneously improve the efficiency and robustness of a pretrained language model (PLM).
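
The patience mechanism, as commonly described, attaches a classifier to every layer and stops as soon as the prediction has stayed unchanged for a fixed number of consecutive layers, in analogy with early stopping during training. A minimal sketch (`layer_logits` stands in for the internal classifiers' outputs):

```python
def patience_based_early_exit(layer_logits, patience=2):
    """Exit once the argmax prediction is identical for `patience`
    consecutive internal classifiers; returns (prediction, layers used)."""
    prev, streak = None, 0
    for depth, logits in enumerate(layer_logits):
        pred = max(range(len(logits)), key=logits.__getitem__)  # argmax
        streak = streak + 1 if pred == prev else 1
        prev = pred
        if streak >= patience:
            return pred, depth + 1          # early exit
    return prev, len(layer_logits)          # fell through to the last layer

print(patience_based_early_exit([[0.1, 0.9], [0.2, 0.8], [0.3, 0.7], [0.9, 0.1]]))
```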

1540

Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization

We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees.

1541

BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning

We propose a new algorithm, Best-Action Imitation Learning (BAIL), which strives for both simplicity and performance.

1542

Regularizing Towards Permutation Invariance In Recurrent Models

We show that RNNs can be regularized towards permutation invariance, and that this can result in compact models, as compared to non-recursive architectures.

1543

What Did You Think Would Happen? Explaining Agent Behaviour through Intended Outcomes

We present a novel form of explanation for Reinforcement Learning, based around the notion of intended outcome.

1544

Batch normalization provably avoids ranks collapse for randomly initialised deep networks

In this work we highlight the fact that batch normalization is an effective strategy to avoid rank collapse for both linear and ReLU networks.

1545

Choice Bandits

We propose an algorithm for choice bandits, termed Winner Beats All (WBA), with distribution dependent $O(\log T)$ regret bound under all these choice models.

1546

What if Neural Networks had SVDs?

We present an algorithm that is fast enough to speed up several matrix operations.

1547

A Matrix Chernoff Bound for Markov Chains and Its Application to Co-occurrence Matrices

We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a regular (aperiodic and irreducible) finite Markov chain.

1548

CoMIR: Contrastive Multimodal Image Representation for Registration

We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations).

1549

Ensuring Fairness Beyond the Training Data

In this work, we develop classifiers that are fair not only with respect to the training distribution but also for a class of distributions that are weighted perturbations of the training samples.

1550

How do fair decisions fare in long-term qualification?

In this work, we study the dynamics of population qualification and algorithmic decisions under a partially observed Markov decision problem setting.

1551

Pre-training via Paraphrasing

We introduce MARGE, a pre-trained sequence-to-sequence model learned with an unsupervised multi-lingual multi-document paraphrasing objective.

1552

GCN meets GPU: Decoupling "When to Sample" from "How to Sample"

By decoupling the frequency of sampling from the sampling strategy, we propose LazyGCN, a general yet effective framework that can be integrated with any sampling strategy to substantially improve the training time.

1553

Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks

This paper proposes such a technique to learn both types of tasks in the same network.

1554

All your loss are belong to Bayes

In this paper, we rely on a broader view of proper composite losses and a recent construct from information geometry, source functions, whose fitting alleviates constraints faced by canonical links.

1555

HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks

Here, we present HAWQ-V2 which addresses these shortcomings. For (i), we theoretically prove that the right sensitivity metric is the average Hessian trace, instead of just top Hessian eigenvalue. For (ii), we develop a Pareto frontier based method for automatic bit precision selection of different layers without any manual intervention. For (iii), we develop the first Hessian based analysis for mixed-precision activation quantization, which is very beneficial for object detection.

1556

Sample-Efficient Reinforcement Learning of Undercomplete POMDPs

In particular, we present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works.

1557

Non-Convex SGD Learns Halfspaces with Adversarial Label Noise

We study the problem of agnostically learning homogeneous halfspaces in the distribution-specific PAC model.

1558

A Tight Lower Bound and Efficient Reduction for Swap Regret

Besides, we present a computationally efficient reduction method that converts no-external-regret algorithms to no-swap-regret algorithms.

1559

DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction

In this paper, we study how RL methods based on bootstrapping-based Q-learning can suffer from a pathological interaction between function approximation and the data distribution used to train the Q-function: with standard supervised learning, online data collection should induce corrective feedback, where new data corrects mistakes in old predictions.

1560

OTLDA: A Geometry-aware Optimal Transport Approach for Topic Modeling

We present an optimal transport framework for learning topics from textual data.

1561

Measuring Robustness to Natural Distribution Shifts in Image Classification

We study how robust current ImageNet models are to distribution shifts arising from natural variations in datasets.

1562

Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference

We propose a general Bayesian framework that can augment labeled data with unlabeled data to produce more accurate and lower-variance estimates compared to methods based on labeled data alone.

1563

RandAugment: Practical Automated Data Augmentation with a Reduced Search Space

In this work, we rethink the process of designing automated data augmentation strategies.

1564

Asymptotic normality and confidence intervals for derivatives of 2-layers neural network in the random features model

We show that a weighted average of the derivatives of the trained NN at the observed data is asymptotically normal, in a setting with Lipschitz activation functions in a linear regression response with Gaussian features under possibly non-linear perturbations.

1565

DisARM: An Antithetic Gradient Estimator for Binary Latent Variables

We show that ARM can be improved by analytically integrating out the randomness introduced by the augmentation process, guaranteeing substantial variance reduction. Our estimator, DisARM, is simple to implement and has the same computational cost as ARM.

1566

Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings

We propose a framework that lifts the capabilities of graph convolutional networks (GCNs) to scenarios where no input graph is given and increases their robustness to adversarial attacks.

1567

Supervised Contrastive Learning

In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information.
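
The widely cited form of the resulting loss treats every same-class sample in the batch as a positive for the anchor. A minimal numpy sketch, assuming L2-normalized embeddings `z` (the temperature is illustrative):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss (sketch). z: (n, d) L2-normalized
    embeddings; positives of anchor i are all j != i with labels[j] == labels[i]."""
    n = z.shape[0]
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)   # a sample is never its own contrast
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    total = 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if pos:
            total += -log_prob[i, pos].mean()
    return total / n
```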

1568

Learning Optimal Representations with the Decodable Information Bottleneck

We propose the Decodable Information Bottleneck (DIB) that considers information retention and compression from the perspective of the desired predictive family.

1569

Meta-trained agents implement Bayes-optimal agents

Inspired by ideas from theoretical computer science, we show that meta-learned and Bayes-optimal agents not only behave alike, but they even share a similar computational structure, in the sense that one agent system can approximately simulate the other.

1570

Learning Agent Representations for Ice Hockey

We introduce a novel player representation via player generation framework where a variational encoder embeds player information with latent variables.

1571

Weak Form Generalized Hamiltonian Learning

We present a method for learning generalized Hamiltonian decompositions of ordinary differential equations given a set of noisy time series measurements.

1572

Neural Non-Rigid Tracking

We introduce a novel, end-to-end learnable, differentiable non-rigid tracker that enables state-of-the-art non-rigid reconstruction by a learned robust optimization.

1573

Collegial Ensembles

In this work, we investigate a form of over-parameterization achieved through ensembling, where we define collegial ensembles (CE) as the aggregation of multiple independent models with identical architectures, trained as a single model.

1574

ICNet: Intra-saliency Correlation Network for Co-Saliency Detection

In this paper, we propose an Intra-saliency Correlation Network (ICNet) to extract intra-saliency cues from the single image saliency maps (SISMs) predicted by any off-the-shelf SOD method, and obtain inter-saliency cues by correlation techniques.

1575

Improved Variational Bayesian Phylogenetic Inference with Normalizing Flows

In this paper, we propose a new type of VBPI, VBPI-NF, as a first step to empower phylogenetic posterior estimation with deep learning techniques.

1576

Deep Metric Learning with Spherical Embedding

In this paper, we first investigate the effect of the embedding norm for deep metric learning with angular distance, and then propose a spherical embedding constraint (SEC) to regularize the distribution of the norms.

1577

Preference-based Reinforcement Learning with Finite-Time Guarantees

If preferences are stochastic, and the preference probability relates to the hidden reward values, we present algorithms for PbRL, both with and without a simulator, that are able to identify the best policy up to accuracy $\varepsilon$ with high probability.

1578

AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients

We propose AdaBelief to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability.
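
The core mechanism is to scale steps by the "belief" in the current gradient: the second-moment accumulator tracks the squared deviation of the gradient from its running mean, rather than the squared gradient itself as in Adam. A minimal numpy sketch of one update (weight decay and other refinements of the full method are omitted):

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief-style step: small deviation (grad - m) means high
    'belief' in the gradient, hence a larger effective step."""
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2    # belief term
    m_hat = m / (1 - beta1 ** t)                     # bias correction
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s
```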

1579

Interpretable Sequence Learning for Covid-19 Forecasting

We propose a novel approach that integrates machine learning into compartmental disease modeling (e.g., SEIR) to predict the progression of COVID-19.

1580

Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding

Under this less pessimistic model of one-decision confounding, we propose an efficient loss-minimization-based procedure for computing worst-case bounds, and prove its statistical consistency.

1581

Modern Hopfield Networks and Attention for Immune Repertoire Classification

In this work, we present our novel method DeepRC that integrates transformer-like attention, or equivalently modern Hopfield networks, into deep learning architectures for massive MIL such as immune repertoire classification.

1582

One Ring to Rule Them All: Certifiably Robust Geometric Perception with Outliers

We propose the first general and practical framework to design certifiable algorithms for robust geometric perception in the presence of a large amount of outliers.

1583

Task-Robust Model-Agnostic Meta-Learning

We present an algorithm to solve the proposed min-max problem, and show that it converges to an $\epsilon$-accurate point at the optimal rate of $\mathcal{O}(1/\epsilon^2)$ in the convex setting and to an $(\epsilon, \delta)$-stationary point at the rate of $\mathcal{O}(\max\{1/\epsilon^5, 1/\delta^5\})$ in nonconvex settings.

1584

R-learning in actor-critic model offers a biologically relevant mechanism for sequential decision-making

In this work, we build interpretable deep actor-critic models to show that R-learning – a reinforcement learning (RL) approach balancing short-term and long-term rewards – is consistent with the way real-life agents may learn to make stay-or-leave decisions.

1585

Revisiting Frank-Wolfe for Polytopes: Strict Complementarity and Sparsity

We then revisit the addition of a strict complementarity assumption already considered in Wolfe’s classical book \cite{Wolfe1970}, and prove that under this condition, the Frank-Wolfe method with away-steps and line-search converges linearly with rate that depends explicitly only on the dimension of the optimal face, hence providing a significant improvement in case the optimal solution is sparse.

1586

Fast Convergence of Langevin Dynamics on Manifold: Geodesics meet Log-Sobolev

Our work generalizes the results of \cite{VW19} to the case where $f$ is defined on a manifold $M$ rather than $\mathbb{R}^n$.

1587

Tensor Completion Made Practical

In this paper we introduce a new variant of alternating minimization, which in turn is inspired by understanding how the progress measures that guide convergence of alternating minimization in the matrix setting need to be adapted to the tensor setting.

1588

Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks

In this study, we derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.

1589

Content Provider Dynamics and Coordination in Recommendation Ecosystems

In this work, we investigate the dynamics of content creation using a game-theoretic lens.

1590

Almost Surely Stable Deep Dynamics

We introduce a method for learning provably stable deep neural network based dynamic models from observed data.

1591

Experimental design for MRI by greedy policy search

We propose to learn experimental design strategies for accelerated MRI with policy gradient methods.

1592

Expert-Supervised Reinforcement Learning for Offline Policy Learning and Evaluation

To overcome these issues, we propose an Expert-Supervised RL (ESRL) framework which uses uncertainty quantification for offline policy learning.

1593

ColdGANs: Taming Language GANs with Cautious Sampling Strategies

In this work, we show how the most popular sampling method results in unstable training for language GANs.

1594

Hedging in games: Faster convergence of external and swap regrets

We consider the setting where players run the Hedge algorithm or its optimistic variant \cite{syrgkanis2015fast} to play an n-action game repeatedly for T rounds.
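
For reference, the Hedge (multiplicative-weights) update the players are assumed to run, in a minimal NumPy sketch with a fixed learning rate eta:

```python
import numpy as np

def hedge_update(weights, losses, eta=0.1):
    """Multiplicative-weights (Hedge) update: exponentially downweight lossy actions."""
    weights = weights * np.exp(-eta * losses)
    return weights / weights.sum()   # renormalize to a probability distribution

# usage: start uniform over n actions, update with the observed loss vector each round
p = np.ones(4) / 4
p = hedge_update(p, np.array([0.2, 1.0, 0.0, 0.5]))
```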

1595

The Origins and Prevalence of Texture Bias in Convolutional Neural Networks

By taking less aggressive random crops at training time and applying simple, naturalistic augmentation (color distortion, noise, and blur), we train models that classify ambiguous images by shape a majority of the time, and outperform baselines on out-of-distribution test sets.

1596

Time-Reversal Symmetric ODE Network

In this paper, we propose a novel loss function that measures how well our ordinary differential equation (ODE) networks comply with this time-reversal symmetry; it is formally defined by the discrepancy in the time evolutions of ODE networks between forward and backward dynamics.

1597

Provable Overlapping Community Detection in Weighted Graphs

In this paper, we provide a provable method to detect overlapping communities in weighted graphs without explicitly making the pure nodes assumption.

1598

Fast Unbalanced Optimal Transport on a Tree

This study is the first to examine the time complexity of unbalanced optimal transport problems from an algorithmic perspective.

1599

Acceleration with a Ball Optimization Oracle

Perhaps surprisingly, this is not optimal: we design an accelerated algorithm which attains an $\epsilon$-approximate minimizer with roughly $r^{-2/3}\log(1/\epsilon)$ oracle queries, and give a matching lower bound.

1600

Avoiding Side Effects By Considering Future Tasks

To alleviate the burden on the reward designer, we propose an algorithm to automatically generate an auxiliary reward function that penalizes side effects.

1601

Handling Missing Data with Graph Representation Learning

Here we propose GRAPE, a framework for feature imputation as well as label prediction.

1602

Improving Auto-Augment via Augmentation-Wise Weight Sharing

In this paper, we dive into the dynamics of augmented training of the model, which inspires us to design a powerful and efficient proxy task based on Augmentation-Wise Weight Sharing (AWS) to form a fast yet accurate evaluation process.

1603

MMA Regularization: Decorrelating Weights of Neural Networks by Maximizing the Minimal Angles

Inspired by the well-known Tammes problem, we propose a novel diversity regularization method to address this issue, which makes the normalized weight vectors of neurons or filters distributed on a hypersphere as uniformly as possible, through maximizing the minimal pairwise angles (MMA).
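
A rough sketch of this regularizer under one natural reading: normalize the weight vectors onto the hypersphere and penalize each vector's largest pairwise cosine similarity, so that minimizing the penalty maximizes the minimal pairwise angles (names are illustrative):

```python
import numpy as np

def mma_regularizer(W):
    """W: (n, d) weight vectors. Penalize the max off-diagonal cosine similarity per row."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # project onto the unit hypersphere
    cos = Wn @ Wn.T                                    # pairwise cosine similarities
    np.fill_diagonal(cos, -1.0)                        # ignore self-similarity
    return cos.max(axis=1).mean()                      # minimizing this spreads the vectors apart
```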

1604

HRN: A Holistic Approach to One Class Learning

This paper proposes an entirely different approach based on a novel regularization, called holistic regularization (or H-regularization), which enables the system to consider the data holistically, rather than producing a model biased towards some features.

1605

The Generalized Lasso with Nonlinear Observations and Generative Priors

In this paper, we study the problem of signal estimation from noisy non-linear measurements when the unknown $n$-dimensional signal is in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs.

1606

Fair regression via plug-in estimator and recalibration with statistical guarantees

We study the problem of learning an optimal regression function subject to a fairness constraint.

1607

Modeling Shared responses in Neuroimaging Studies through MultiView ICA

We propose a novel MultiView Independent Component Analysis (ICA) model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.

1608

Efficient Planning in Large MDPs with Weak Linear Function Approximation

We consider the planning problem in MDPs using linear value function approximation with only weak requirements: low approximation error for the optimal value function, and a small set of “core” states whose features span those of other states.

1609

Efficient Learning of Generative Models via Finite-Difference Score Matching

To improve computing efficiency, we rewrite the SM objective and its variants in terms of directional derivatives, and present a generic strategy to efficiently approximate any-order directional derivative with finite difference~(FD).
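
For reference, the central finite-difference approximation of a directional derivative that such a strategy builds on, in a minimal sketch:

```python
import numpy as np

def directional_derivative_fd(f, x, v, delta=1e-4):
    """Central difference: (f(x + delta*v) - f(x - delta*v)) / (2*delta) ~ v . grad f(x)."""
    return (f(x + delta * v) - f(x - delta * v)) / (2.0 * delta)

# usage: f(x) = ||x||^2 has gradient 2x, so the value below approximates v . 2x = -4
x = np.array([1.0, -2.0]); v = np.array([0.0, 1.0])
print(directional_derivative_fd(lambda z: (z ** 2).sum(), x, v))
```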

1610

Semialgebraic Optimization for Lipschitz Constants of ReLU Networks

We introduce a semidefinite programming hierarchy to estimate the global and local Lipschitz constant of a multiple layer deep neural network.

1611

Linear-Sample Learning of Low-Rank Distributions

For all of them, we show that learning $k\times k$, rank-$r$, matrices to normalized $L_1$ distance $\epsilon$ requires $\Omega(\frac{kr}{\epsilon^2})$ samples, and propose an algorithm that uses ${\cal O}(\frac{kr}{\epsilon^2}\log^2\frac r\epsilon)$ samples, a number linear in the high dimension, and nearly linear in the, typically low, rank.

1612

Transferable Calibration with Lower Bias and Variance in Domain Adaptation

In this paper, we delve into the open problem of Calibration in DA, which is extremely challenging due to the coexistence of domain shift and the lack of target labels.

1613

Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics

We introduce a new theoretical framework to analyze deep learning optimization with connection to its generalization error.

1614

Online Bayesian Goal Inference for Boundedly Rational Planning Agents

Here we present an architecture capable of inferring an agent’s goals online from both optimal and non-optimal sequences of actions.

1615

BayReL: Bayesian Relational Learning for Multi-omics Data Integration

In this paper, we develop a novel Bayesian representation learning method that infers the relational interactions across multi-omics data types.

1616

Weakly Supervised Deep Functional Maps for Shape Matching

Furthermore, we propose a novel framework designed for both full-to-full as well as partial to full shape matching that achieves state of the art results on several benchmark datasets outperforming, even the fully supervised methods.

1617

Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

In this paper, we propose a new assumption, \textit{generalized label shift} (GLS), to improve robustness against mismatched label distributions.

1618

Rethinking the Value of Labels for Improving Class-Imbalanced Learning

We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners.

1619

Provably Robust Metric Learning

In this paper, we show that existing metric learning algorithms, which focus on boosting the clean accuracy, can result in metrics that are less robust than the Euclidean distance.

1620

Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings

In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding.

1621

COPT: Coordinated Optimal Transport on Graphs

We introduce COPT, a novel distance metric between graphs defined via an optimization routine, computing a coordinated pair of optimal transport maps simultaneously.

1622

No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems

We propose GEORGE, a method to both measure and mitigate hidden stratification even when subclass labels are unknown.

1623

Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets

This paper aims to explore the twisting rules for obtaining deep neural networks with minimum model sizes and computational costs.

1624

Self-Adaptive Training: beyond Empirical Risk Minimization

In this paper, we observe that model predictions can substantially benefit the training process: self-adaptive training significantly mitigates the overfitting issue and improves generalization over ERM under both random and adversarial noise.

1625

Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization

We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.

1626

Near-Optimal Comparison Based Clustering

We theoretically show that our approach can exactly recover a planted clustering using a near-optimal number of passive comparisons.

1627

Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement

We present a video-based and on-device optical cardiopulmonary vital sign measurement approach.

1628

A new convergent variant of Q-learning with linear function approximation

In this work, we identify a novel set of conditions that ensure convergence with probability 1 of Q-learning with linear function approximation, by proposing a two time-scale variation thereof.

1629

TaylorGAN: Neighbor-Augmented Policy Update Towards Sample-Efficient Natural Language Generation

To improve the sample efficiency and reduce the variance of REINFORCE, we propose a novel approach, TaylorGAN, which augments the gradient estimation by off-policy update and the first-order Taylor expansion.

1630

Neural Networks with Small Weights and Depth-Separation Barriers

In this paper, we focus on feedforward ReLU networks, and prove fundamental barriers to proving such results beyond depth $4$, by reduction to open problems and natural-proof barriers in circuit complexity.

1631

Untangling tradeoffs between recurrence and self-attention in artificial neural networks

In this work, we present a formal analysis of how self-attention affects gradient propagation in recurrent networks, and prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies by establishing concrete bounds for gradient norms.

1632

Dual-Free Stochastic Decentralized Optimization with Variance Reduction

In this work, we introduce a Decentralized stochastic algorithm with Variance Reduction called DVR.

1633

Online Learning in Contextual Bandits using Gated Linear Networks

We introduce a new and completely online contextual bandit algorithm called Gated Linear Contextual Bandits (GLCB).

1634

Throughput-Optimal Topology Design for Cross-Silo Federated Learning

In this paper we define the problem of topology design for cross-silo federated learning using the theory of max-plus linear systems to compute the system throughput—number of communication rounds per time unit.

1635

Quantized Variational Inference

We present Quantized Variational Inference, a new algorithm for Evidence Lower Bound minimization.

1636

Asymptotically Optimal Exact Minibatch Metropolis-Hastings

In this paper, we study \emph{minibatch MH} methods, which instead use subsamples to enable scaling.

1637

Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search

In this paper, we introduce LA-MCTS, which extends LaNAS to other domains.

1638

Feature Shift Detection: Localizing Which Features Have Shifted via Conditional Distribution Tests

Thus, we first define a formalization of this problem as multiple conditional distribution hypothesis tests and propose both non-parametric and parametric statistical tests.

1639

Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks

In this work, we present a comparative study of the two methods and propose a new supervised learning method that combines them.

1640

Space-Time Correspondence as a Contrastive Random Walk

This paper proposes a simple self-supervised approach for learning a representation for visual correspondence from raw video.

1641

The Flajolet-Martin Sketch Itself Preserves Differential Privacy: Private Counting with Minimal Space

We propose an $(\epsilon,\delta)$-differentially private algorithm that approximates the number of distinct elements within a factor of $(1\pm\gamma)$, and with additive error of $O(\sqrt{\ln(1/\delta)}/\epsilon)$, using space $O(\ln(\ln(u)/\gamma)/\gamma^2)$.
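
For background, a toy version of the classic (non-private) Flajolet-Martin sketch whose privacy the paper analyzes; this sketch uses a single hash and omits the bias correction and averaging that a practical estimator needs:

```python
import hashlib

def fm_estimate(items, bits=32):
    """Toy single-hash Flajolet-Martin distinct-count estimate (high variance)."""
    max_zeros = 0
    for item in items:
        h = int(hashlib.md5(str(item).encode()).hexdigest(), 16) % (1 << bits)
        # count leading zeros in the bits-wide binary representation of the hash
        zeros = bits if h == 0 else bin(h)[2:].zfill(bits).index("1")
        max_zeros = max(max_zeros, zeros)
    return 2 ** max_zeros   # ~ number of distinct items, up to a known bias factor

print(fm_estimate([1, 2, 2, 3, 3, 3, 42]))   # 4 distinct items
```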

1642

Exponential ergodicity of mirror-Langevin diffusions

Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020).

1643

An Efficient Framework for Clustered Federated Learning

We propose a new framework dubbed the Iterative Federated Clustering Algorithm (IFCA), which alternately estimates the cluster identities of the users and optimizes model parameters for the user clusters via gradient descent.

1644

Autoencoders that don't overfit towards the Identity

In this paper, we consider linear autoencoders, as they facilitate analytic solutions, and first show that denoising / dropout actually prevents the overfitting towards the identity-function only to the degree that it is penalized by the induced L2-norm regularization.

1645

Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond

In this paper we significantly refine this complexity threshold by showing that, in two-player games, an optimal correlated equilibrium can be computed in polynomial time, provided that a certain condition is satisfied.

1646

Parameterized Explainer for Graph Neural Network

In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs.

1647

Recursive Inference for Variational Autoencoders

In this paper, we consider a different approach of building a mixture inference model.

1648

Flexible mean field variational inference using mixtures of non-overlapping exponential families

Yet, I show that using standard mean field variational inference can fail to produce sensible results for models with sparsity-inducing priors, such as the spike-and-slab. Fortunately, such pathological behavior can be remedied as I show that mixtures of exponential family distributions with non-overlapping support form an exponential family.

1649

HYDRA: Pruning Adversarially Robust Neural Networks

To overcome this challenge, we propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.

1650

NVAE: A Deep Hierarchical Variational Autoencoder

We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization.

1651

Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

We aim to answer the following questions: When the function approximator is a neural network, how does the associated feature representation evolve?

1652

What Do Neural Networks Learn When Trained With Random Labels?

In this paper, we show analytically for convolutional and fully connected networks that an alignment between the principal components of network parameters and data takes place when training with random labels.

1653

Counterfactual Prediction for Bundle Treatment

In this work, we assume the existence of low dimensional latent structure underlying bundle treatment.

1654

Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs

Here, we present BetaE, a probabilistic embedding framework for answering arbitrary FOL queries over KGs.

1655

Learning Disentangled Representations and Group Structure of Dynamical Environments

Inspired by this formalism, we propose a framework, built upon the theory of group representation, for learning representations of a dynamical environment structured around the transformations that generate its evolution.

1656

Learning Linear Programs from Optimal Decisions

We propose a flexible gradient-based framework for learning linear programs from optimal decisions.

1657

Wisdom of the Ensemble: Improving Consistency of Deep Learning Models

This paper studies model behavior in the context of periodic retraining of deployed models, where the outputs from successive generations of a model might not agree on the correct labels assigned to the same input.

1658

Universal Function Approximation on Graphs

In this work we produce a framework for constructing universal function approximators on graph isomorphism classes.

1659

Accelerating Reinforcement Learning through GPU Atari Emulation

We introduce CuLE (CUDA Learning Environment), a CUDA port of the Atari Learning Environment (ALE) used for the development of deep reinforcement learning algorithms.

1660

EvolveGraph: Multi-Agent Trajectory Prediction with Dynamic Relational Reasoning

In this paper, we propose a generic trajectory forecasting framework (named EvolveGraph) with explicit relational structure recognition and prediction via latent interaction graphs among multiple heterogeneous, interactive agents.

1661

Comparator-Adaptive Convex Bandits

We study bandit convex optimization methods that adapt to the norm of the comparator, a topic that has only been studied before for its full-information counterpart.

1662

Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs

We present two elegant solutions for modeling continuous-time dynamics, in a novel model-based reinforcement learning (RL) framework for semi-Markov decision processes (SMDPs), using neural ordinary differential equations (ODEs).

1663

The Adaptive Complexity of Maximizing a Gross Substitutes Valuation

In this paper, we study the adaptive complexity of maximizing a monotone gross substitutes function under a cardinality constraint.

1664

A Robust Functional EM Algorithm for Incomplete Panel Count Data

As a first step, under a missing completely at random assumption (MCAR), we propose a simple yet widely applicable functional EM algorithm to estimate the counting process mean function, which is of central interest to behavioral scientists.

1665

Graph Stochastic Neural Networks for Semi-supervised Learning

To overcome the rigidity and inflexibility of deterministic classification functions, this paper proposes a novel framework named Graph Stochastic Neural Networks (GSNN), which aims to model the uncertainty of the classification function by simultaneously learning a family of functions, i.e., a stochastic function.

1666

Compositional Zero-Shot Learning via Fine-Grained Dense Feature Composition

We propose a feature composition framework that learns to extract attribute-based features from training samples and combines them to construct fine-grained features for unseen classes.

1667

A Benchmark for Systematic Generalization in Grounded Language Understanding

In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding.

1668

Weston-Watkins Hinge Loss and Ordered Partitions

In this work we introduce a novel discrete loss function for multiclass classification, the ordered partition loss, and prove that the WW-hinge loss is calibrated with respect to this loss.

1669

Reinforcement Learning with Augmented Data

To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.

1670

Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes

Assuming the factorization is known, we propose two model-based algorithms.

1671

Graduated Assignment for Joint Multi-Graph Matching and Clustering with Application to Unsupervised Graph Matching Network Learning

In this paper, we resort to a graduated assignment procedure for soft matching and clustering over iterations, whereby the two-way constraint and clustering confidence are modulated by two separate annealing parameters, respectively.

1672

Estimating Training Data Influence by Tracing Gradient Descent

We introduce a method called TracIn that computes the influence of a training example on a prediction made by the model.
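
A minimal sketch of the TracIn idea as stated: sum, over saved training checkpoints, the learning rate times the dot product between the loss gradients of the training and test examples. Here `grad_fn` is a hypothetical stand-in for whatever computes flattened per-example gradients:

```python
import numpy as np

def tracin_influence(checkpoints, grad_fn, z_train, z_test):
    """Sum over checkpoints of lr * <grad L(z_train), grad L(z_test)>."""
    score = 0.0
    for params, lr in checkpoints:          # (parameters, learning rate) per checkpoint
        g_tr = grad_fn(params, z_train)     # per-example gradient at this checkpoint
        g_te = grad_fn(params, z_test)
        score += lr * np.dot(g_tr, g_te)    # positive -> z_train pushed z_test's loss down
    return score
```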

1673

Joint Policy Search for Multi-agent Collaboration with Imperfect Information

In this paper, we show global changes of game values can be decomposed to policy changes localized at each information set, with a novel term named \emph{policy-change density}.

1674

Adversarial Bandits with Corruptions: Regret Lower Bound and No-regret Algorithm

In this paper, we consider an extended setting in which an attacker sits in-between the environment and the learner, and is endowed with a limited budget to corrupt the reward of the selected arm.

1675

Beta R-CNN: Looking into Pedestrian Detection from Another Perspective

To eliminate the problem, we propose a novel representation based on 2D beta distribution, named Beta Representation.

1676

Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks

We show that this key benefit arises because, at initialization, batch normalization downscales the residual branch relative to the skip connection, by a normalizing factor on the order of the square root of the network depth.

1677

Learning Retrospective Knowledge with Reverse Reinforcement Learning

We present a Reverse Reinforcement Learning (Reverse RL) approach for representing retrospective knowledge.

1678

Dialog without Dialog Data: Learning Visual Dialog Agents from VQA Data

In this work, we study a setting we call "Dialog without Dialog", which requires agents to develop visually grounded dialog models that can adapt to new tasks without language level supervision.

1679

GCOMB: Learning Budget-constrained Combinatorial Algorithms over Billion-sized Graphs

While existing techniques have primarily focused on obtaining high-quality solutions, scalability to billion-sized graphs has not been adequately addressed. In addition, the impact of a budget-constraint, which is necessary for many practical scenarios, remains to be studied. In this paper, we propose a framework called GCOMB to bridge these gaps.

1680

A General Large Neighborhood Search Framework for Solving Integer Linear Programs

We focus on solving integer programs and ground our approach in the large neighborhood search (LNS) paradigm, which iteratively chooses a subset of variables to optimize while leaving the remainder fixed.

1681

A Theoretical Framework for Target Propagation

We provide a first solution to this problem through a novel reconstruction loss that improves feedback weight training, while simultaneously introducing architectural flexibility by allowing for direct feedback connections from the output to each hidden layer.

1682

OrganITE: Optimal transplant donor organ offering using an individual treatment effect

In this paper, we introduce OrganITE, an organ-to-patient assignment methodology that assigns organs based not only on its own estimates of the potential outcomes but also on organ scarcity.

1683

The Complete Lasso Tradeoff Diagram

To address this important problem, we offer the first complete diagram that distinguishes all pairs of FDR and power that can be asymptotically realized by the Lasso from the remaining pairs, in a regime of linear sparsity under random designs.

1684

On the universality of deep learning

This paper shows that deep learning, i.e., neural networks trained by SGD, can learn in polytime any function class that can be learned in polytime by some algorithm, including parities.

1685

Regression with reject option and application to kNN

We investigate the problem of regression where one is allowed to abstain from predicting. We refer to this framework as regression with reject option, an extension of classification with reject option.

1686

The Primal-Dual method for Learning Augmented Algorithms

In this paper, we extend the primal-dual method for online algorithms in order to incorporate predictions that advise the online algorithm about the next action to take.

1687

FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs

This work focuses on the representation learning question: how can we learn such features?

1688

A Class of Algorithms for General Instrumental Variable Models

In this work, we provide a method for causal effect bounding in continuous distributions, leveraging recent advances in gradient-based methods for the optimization of computationally intractable objective functions.

1689

Black-Box Ripper: Copying black-box models using generative evolutionary algorithms

In this context, we present a teacher-student framework that can distill the black-box (teacher) model into a student model with minimal accuracy loss.

1690

Bayesian Optimization of Risk Measures

We propose a family of novel Bayesian optimization algorithms that exploit the structure of the objective function to substantially improve sampling efficiency.

1691

TorsionNet: A Reinforcement Learning Approach to Sequential Conformer Search

We present TorsionNet, an efficient sequential conformer search technique based on reinforcement learning under the rigid rotor approximation.

1692

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

In this paper, we propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene.

1693

PIE-NET: Parametric Inference of Point Cloud Edges

We introduce an end-to-end learnable technique to robustly identify feature edges in 3D point cloud data.

1694

A Simple Language Model for Task-Oriented Dialogue

SimpleTOD is a simple approach to task-oriented dialogue that uses a single, causal language model trained on all sub-tasks recast as a single sequence prediction problem.

1695

A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval

We analyze continuous-time mirror descent applied to sparse phase retrieval, which is the problem of recovering sparse signals from a set of magnitude-only measurements.

1696

Confidence sequences for sampling without replacement

We present a suite of tools for designing \textit{confidence sequences} (CS) for $\theta^\star$.

1697

A mean-field analysis of two-player zero-sum games

To address this limitation, we parametrize mixed strategies as mixtures of particles, whose positions and weights are updated using gradient descent-ascent.

1698

Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge

In this work, we provide a first demonstration that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.

1699

Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games

We introduce Pipeline PSRO (P2SRO), the first scalable PSRO-based method for finding approximate Nash equilibria in large zero-sum imperfect-information games.

1700

Improving Sparse Vector Technique with Renyi Differential Privacy

In this paper, we revisit SVT from the lens of Renyi differential privacy, which results in new privacy bounds, new theoretical insight and new variants of SVT algorithms.

1701

Latent Template Induction with Gumbel-CRFs

Specifically, we propose a Gumbel-CRF, a continuous relaxation of the CRF sampling algorithm using a relaxed Forward-Filtering Backward-Sampling (FFBS) approach.

1702

Instance Based Approximations to Profile Maximum Likelihood

In this paper we provide a new efficient algorithm for approximately computing the profile maximum likelihood (PML) distribution, a prominent quantity in symmetric property estimation.

1703

Factorizable Graph Convolutional Networks

In this paper, we introduce a novel graph convolutional network (GCN), termed as factorizable graph convolutional network (FactorGCN), that explicitly disentangles such intertwined relations encoded in a graph.

1704

Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses

In this work, we introduce a relaxation term to the standard loss, that finds more suitable gradient-directions, increases attack efficacy and leads to more efficient adversarial training.

1705

A Study on Encodings for Neural Architecture Search

In this work, we present the first formal study on the effect of architecture encodings for NAS, including a theoretical grounding and an empirical study.

1706

Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising

In this work, we introduce Noise2Same, a novel self-supervised denoising framework.

1707

Early-Learning Regularization Prevents Memorization of Noisy Labels

We propose a novel framework to perform classification via deep learning in the presence of noisy annotations.

1708

LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-resolution and Beyond

This paper addresses this pain point by proposing a linearly-assembled pixel-adaptive regression network (LAPAR), which casts the direct LR to HR mapping learning into a linear coefficient regression task over a dictionary of multiple predefined filter bases.

1709

Learning Parities with Neural Networks

In this paper we make a step towards showing learnability of models that are inherently non-linear.

1710

Consistent Plug-in Classifiers for Complex Objectives and Constraints

We present a statistically consistent algorithm for constrained classification problems where the objective (e.g. F-measure, G-mean) and the constraints (e.g. demographic parity, coverage) are defined by general functions of the confusion matrix.

1711

Movement Pruning: Adaptive Sparsity by Fine-Tuning

We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning.
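
A rough sketch of movement pruning under one reading: accumulate a score of -w * grad during fine-tuning (weights moving away from zero score higher) and mask out the lowest-scoring weights. Names and the top-k selection here are illustrative:

```python
import numpy as np

def movement_prune(w, score, grad, lr_score=1.0, keep=0.5):
    """Accumulate movement scores and mask out the lowest-scoring weights."""
    score = score + lr_score * (-w * grad)   # weights moving away from zero score higher
    k = int(keep * w.size)
    threshold = np.sort(score.ravel())[-k]   # keep roughly the top-`keep` fraction
    mask = (score >= threshold).astype(w.dtype)
    return w * mask, score, mask
```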

1712

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) a set of methods which aims to find good subnetworks of the randomly-initialized network (which we call "initial tickets") hardly exploits any information from the training data; (2) for the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights unchanged per layer, does not affect the final performance.

1713

Online Matrix Completion with Side Information

We give an online algorithm and prove novel mistake and regret bounds for online binary matrix completion with side information.

1714

Position-based Scaled Gradient for Model Quantization and Pruning

We propose the position-based scaled gradient (PSG) that scales the gradient depending on the position of a weight vector to make it more compression-friendly.

1715

Online Learning with Primary and Secondary Losses

We study the problem of online learning with primary and secondary losses.

1716

Graph Information Bottleneck

Here we introduce Graph Information Bottleneck (GIB), an information-theoretic principle that optimally balances expressiveness and robustness of the learned representation of graph-structured data.

1717

The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise

We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on Lp perturbations.

1718

Adaptive Online Estimation of Piecewise Polynomial Trends

We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback, where the dynamic regret of an online learner against a time-varying comparator sequence is studied.

1719

RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference

In this paper, we introduce RNNPool, a novel pooling operator based on Recurrent Neural Networks (RNNs), that efficiently aggregates features over large patches of an image and rapidly downsamples activation maps.

1720

Agnostic Learning with Multiple Objectives

Instead, we propose a new framework of \emph{Agnostic Learning with Multiple Objectives} (ALMO), where a model is optimized for \emph{any} weights in the mixture of base objectives.

1721

3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data

We suggest that ambiguities can be modeled more effectively by parametrizing the possible body shapes and poses via a suitable 3D model, such as SMPL for humans.

1722

Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation

In this work, we propose an efficient, cooperative and highly automated framework to simultaneously search for all main components including backbone, segmentation branches, and feature fusion module in a unified panoptic segmentation pipeline based on the prevailing one-shot Network Architecture Search (NAS) paradigm.

1723

Differentiable Top-k with Optimal Transport

To address the issue, we propose a smoothed approximation, namely SOFT (Scalable Optimal transport-based diFferenTiable) top-k operator.

1724

Information-theoretic Task Selection for Meta-Reinforcement Learning

We propose a task selection algorithm based on information theory, which optimizes the set of tasks used for training in meta-RL, irrespective of how they are generated.

1725

A Limitation of the PAC-Bayes Framework

In this manuscript we present a limitation for the PAC-Bayes framework.

1726

On Completeness-aware Concept-Based Explanations in Deep Neural Networks

In this paper, we study such concept-based explainability for Deep Neural Networks (DNNs).

1727

Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems

In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction.

1728

Why Normalizing Flows Fail to Detect Out-of-Distribution Data

We investigate why normalizing flows perform poorly for OOD detection.

1729

Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay

In contrast, we show that the computation of one PI-explanation for an NBC can be achieved in log-linear time, and that the same result also applies to the more general class of linear classifiers.

1730

Unsupervised Translation of Programming Languages

In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler.

1731

Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation

To this end, we propose a novel Adversarial Style Mining approach, which combines a style transfer module and a task-specific module in an adversarial manner.

1732

Optimally Deceiving a Learning Leader in Stackelberg Games

In this paper, we fill this gap by showing that it is always possible for the follower to efficiently compute (near-)optimal payoffs for various scenarios of learning interaction between the leader and the follower.

1733

Online Optimization with Memory and Competitive Control

This paper presents competitive algorithms for a novel class of online optimization problems with memory.

1734

IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method

We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.

1735

Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation

In this paper, we introduce Evolving Graphical Planner (EGP), a module that allows global planning for navigation based on raw sensory input.

1736

Learning from Failure: De-biasing Classifier from Biased Classifier

Based on the observations, we propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.

1737

Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder

In this paper, we make the observation that some of these methods fail when applied to generative models based on Variational Auto-encoders (VAE).

1738

Deep Diffusion-Invariant Wasserstein Distributional Classification

In this paper, we present a novel classification method called deep diffusion-invariant Wasserstein distributional classification (DeepWDC).

1739

Finding All $\epsilon$-Good Arms in Stochastic Bandits

We introduce two algorithms to overcome these challenges and demonstrate their strong empirical performance on a large-scale crowd-sourced dataset of $2.2$M ratings collected by the New Yorker Caption Contest, as well as a dataset testing hundreds of possible cancer drugs.

1740

Meta-Learning through Hebbian Plasticity in Random Networks

Inspired by this biological mechanism, we propose a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent.

1741

A Computational Separation between Private Learning and Online Learning

We show that, assuming the existence of one-way functions, such an efficient conversion is impossible even for general pure-private learners with polynomial sample complexity.

1742

Top-KAST: Top-K Always Sparse Training

In this work we propose Top-KAST, a method that preserves constant sparsity throughout training (in both the forward and backward-passes).

1743

Meta-Learning with Adaptive Hyperparameters

Instead of searching for better task-aware initialization, we focus on a complementary factor in the MAML framework: inner-loop optimization (or fast adaptation).

1744

Tight last-iterate convergence rates for no-regret learning in multi-player games

We study the question of obtaining last-iterate convergence rates for no-regret learning algorithms in multi-player games.

1745

Curvature Regularization to Prevent Distortion in Graph Embedding

To address the problem, we propose curvature regularization to enforce flatness of embedding manifolds, thereby preventing distortion.

1746

Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability

We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.

1747

Statistical and Topological Properties of Sliced Probability Divergences

In this paper, we aim at bridging this gap and derive various theoretical properties of sliced probability divergences.

1748

Probabilistic Active Meta-Learning

In this work, we introduce task selection based on prior experience into a meta-learning algorithm by conceptualizing the learner and the active meta-learning setting using a probabilistic latent variable model.

1749

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher

In this paper, we theoretically analyze the knowledge distillation of a wide neural network.

1750

Adversarial Attacks on Deep Graph Matching

This paper proposes an adversarial attack model with two novel attack techniques to perturb the graph structure and degrade the quality of deep graph matching: (1) a kernel density estimation approach is utilized to estimate and maximize node densities to derive imperceptible perturbations, by pushing attacked nodes to dense regions in two graphs, such that they are indistinguishable from many neighbors; and (2) a meta learning-based projected gradient descent method is developed to well choose attack starting points and to improve the search performance for producing effective perturbations.

1751

The Generalization-Stability Tradeoff In Neural Network Pruning

We demonstrate that this “generalization-stability tradeoff” is present across a wide variety of pruning settings and propose a mechanism for its cause: pruning regularizes similarly to noise injection.

1752

Gradient-EM Bayesian Meta-Learning

The key idea behind Bayesian meta-learning is empirical Bayes inference of a hierarchical model. In this work, we extend this framework to include a variety of existing methods, before proposing our variant based on the gradient-EM algorithm.

1753

Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems

Deploying this estimation method, we propose adaptive control online learning (AdapOn), an efficient reinforcement learning algorithm that adaptively learns the system dynamics and continuously updates its controller through online learning steps.

1754

Linearly Converging Error Compensated SGD

In this paper, we propose a unified analysis of variants of distributed SGD with arbitrary compressions and delayed updates.

1755

Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction

We propose the Canonical 3D Deformer Map, a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.

1756

A Self-Tuning Actor-Critic Algorithm

In this paper, we take a step towards addressing this issue by using metagradients to automatically adapt hyperparameters online by meta-gradient descent (Xu et al., 2018).

1757

The Cone of Silence: Speech Separation by Localization

At the core of our method is a deep network, in the waveform domain, which isolates sources within an angular region θ ± w/2, given an angle of interest θ and angular window size w.

1758

High-Dimensional Bayesian Optimization via Nested Riemannian Manifolds

In this paper, we propose to exploit the geometry of non-Euclidean search spaces, which often arise in a variety of domains, to learn structure-preserving mappings and optimize the acquisition function of BO in low-dimensional latent spaces.

1759

Train-by-Reconnect: Decoupling Locations of Weights from Their Values

To assess our hypothesis, we propose a novel method called lookahead permutation (LaPerm) to train DNNs by reconnecting the weights.

1760

Learning discrete distributions: user vs item-level privacy

We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy.

1761

Matrix Completion with Quantified Uncertainty through Low Rank Gaussian Copula

This paper proposes a probabilistic and scalable framework for missing value imputation with quantified uncertainty.

1762

Sparse and Continuous Attention Mechanisms

This paper expands that work in two directions: first, we extend alpha-entmax to continuous domains, revealing a link with Tsallis statistics and deformed exponential families. Second, we introduce continuous-domain attention mechanisms, deriving efficient gradient backpropagation algorithms for alpha in {1,2}.

1763

Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection

This paper delves into the \emph{representations} of the above three fundamental elements: quality estimation, classification and localization.

1764

Learning by Minimizing the Sum of Ranked Range

In this work, we introduce the sum of ranked range (SoRR) as a general approach to form learning objectives.
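
A minimal sketch of the sum of ranked range: sort the individual losses and sum only those whose ranks fall in $[k, m]$, which interpolates between the average loss, the maximum loss, and top-$k$ style objectives:

```python
import numpy as np

def sum_of_ranked_range(losses, k, m):
    """Sum the losses ranked k-th through m-th largest (1-indexed ranks)."""
    ranked = np.sort(losses)[::-1]   # descending: rank 1 is the largest loss
    return ranked[k - 1:m].sum()

losses = np.array([0.1, 2.0, 0.7, 1.3])
print(sum_of_ranked_range(losses, k=2, m=3))   # 1.3 + 0.7 = 2.0
```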

1765

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and develop a theoretically principled policy regularization which can be applied to a large family of DRL algorithms, including deep deterministic policy gradient (DDPG), proximal policy optimization (PPO) and deep Q networks (DQN), for both discrete and continuous action control problems.

1766

Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features

We refine previous investigations of this failure at anomaly detection for invertible generative networks and provide a clear explanation of it as a combination of model bias and domain prior: Convolutional networks learn similar low-level feature distributions when trained on any natural image dataset and these low-level features dominate the likelihood.

1767

Fair Hierarchical Clustering

In this paper we extend this notion to hierarchical clustering, where the goal is to recursively partition the data to optimize a specific objective.

1768

Self-training Avoids Using Spurious Features Under Domain Shift

We identify and analyze one particular setting where the domain shift can be large, but these algorithms provably work: certain spurious features correlate with the label in the source domain but are independent of the label in the target.

1769

Improving Online Rent-or-Buy Algorithms with Sequential Decision Making and ML Predictions

In this work we study online rent-or-buy problems as a sequential decision making problem.

1770

CircleGAN: Generative Adversarial Learning across Spherical Circles

We present a novel discriminator for GANs that improves realness and diversity of generated samples by learning a structured hypersphere embedding space using spherical circles.

1771

WOR and $p$'s: Sketches for $\ell_p$-Sampling Without Replacement

We design novel composable sketches for WOR {\em $\ell_p$ sampling}, weighted sampling of keys according to a power $p\in[0,2]$ of their frequency (or for signed data, sum of updates).

1772

Hypersolvers: Toward Fast Continuous-Depth Models

We introduce hypersolvers, neural networks designed to solve ODEs with low overhead and theoretical guarantees on accuracy.

1773

Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment

In this paper, we propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.

1774

Escaping the Gravitational Pull of Softmax

To circumvent these shortcomings we investigate an alternative transformation, the \emph{escort} mapping, that demonstrates better optimization properties.

1775

Regret in Online Recommendation Systems

This paper proposes a theoretical analysis of recommendation systems in an online setting, where items are sequentially recommended to users over time.

1776

On Convergence and Generalization of Dropout Training

We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations.

1777

Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking

In this paper we study the problem of escaping from saddle points and achieving second-order optimality in a decentralized setting where a group of agents collaborate to minimize their aggregate objective function.

1778

Implicit Regularization in Deep Learning May Not Be Explainable by Norms

The current paper resolves this open question in the negative, by proving that there exist natural matrix factorization problems on which the implicit regularization drives all norms (and quasi-norms) towards infinity.

1779

POMO: Policy Optimization with Multiple Optima for Reinforcement Learning

We introduce Policy Optimization with Multiple Optima (POMO), an end-to-end approach for building such a heuristic solver.

1780

Uncertainty-aware Self-training for Few-shot Text Classification

We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network leveraging recent advances in Bayesian deep learning.

1781

Learning to Learn with Feedback and Local Plasticity

In this study, we employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.

1782

Every View Counts: Cross-View Consistency in 3D Object Detection with Hybrid-Cylindrical-Spherical Voxelization

In this paper, we present a novel framework to unify and leverage the benefits from both BEV and RV.

1783

Sharper Generalization Bounds for Pairwise Learning

In this paper, we provide a refined stability analysis by developing generalization bounds which can be $\sqrt{n}$-times faster than the existing results, where $n$ is the sample size.

1784

A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings

We present a new operator-free, measure-theoretic approach to the conditional mean embedding as a random variable taking values in a reproducing kernel Hilbert space.

1785

Quantifying the Empirical Wasserstein Distance to a Set of Measures: Beating the Curse of Dimensionality

We consider the problem of estimating the Wasserstein distance between the empirical measure and a set of probability measures whose expectations over a class of functions (hypothesis class) are constrained.

1786

Bootstrap Your Own Latent – A New Approach to Self-Supervised Learning

We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning.
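
A minimal sketch of two BYOL ingredients, with plain arrays standing in for network outputs: the loss is the mean squared error between L2-normalized online predictions and target projections, and the target network's parameters are an exponential moving average of the online network's:

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """MSE between L2-normalized vectors, equal to 2 - 2 * cosine similarity per sample."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return np.mean(np.sum((p - z) ** 2, axis=1))

def ema_update(target_params, online_params, tau=0.99):
    """Target network slowly tracks the online network; no gradients flow here."""
    return [tau * t + (1 - tau) * o for t, o in zip(target_params, online_params)]
```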

1787

Towards Theoretically Understanding Why Sgd Generalizes Better Than Adam in Deep Learning

This work aims to provide an understanding of this generalization gap by analyzing their local convergence behaviors.

1788

RSKDD-Net: Random Sample-based Keypoint Detector and Descriptor

This paper proposes Random Sample-based Keypoint Detector and Descriptor Network (RSKDD-Net) for large scale point cloud registration.

1789

Efficient Clustering for Stretched Mixtures: Landscape and Optimality

To overcome this issue, we propose a non-convex program seeking an affine transform that turns the data into a one-dimensional point cloud concentrating around -1 and 1, after which clustering becomes easy.

1790

A Group-Theoretic Framework for Data Augmentation

In this paper, we develop such a framework to explain data augmentation as averaging over the orbits of the group that keeps the data distribution approximately invariant, and show that it leads to variance reduction.

1791

The Statistical Cost of Robust Kernel Hyperparameter Turning

We consider the problem of finding the best interpolant from a class of kernels with unknown hyperparameters, assuming only that the noise is square-integrable.

1792

How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?

This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks’ generalisation ability.

1793

ContraGAN: Contrastive Learning for Conditional Image Generation

In this paper, we propose ContraGAN that considers relations between multiple image embeddings in the same batch (data-to-data relations) as well as the data-to-class relations by using a conditional contrastive loss.

1794

On the distance between two neural networks and the stability of learning

This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions.

1795

A Topological Filter for Learning with Label Noise

To tackle this problem, in this paper, we propose a new method for filtering label noise.

1796

Personalized Federated Learning with Moreau Envelopes

To address this, we propose an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients’ regularized loss functions, which help decouple personalized model optimization from the global model learning in a bi-level problem stylized for personalized FL.

1797

Avoiding Side Effects in Complex Environments

In toy environments, Attainable Utility Preservation (AUP) avoided side effects by penalizing shifts in the ability to achieve randomly generated goals. We scale this approach to large, randomly generated environments based on Conway’s Game of Life.

1798

No-regret Learning in Price Competitions under Consumer Reference Effects

We study long-run market stability for repeated price competitions between two firms, where consumer demand depends on firms’ posted prices and consumers’ price expectations called reference prices.

1799

Geometric Dataset Distances via Optimal Transport

In this work we propose an alternative notion of distance between datasets that (i) is model-agnostic, (ii) does not involve training, (iii) can compare datasets even if their label sets are completely disjoint and (iv) has solid theoretical footing.

1800

Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters

We introduce an approach to the identification of kernel hyperparameters in GP regression and related problems that sidesteps the need for costly marginal likelihoods.

1801

A novel variational form of the Schatten-$p$ quasi-norm

Here, we propose and analyze a novel {\it variational form of Schatten-$p$ quasi-norm} which, for the first time in the literature, is defined for any continuous value of $p\in(0,1]$ and decouples along the columns of the factorized matrices.

1802

Energy-based Out-of-distribution Detection

We propose a unified framework for OOD detection that uses an energy score.
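
A minimal sketch of the energy score computed from classifier logits, $E(x) = -T \log \sum_i e^{f_i(x)/T}$; lower energy indicates in-distribution, so thresholding this score flags OOD inputs:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """E(x) = -T * logsumexp(f(x)/T), computed stably via the max trick."""
    z = logits / T
    m = z.max(axis=1, keepdims=True)
    return -T * (m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1)))

logits = np.array([[5.0, 0.1, -1.0],   # confident -> low energy (in-distribution)
                   [0.2, 0.1,  0.0]])  # flat      -> higher energy (possibly OOD)
print(energy_score(logits))
```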

1803

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them

We analyze the influence of adversarial training on the loss landscape of machine learning models.

1804

User-Dependent Neural Sequence Models for Continuous-Time Event Data

In this paper, we extend the broad class of neural marked point process models to mixtures of latent embeddings, where each mixture component models the characteristic traits of a given user.

1805

Active Structure Learning of Causal DAGs via Directed Clique Trees

In this work, we develop a \textit{universal} lower bound for single-node interventions that establishes that the largest clique is \textit{always} a fundamental impediment to structure learning.

1806

Convergence and Stability of Graph Convolutional Networks on Large Random Graphs

We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel.

1807

BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization

We introduce BoTorch, a modern programming framework for Bayesian optimization that combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques.

1808

Reconsidering Generative Objectives For Counterfactual Reasoning

As a step towards more flexible, scalable and accurate ITE estimation, we present a novel generative Bayesian estimation framework that integrates representation learning, adversarial matching and causal estimation.

1809

Robust Federated Learning: The Case of Affine Distribution Shifts

The primary goal of this paper is to develop a robust federated learning algorithm that achieves satisfactory performance against distribution shifts in users’ samples.

1810

Quantile Propagation for Wasserstein-Approximate Gaussian Processes

We develop a new approximate inference method for Gaussian process models which overcomes the technical challenges arising from abandoning these convenient divergences.

1811

Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning

In this paper, we show that this problem can be effectively alleviated by restricting the high-level action space from the whole goal space to a k-step adjacent region of the current state using an adjacency constraint.

1812

High-contrast "gaudy" images improve the training of deep neural network models of visual cortex

We propose high-contrast, binarized versions of natural images—termed gaudy images—to efficiently train DNNs to predict higher-order visual cortical responses.

1813

Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion

To address this challenge, we propose a novel regularizer—namely, \textbf{DU}ality-induced \textbf{R}egul\textbf{A}rizer (DURA)—which is not only effective in improving the performance of existing models but widely applicable to various methods.

1814

Distributed Training with Heterogeneous Data: Bridging Median- and Mean-Based Algorithms

To overcome this gap, we provide a novel gradient correction mechanism that perturbs the local gradients with noise, which we show can provably close the gap between mean and median of the gradients.

1815

H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks

Here, we propose Hebbian Memory Networks (H-Mems), a simple neural network model that is built around a core hetero-associative network subject to Hebbian plasticity.

1816

Neural Unsigned Distance Fields for Implicit Function Learning

In this work we target a learnable output representation that allows continuous, high resolution outputs of arbitrary shape.

1817

Curriculum By Smoothing

In this paper, we propose an elegant curriculum-based scheme that smoothes the feature embedding of a CNN using anti-aliasing or low-pass filters.
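
One plausible rendering of the idea, assuming a (C, H, W) feature map and a linearly annealed Gaussian low-pass filter (the paper's exact kernel and schedule may differ):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def smooth_features(feature_map, epoch, total_epochs, sigma0=1.0):
        # Curriculum: heavy smoothing early in training, annealed towards
        # no smoothing by the final epoch.
        sigma = sigma0 * max(0.0, 1.0 - epoch / total_epochs)
        if sigma == 0.0:
            return feature_map
        # Filter only the spatial axes of the (C, H, W) map.
        return gaussian_filter(feature_map, sigma=(0.0, sigma, sigma))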

1818

Fast Transformers with Clustered Attention

To address this, we propose clustered attention, which instead of computing the attention for every query, groups queries into clusters and computes attention just for the centroids.
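
A rough NumPy sketch of the centroid trick (the k-means step and the broadcast back to members are illustrative assumptions; the paper additionally refines the centroid attention with exact attention on the top keys):

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def clustered_attention(Q, K, V, n_clusters=16, seed=0):
        # Group queries into clusters, attend once per centroid, then
        # broadcast each centroid's output to its member queries.
        centroids, labels = kmeans2(Q, n_clusters, minit="++", seed=seed)
        scores = centroids @ K.T / np.sqrt(K.shape[1])
        scores -= scores.max(axis=1, keepdims=True)  # stable softmax
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)
        return (weights @ V)[labels]  # one output row per original query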

1819

The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification

We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons.
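
For reference, the standard single-neuron ("triangle") relaxation of $y = \max(0, x)$ under pre-activation bounds $l < 0 < u$, which such verifiers start from, is

$$y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - l)}{u - l},$$

and the tightening comes from relaxing the neuron jointly with its multivariate input rather than through a single scalar pre-activation.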

1820

Strongly Incremental Constituency Parsing with Graph Neural Networks

In this paper, we propose a novel transition system called attach-juxtapose.

1821

AOT: Appearance Optimal Transport Based Identity Swapping for Forgery Detection

In this work, we provide a new identity swapping algorithm with large differences in appearance for face forgery detection.

1822

Uncertainty-Aware Learning for Zero-Shot Semantic Segmentation

In this paper, we identify this challenge and address it with a novel framework that learns to discriminate noisy samples based on Bayesian uncertainty estimation.

1823

Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians

In this paper, we diagnose several subtle pathologies in the training of STNs.

1824

First-Order Methods for Large-Scale Market Equilibrium Computation

We develop simple first-order methods suitable for solving these programs for large-scale markets.

1825

Minimax Optimal Nonparametric Estimation of Heterogeneous Treatment Effects

In this paper, we model the HTE as a smooth nonparametric difference between two less smooth baseline functions, and determine the tight statistical limits of the nonparametric HTE estimation as a function of the covariate geometry.

1826

Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis

To overcome the dynamics mismatch, we propose a novel approach, residual force control (RFC), that augments a humanoid control policy by adding external residual forces into the action space.

1827

A General Method for Robust Learning from Batches

We develop a general framework of robust learning from batches, and determine the limits of both distribution estimation, and notably, classification, over arbitrary, including continuous, domains.

1828

Not All Unlabeled Data are Equal: Learning to Weight Data in Semi-supervised Learning

In this paper we study how to use a different weight for “every” unlabeled example.

1829

Hard Negative Mixing for Contrastive Learning

In this paper, we argue that an important aspect of contrastive learning, i.e. the effect of hard negatives, has so far been neglected.
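
A minimal sketch of hard-negative mixing in embedding space, assuming unit-normalized features (the selection sizes and mixing scheme here are illustrative):

    import numpy as np

    def mix_hard_negatives(query, negatives, n_hard=16, n_synth=8, rng=None):
        # Rank negatives by similarity to the query, then convex-combine
        # random pairs of the hardest ones into synthetic hard negatives.
        rng = np.random.default_rng(rng)
        order = np.argsort(-(negatives @ query))
        hard = negatives[order[:n_hard]]
        i = rng.integers(0, n_hard, size=n_synth)
        j = rng.integers(0, n_hard, size=n_synth)
        alpha = rng.uniform(size=(n_synth, 1))
        mixed = alpha * hard[i] + (1.0 - alpha) * hard[j]
        # Project back to the unit sphere, as contrastive features usually are.
        return mixed / np.linalg.norm(mixed, axis=1, keepdims=True)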

1830

MOReL: Model-Based Offline Reinforcement Learning

In this work, we present MOReL, an algorithmic framework for model-based offline RL.

1831

Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings

Here, we propose local variants and corresponding neural architectures, which consider a subset of the original neighborhood, making them more scalable, and less prone to overfitting.

1832

Adversarial Crowdsourcing Through Robust Rank-One Matrix Completion

We propose a new algorithm combining alternating minimization with extreme-value filtering and provide sufficient and necessary conditions to recover the original rank-one matrix.

1833

Learning Semantic-aware Normalization for Generative Adversarial Networks

In this paper, we propose a novel image synthesis approach by learning Semantic-aware relative importance for feature channels in Generative Adversarial Networks (SariGAN).

1834

Differentiable Causal Discovery from Interventional Data

This work constitutes a new step in this direction by proposing a theoretically-grounded method based on neural networks that can leverage interventional data.

1835

One-sample Guided Object Representation Disassembling

In this paper, we introduce the One-sample Guided Object Representation Disassembling (One-GORD) method, which only requires one annotated sample for each object category to learn disassembled object representation from unannotated images.

1836

Extrapolation Towards Imaginary 0-Nearest Neighbour and Its Improved Convergence Rate

In this paper, we propose a novel multiscale $k$-NN (MS-$k$-NN), which extrapolates unweighted $k$-NN estimators from several $k \ge 1$ values to $k=0$, thus giving an imaginary 0-NN estimator.
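
The extrapolation step can be sketched as a small regression over $k$; a plain polynomial is used below for illustration, whereas the paper derives the proper basis functions from bias expansions:

    import numpy as np

    def ms_knn_extrapolate(knn_estimates, ks, degree=2):
        # Fit a low-degree polynomial to k-NN estimates computed at several
        # k values and read off its value at k = 0 (the imaginary 0-NN).
        coeffs = np.polyfit(np.asarray(ks, float),
                            np.asarray(knn_estimates, float), degree)
        return np.polyval(coeffs, 0.0)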

1837

Robust Persistence Diagrams using Reproducing Kernels

In this work, we develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using reproducing kernels.

1838

Contextual Games: Multi-Agent Learning with Side Information

By means of kernel-based regularity assumptions, we model the correlation between different contexts and game outcomes and propose a novel online (meta) algorithm that exploits such correlations to minimize the contextual regret of individual players.

1839

Goal-directed Generation of Discrete Structures with Conditional Generative Models

In this paper, we investigate the use of conditional generative models which directly attack this inverse problem, by modeling the distribution of discrete structures given properties of interest.

1840

Beyond Lazy Training for Over-parameterized Tensor Decomposition

In this paper we study a closely related tensor decomposition problem: given an $l$-th order tensor in $(R^d)^{\otimes l}$ of rank $r$ (where $r\ll d$), can variants of gradient descent find a rank $m$ decomposition where $m > r$?

1841

Denoised Smoothing: A Provable Defense for Pretrained Classifiers

We present a method for provably defending any pretrained image classifier against $\ell_p$ adversarial attacks.
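
The scheme prepends a denoiser to the classifier and then applies standard randomized smoothing; in the sketch below, `denoise` and `classify` are assumed callables and the prediction is the majority vote over noisy samples:

    import numpy as np

    def smoothed_predict(x, denoise, classify, sigma=0.25, n=100, rng=None):
        # Add Gaussian noise, denoise, classify, and majority-vote.
        rng = np.random.default_rng(rng)
        votes = {}
        for _ in range(n):
            noisy = x + rng.normal(0.0, sigma, size=x.shape)
            label = classify(denoise(noisy))
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)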

1842

Minibatch Stochastic Approximate Proximal Point Methods

To do this, we propose two minibatched algorithms for which we prove a non-asymptotic upper bound on the rate of convergence, revealing a linear speedup in minibatch size.

1843

Attribute Prototype Network for Zero-Shot Learning

To this end, we propose a novel zero-shot representation learning framework that jointly learns discriminative global and local features using only class-level attributes.

1844

CrossTransformers: spatially-aware few-shot transfer

In this work, we illustrate how the neural network representations which underpin modern vision systems are subject to supervision collapse, whereby they lose any information that is not necessary for performing the training task, including information that may be necessary for transfer to new tasks or domains.

1845

Learning Latent Space Energy-Based Prior Model

We propose an energy-based model (EBM) in the latent space of a generator model, so that the EBM serves as a prior model that stands on the top-down network of the generator model.

1846

Learning Long-Term Dependencies in Irregularly-Sampled Time Series

We provide a solution by designing a new algorithm based on the long short-term memory (LSTM) that separates its memory from its time-continuous state.

1847

SEVIR : A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology

To help address this problem, we introduce the Storm EVent ImagRy (SEVIR) dataset – a single, rich dataset that combines spatially and temporally aligned data from multiple sensors, along with baseline implementations of deep learning models and evaluation metrics, to accelerate new algorithmic innovations.

1848

Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation

We propose a novel lightweight generative adversarial network for efficient image manipulation using natural language descriptions.

1849

High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization

We develop effective models that leverage the structure of the search space to enable contextual policy optimization directly from the aggregate rewards using Bayesian optimization.

1850

Model Fusion via Optimal Transport

We present a layer-wise model fusion algorithm for neural networks that utilizes optimal transport to (soft-) align neurons across the models before averaging their associated parameters.
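
A single-layer sketch of the idea with entropy-regularized OT (the full algorithm proceeds layer by layer and can also align using activations; everything below is an illustrative assumption):

    import numpy as np

    def sinkhorn(C, reg=0.05, iters=200):
        # Entropy-regularized transport plan between uniform marginals.
        n, m = C.shape
        K = np.exp(-C / reg)
        a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
        u, v = np.ones(n), np.ones(m)
        for _ in range(iters):
            u = a / (K @ v)
            v = b / (K.T @ u)
        return u[:, None] * K * v[None, :]

    def fuse_layer(W1, W2, reg=0.05):
        # Soft-align the neurons (rows) of W2 to those of W1 via the
        # transport plan, then average the aligned weight matrices.
        C = ((W1[:, None, :] - W2[None, :, :]) ** 2).sum(-1)
        P = sinkhorn(C, reg)
        W2_aligned = (P / P.sum(axis=1, keepdims=True)) @ W2
        return 0.5 * (W1 + W2_aligned)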

1851

On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems

In this work, we reexamine the effectiveness of RARL under a fundamental robust control setting: the linear quadratic (LQ) case.

1852

Learning Individually Inferred Communication for Multi-Agent Cooperation

To tackle these difficulties, we propose Individually Inferred Communication (I2C), a simple yet effective model to enable agents to learn a prior for agent-agent communication.

1853

Set2Graph: Learning Graphs From Sets

This paper advocates a family of neural network models for learning Set2Graph functions that is both practical and of maximal expressive power (universal), that is, can approximate arbitrary continuous Set2Graph functions over compact sets.

1854

Graph Random Neural Networks for Semi-Supervised Learning on Graphs

In this paper, we propose a simple yet effective framework—GRAPH RANDOM NEURAL NETWORKS (GRAND)—to address these issues.

1855

Gradient Boosted Normalizing Flows

We propose an alternative: Gradient Boosted Normalizing Flows (GBNF) model a density by successively adding new NF components with gradient boosting.

1856

Open Graph Benchmark: Datasets for Machine Learning on Graphs

We present the Open Graph Benchmark (OGB), a diverse set of challenging and realistic benchmark datasets to facilitate scalable, robust, and reproducible graph machine learning (ML) research.
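
Loading a benchmark and its standardized split takes a few lines; this assumes the interface of the `ogb` Python package as released alongside the paper:

    from ogb.nodeproppred import NodePropPredDataset, Evaluator

    dataset = NodePropPredDataset(name="ogbn-arxiv")
    split_idx = dataset.get_idx_split()  # standardized train/valid/test indices
    graph, labels = dataset[0]
    evaluator = Evaluator(name="ogbn-arxiv")  # dataset-specific metric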

1857

Towards Understanding Hierarchical Learning: Benefits of Neural Representations

In this work, we demonstrate that intermediate \emph{neural representations} add more flexibility to neural networks and can be advantageous over raw inputs.

1858

Texture Interpolation for Probing Visual Perception

Here, we show that distributions of deep convolutional neural network (CNN) activations of a texture are well described by elliptical distributions and therefore, following optimal transport theory, constraining their mean and covariance is sufficient to generate new texture samples. Then, we propose the natural geodesics (i.e., the shortest path between two points) arising with the optimal transport metric to interpolate between arbitrary textures.
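
For elliptical (e.g., Gaussian) activation statistics these geodesics are available in closed form; between $\mathcal{N}(\mu_0, \Sigma_0)$ and $\mathcal{N}(\mu_1, \Sigma_1)$ the Wasserstein-2 interpolation at time $t \in [0,1]$ is

$$\mu_t = (1-t)\,\mu_0 + t\,\mu_1, \qquad \Sigma_t = \big((1-t)I + tT\big)\,\Sigma_0\,\big((1-t)I + tT\big),$$

with $T = \Sigma_0^{-1/2}\big(\Sigma_0^{1/2}\Sigma_1\Sigma_0^{1/2}\big)^{1/2}\Sigma_0^{-1/2}$, the standard Bures geodesic that such interpolation relies on.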

1859

Hierarchical Neural Architecture Search for Deep Stereo Matching

In this paper, we propose the first \emph{end-to-end} hierarchical NAS framework for deep stereo matching by incorporating task-specific human knowledge into the neural architecture search framework.

1860

MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models

We present a novel compression algorithm for reducing the storage of LiDAR sensory data streams.

1861

Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy

We provide a detailed asymptotic study of gradient flow trajectories and their implicit optimization bias when minimizing the exponential loss over "diagonal linear networks".

1862

Focus of Attention Improves Information Transfer in Visual Features

In this paper we focus on unsupervised learning for transferring visual information in a truly online setting by using a computational model inspired by the principle of least action in physics.

1863

Auditing Differentially Private Machine Learning: How Private is Private SGD?

More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms that we believe has the potential to complement and influence analytical work on differential privacy.

1864

A Dynamical Central Limit Theorem for Shallow Neural Networks

Here, we analyze the mean-field dynamics as a Wasserstein gradient flow and prove that the deviations from the mean-field evolution scaled by the width, in the width-asymptotic limit, remain bounded throughout training.

1865

Measuring Systematic Generalization in Neural Proof Generation with Transformers

We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in the form of natural language.

1866

Big Self-Supervised Models are Strong Semi-Supervised Learners

A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network.

1867

Learning from Label Proportions: A Mutual Contamination Framework

In this work we address these two issues by posing LLP in terms of mutual contamination models (MCMs), which have recently been applied successfully to study various other weak supervision settings.

1868

Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization

While existing methods typically require $O(N^3)$ computation, we introduce a highly efficient quadratic-time algorithm for computing $K^{1/2}b$, $K^{-1/2}b$, and their derivatives through matrix-vector multiplications (MVMs).
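
For orientation, the quantity being accelerated is the matrix-square-root product itself; the cubic-cost baseline the paper improves on can be written directly via an eigendecomposition:

    import numpy as np

    def sqrt_mv_naive(K, b):
        # O(N^3) baseline for K^{1/2} b; the paper replaces this with a
        # quadratic-time routine driven purely by matrix-vector products.
        w, U = np.linalg.eigh(K)
        return U @ (np.sqrt(np.clip(w, 0.0, None)) * (U.T @ b))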

1869

Self-Adaptively Learning to Demoiré from Focused and Defocused Image Pairs

In this paper, we propose a self-adaptive learning method for demoiréing a high-frequency image, with the help of an additional defocused moiré-free blur image.

1870

Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning

We develop a robust approach that estimates sharp bounds on the (unidentifiable) value of a given policy in an infinite-horizon problem given data from another policy with unobserved confounding, subject to a sensitivity model.

1871

Model Class Reliance for Random Forests

In this paper we introduce a new technique that extends computation of Model Class Reliance (MCR) to Random Forest classifiers and regressors.

1872

Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games

In this work, we show that when the sequence of loss functions is \emph{predictable}, a simple modification of FTPL which incorporates optimism can achieve better regret guarantees, while retaining the optimal worst-case regret guarantee for unpredictable sequences.

1873

Agnostic $Q$-learning with Function Approximation in Deterministic Systems: Near-Optimal Bounds on Approximation Error and Sample Complexity

We propose a novel recursion-based algorithm and show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O(\dim_E)$ trajectories, where $\rho$ is the gap between the optimal $Q$-value of the best actions and that of the second-best actions and $\dim_E$ is the Eluder dimension of $\mathcal{F}$.

1874

Learning to Adapt to Evolving Domains

To tackle these challenges, we propose a meta-adaptation framework which enables the learner to adapt to a continually evolving target domain without catastrophic forgetting.

1875

Synthesizing Tasks for Block-based Programming

In this paper, we formalize the problem of synthesizing visual programming tasks.

1876

Scalable Belief Propagation via Relaxed Scheduling

In this paper, we focus on efficient parallel algorithms for the key machine learning task of inference on graphical models, in particular on the fundamental belief propagation algorithm.

1877

Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks

We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks’ parameters and architectures.

1878

Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret

We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ).

1879

Learning to Decode: Reinforcement Learning for Decoding of Sparse Graph-Based Channel Codes

We show in this work that reinforcement learning can be successfully applied to decoding short to moderate length sparse graph-based channel codes.

1880

Faster DBSCAN via subsampled similarity queries

In this paper, we propose a simple variant called SNG-DBSCAN, which clusters based on a subsampled $\epsilon$-neighborhood graph, only requires access to similarity queries for pairs of points, and in particular avoids any complex data structures which need the embeddings of the data points themselves.
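
A compact sketch of the sampling idea (core-point filtering is omitted here, so noise points become singleton clusters; the sampling rate is illustrative):

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def sng_dbscan(X, eps, sample_rate=0.01, rng=None):
        # Sample a fraction of all point pairs, keep edges shorter than eps,
        # then cluster as connected components of the sparse graph.
        rng = np.random.default_rng(rng)
        n = len(X)
        m = int(sample_rate * n * (n - 1) / 2)
        i = rng.integers(0, n, size=m)
        j = rng.integers(0, n, size=m)
        keep = (i != j) & (np.linalg.norm(X[i] - X[j], axis=1) <= eps)
        graph = csr_matrix((np.ones(keep.sum()), (i[keep], j[keep])), shape=(n, n))
        _, labels = connected_components(graph, directed=False)
        return labels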

1881

De-Anonymizing Text by Fingerprinting Language Generation

We initiate the study of code security of ML systems by investigating how nucleus sampling—a popular approach for generating text, used for applications such as auto-completion—unwittingly leaks texts typed by users.

1882

Multiparameter Persistence Image for Topological Machine Learning

We introduce a new descriptor for multiparameter persistence, which we call the Multiparameter Persistence Image, that is suitable for machine learning and statistical frameworks, is robust to perturbations in the data, has finer resolution than existing descriptors based on slicing, and can be efficiently computed on data sets of realistic size.

1883

PLANS: Neuro-Symbolic Program Learning from Videos

We introduce PLANS (Program LeArning from Neurally inferred Specifications), a hybrid model for program synthesis from visual observations that gets the best of both worlds, relying on (i) a neural architecture trained to extract abstract, high-level information from each raw individual input, and (ii) a rule-based system using the extracted information as I/O specifications to synthesize a program capturing the different observations.

1884

Matrix Inference and Estimation in Multi-Layer Models

We consider the problem of estimating the input and hidden variables of a stochastic multi-layer neural network from an observation of the output.

1885

MeshSDF: Differentiable Iso-Surface Extraction

Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples with respect to the underlying deep implicit field.

1886

Variational Interaction Information Maximization for Cross-domain Disentanglement

We derive a tractable bound of the objective and propose a generative model named Interaction Information Auto-Encoder (IIAE).

1887

Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning

We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.

1888

Faithful Embeddings for Knowledge Base Queries

We address this problem with a novel QE method that is more faithful to deductive reasoning, and show that this leads to better performance on complex queries to incomplete KBs.

1889

Wasserstein Distances for Stereo Disparity Estimation

We address these issues using a new neural network architecture that is capable of outputting arbitrary depth values, and a new loss function that is derived from the Wasserstein distance between the true and the predicted distributions.

1890

Multi-agent Trajectory Prediction with Fuzzy Query Attention

Specifically, we propose a relational model to flexibly model interactions between agents in diverse environments.

1891

Multilabel Classification by Hierarchical Partitioning and Data-dependent Grouping

In this paper we exploit the sparsity of label vectors and the hierarchical structure to embed them in low-dimensional space using label groupings.

1892

An Analysis of SVD for Deep Rotation Estimation

We present a theoretical analysis of SVD as used for projection onto the rotation group.

1893

Can the Brain Do Backpropagation? — Exact Implementation of Backpropagation in Predictive Coding Networks

We propose a BL model that (1) produces \emph{exactly the same} updates of the neural weights as BP, while (2) employing local plasticity, i.e., all neurons perform only local computations, done simultaneously.

1894

Manifold GPLVMs for discovering non-Euclidean latent structure in neural data

Here, we propose a new probabilistic latent variable model to simultaneously identify the latent state and the way each neuron contributes to its representation in an unsupervised way.

1895

Distributed Distillation for On-Device Learning

To overcome these limitations, we introduce a distributed distillation algorithm where devices communicate and learn from soft-decision (softmax) outputs, which are inherently architecture-agnostic and scale only with the number of classes.

1896

COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning

In this paper, we propose a Cooperative hierarchical Transformer (COOT) to leverage this hierarchy information and model the interactions between different levels of granularity and different modalities.

1897

Passport-aware Normalization for Deep Model Protection

To this end, we propose a new passport-aware normalization formulation, which is generally applicable to most existing normalization layers and only needs to add another passport-aware branch for IP protection.

1898

Sampling-Decomposable Generative Adversarial Recommender

Based on these findings, we propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR).

1899

Limits to Depth Efficiencies of Self-Attention

In this paper, we theoretically study the interplay between depth and width in self-attention.


  


If you have any suggestions for improvement of the content of the article,
please contact the AI-SCHOLAR editorial team through the contact form.
