Adds abstracts to talks
hummat committed Oct 25, 2022
1 parent fa9b658 commit 93878b7
Showing 2 changed files with 27 additions and 7 deletions.
16 changes: 13 additions & 3 deletions _posts/2022-01-02-speakers.md
@@ -1,6 +1,6 @@
---
layout: default
-title: Invited Speakers
+title: Speakers
---

<table class="table-condensed">
@@ -22,7 +22,12 @@ title: Invited Speakers
</tr>
<tr>
<td>Talk: <b>Probabilistic and Deep Learning Approaches for Mobile Robots and Automated Driving</b></td>
-<td>Talk: <b>Stein methods for parallelized Bayesian inference in perception, state estimation and control</b></td>
+<td>Talk: <b>Stein methods for parallelized Bayesian inference in perception, state estimation and control</b>
+<details>
+<summary>Abstract: &crarr;</summary>
+<p>Uncertainty estimation is critical in all levels of robotics systems, from perception to control and sequential decision making. Bayesian inference provides a principled framework for reasoning about uncertainty but the computational cost of computing posteriors can make it impractical for deployment in robots. Fortunately, the recent availability of inexpensive, energy-efficient parallel computing hardware and differentiable programming languages has opened the possibility for the development of Bayesian inference algorithms that leverage parallelism and differentiability of both likelihood functions and priors to estimate complex posteriors. In this talk I will describe a powerful nonparametric inference method that uses both differentiability and parallelism to provide nonparametric posterior approximations in a timely manner. Stein Variational Gradient Descent and its generalizations can be used to formulate Bayesian extensions of common methods in robotics such as ICP for perception, particle filters for state estimation, and model predictive control for decision making. I will show that Stein inference scales better with the dimensionality of the data and can be implemented efficiently on GPUs. Finally, I will discuss extensions of Stein methods for sim2real and the automatic adaptation of simulators to reflect real observations.</p>
+</details>
</td>
<td>Talk: <b>Probabilistic Crowd Flows for Socially Aware Navigation</b></td>
</tr>
</tbody>
@@ -76,7 +81,12 @@ title: Invited Speakers
<tr>
<!--<td style="text-align: center; vertical-align: middle;">Talk: <b>TBD</b></td>-->
<td>Talk: <b>TBD</b></td>
-<td>Talk: <b>Plex: Towards Reliability using Pretrained Large Model Extensions</b></td>
+<td>Talk: <b>Plex: Towards Reliability using Pretrained Large Model Extensions</b>
+<details>
+<summary>Abstract: &crarr;</summary>
+<p markdown=1>A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. I will talk about our recent work exploring the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition, calibration under shift), robust generalization (e.g., accuracy and log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). Plex builds on our work on scalable building blocks for probabilistic deep learning such as Gaussian process last-layer and efficient variants of deep ensembles. We show that Plex improves the state-of-the-art across reliability tasks, and simplifies the traditional protocol as it improves the out-of-the-box performance and does not require designing scores or tuning the model for each task. [Paper](https://arxiv.org/abs/2207.07411), [Blog](https://ai.googleblog.com/2022/07/towards-reliability-in-deep-learning.html)</p>
+</details>
</td>
<td>Talk: <b>Versatile active learning via focused Bayesian optimization</b></td>
</tr>
</tbody>
18 changes: 14 additions & 4 deletions _site/index.html
@@ -61,7 +61,7 @@
</li>

<li class="page-scroll">
-<a href="#02" style="font-size: 12px !important;">Invited Speakers</a>
+<a href="#02" style="font-size: 12px !important;">Speakers</a>
</li>

<li class="page-scroll">
@@ -148,7 +148,7 @@ <h2 style="word-break: break-word;">Updates</h2>
<div class="container">
<div class="row">
<div class="col-lg-12 text-left">
-<h2 style="word-break: break-word;">Invited Speakers</h2>
+<h2 style="word-break: break-word;">Speakers</h2>
</div>
</div>
<div class="row">
@@ -172,7 +172,12 @@ <h2 style="word-break: break-word;">Invited Speakers</h2>
</tr>
<tr>
<td>Talk: <b>Probabilistic and Deep Learning Approaches for Mobile Robots and Automated Driving</b></td>
-<td>Talk: <b>Stein methods for parallelized Bayesian inference in perception, state estimation and control</b></td>
+<td>Talk: <b>Stein methods for parallelized Bayesian inference in perception, state estimation and control</b>
+<details>
+<summary>Abstract: &crarr;</summary>
+<p>Uncertainty estimation is critical in all levels of robotics systems, from perception to control and sequential decision making. Bayesian inference provides a principled framework for reasoning about uncertainty but the computational cost of computing posteriors can make it impractical for deployment in robots. Fortunately, the recent availability of inexpensive, energy-efficient parallel computing hardware and differentiable programming languages has opened the possibility for the development of Bayesian inference algorithms that leverage parallelism and differentiability of both likelihood functions and priors to estimate complex posteriors. In this talk I will describe a powerful nonparametric inference method that uses both differentiability and parallelism to provide nonparametric posterior approximations in a timely manner. Stein Variational Gradient Descent and its generalizations can be used to formulate Bayesian extensions of common methods in robotics such as ICP for perception, particle filters for state estimation, and model predictive control for decision making. I will show that Stein inference scales better with the dimensionality of the data and can be implemented efficiently on GPUs. Finally, I will discuss extensions of Stein methods for sim2real and the automatic adaptation of simulators to reflect real observations.</p>
+</details>
</td>
<td>Talk: <b>Probabilistic Crowd Flows for Socially Aware Navigation</b></td>
</tr>
</tbody>
@@ -226,7 +231,12 @@ <h2 style="word-break: break-word;">Invited Speakers</h2>
<tr>
<!--<td style="text-align: center; vertical-align: middle;">Talk: <b>TBD</b></td>-->
<td>Talk: <b>TBD</b></td>
-<td>Talk: <b>Plex: Towards Reliability using Pretrained Large Model Extensions</b></td>
+<td>Talk: <b>Plex: Towards Reliability using Pretrained Large Model Extensions</b>
+<details>
+<summary>Abstract: &crarr;</summary>
+<p>A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models’ abilities in diverse ways is therefore critical to the field. I will talk about our recent work exploring the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition, calibration under shift), robust generalization (e.g., accuracy and log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). Plex builds on our work on scalable building blocks for probabilistic deep learning such as Gaussian process last-layer and efficient variants of deep ensembles. We show that Plex improves the state-of-the-art across reliability tasks, and simplifies the traditional protocol as it improves the out-of-the-box performance and does not require designing scores or tuning the model for each task. <a href="https://arxiv.org/abs/2207.07411">Paper</a>, <a href="https://ai.googleblog.com/2022/07/towards-reliability-in-deep-learning.html">Blog</a></p>
+</details>
</td>
<td>Talk: <b>Versatile active learning via focused Bayesian optimization</b></td>
</tr>
</tbody>
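For readers unfamiliar with the method named in the Stein-methods abstract, Stein Variational Gradient Descent admits a compact illustration. Below is a minimal 1-D sketch; the RBF kernel, bandwidth `h`, step size `eps`, particle count, and standard-normal target are all illustrative choices of this sketch, not details from the talk:

```python
import numpy as np

def svgd_step(x, grad_logp, h=0.5, eps=0.1):
    """One Stein Variational Gradient Descent update for 1-D particles."""
    n = x.shape[0]
    diff = x[:, None] - x[None, :]   # diff[j, i] = x_j - x_i
    K = np.exp(-diff**2 / h)         # RBF kernel k(x_j, x_i)
    gradK = -2.0 * diff / h * K      # d k(x_j, x_i) / d x_j  (repulsive term)
    # phi_i = (1/n) sum_j [ k(x_j, x_i) * grad_logp(x_j) + d k(x_j, x_i) / d x_j ]
    phi = (K.T @ grad_logp(x) + gradK.sum(axis=0)) / n
    return x + eps * phi

# Target: standard normal, so grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=50)  # particles start far from the target
for _ in range(500):
    x = svgd_step(x, lambda x: -x)
print(round(x.mean(), 2), round(x.std(), 2))  # particles now approximate N(0, 1)
```

The kernel term pulls particles toward high-density regions of the posterior while the kernel-gradient term pushes them apart, so a finite particle set spreads to approximate the full posterior instead of collapsing to its mode; each particle's update is independent given the others, which is what makes the method amenable to the GPU parallelism the abstract emphasizes.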
