Add the new tutorials to the tutorials page and add relevant info to other docs (#596)

Summary: Pull Request resolved: #596

Reviewed By: Balandat

Differential Revision: D29141303

fbshipit-source-id: 2f0be426905d3d40efff4335e8a73e3bda8e8336
lena-kashtelyan authored and facebook-github-bot committed Jun 15, 2021
1 parent 13960e8 commit fd7d39c
Showing 4 changed files with 106 additions and 32 deletions.
30 changes: 26 additions & 4 deletions docs/api.md
@@ -2,10 +2,12 @@
id: api
title: APIs
---
The modular design of Ax enables three different usage modes, with different balances of structure to flexibility and reproducibility. Navigate to the ["Tutorials" page](/tutorials) for in-depth walkthroughs of each API and usage mode. From most lightweight to fullest functionality, they are:
- **Loop API** ([tutorial](/tutorials/gpei_hartmann_loop.html)) is intended for synchronous optimization loops, where [trials](glossary.md#trial) can be evaluated right away. With this API, optimization can be executed in a single call, and [experiment](glossary.md#experiment) introspection is available once optimization is complete. **Use this API only for the simplest use cases, where running a single trial is fast and only one trial should be running at a time.**
- **Service API** ([tutorial](/tutorials/gpei_hartmann_service.html)) can be used as a lightweight service for parameter-tuning applications where trials might be evaluated in parallel and data is available asynchronously (e.g. hyperparameter or simulation optimization). It requires little to no knowledge of Ax data structures and easily integrates with various schedulers. In this mode, Ax suggests one-[arm](glossary.md#arm) trials to be evaluated by the client application and expects them to be completed with [metric](glossary.md#metric) data when available. **This is our most popular API and a good place to start as a new user. Use it to leverage nearly the full hyperparameter-optimization functionality of Ax without needing to learn its architecture or how things work under the hood.**
- In both the Loop and the Service API, it is possible to configure the optimization algorithm via an Ax `GenerationStrategy` ([tutorial](/tutorials/generation_strategy.html)), so use of the Developer API is not required to control the optimization algorithm in Ax.
- **Developer API** ([tutorial](/tutorials/gpei_hartmann_developer.html)) is for ad-hoc use by data scientists, machine learning engineers, and researchers. The Developer API allows for a great deal of customization and introspection, and is recommended for those who plan to use Ax to optimize A/B tests. Using the Developer API requires some knowledge of [Ax architecture](core.md). **Use this API if you are looking to run field experiments with `BatchTrial`-s, customize or contribute to Ax, or leverage advanced functionality that is not exposed in other APIs.**
- While not an API, the **`Scheduler`** ([tutorial](/tutorials/scheduler.html)) is an important and distinct use case of the Ax Developer API. With the `Scheduler`, it is possible to run a configurable, managed closed-loop optimization, in which trials are deployed and polled asynchronously and no human intervention or oversight is required until the experiment is complete. **Use the `Scheduler` when you are looking to configure and start a full experiment that will need to interact with an external system to evaluate trials.**

Here is a comparison of the three APIs in the simple case of evaluating the unconstrained synthetic Branin function:

@@ -101,4 +103,24 @@
for i in range(15):
best_parameters = best_arm.parameters
```

<!--Scheduler-->
```py
from ax import Experiment
from ax.modelbridge.generation_strategy import GenerationStrategy
from ax.service.scheduler import Scheduler, SchedulerOptions

# Full `Experiment` and `GenerationStrategy` instantiation
# omitted for brevity, refer to the "Tutorials" page for detail.
experiment = Experiment(...)
generation_strategy = GenerationStrategy(...)

scheduler = Scheduler(
experiment=experiment,
generation_strategy=generation_strategy,
options=SchedulerOptions(), # Configurations for how to run the experiment
)

scheduler.run_n_trials(100) # Automate running 100 trials and reporting results
```

<!--END_DOCUSAURUS_CODE_TABS-->
5 changes: 5 additions & 0 deletions docs/glossary.md
@@ -15,6 +15,8 @@
Sequential optimization strategy for finding an optimal [arm](glossary.md#arm) i
Function that takes a parameterization and an optional weight as input and outputs a set of metric evaluations ([more details](trial-evaluation.md#evaluation-function)). Used in [simple experiment](glossary.md#simple-experiment) and in the [Loop API](api.md).
### Experiment
Object that keeps track of the whole optimization process. Contains a [search space](glossary.md#search-space), [optimization config](glossary.md#optimization-config), and other metadata. [```[Experiment]```](/api/core.html#module-ax.core.experiment)
### Generation strategy
Abstraction that allows declaratively specifying one or multiple models to use in the course of the optimization, automating transitions between them (relevant [tutorial](/tutorials/generation_strategy.html)). [```[GenerationStrategy]```](/api/modelbridge.html#module-ax.modelbridge.generation_strategy)
### Generator run
Outcome of a single run of the `gen` method of a [model bridge](glossary.md#model-bridge), contains the generated [arms](glossary.md#arm), as well as possibly best [arm](glossary.md#arm) predictions, other [model](glossary.md#model) predictions, fit times etc. [```[GeneratorRun]```](/api/core.html#module-ax.core.generator_run)
### Metric
@@ -37,6 +39,9 @@
Places restrictions on the relationships between [parameters](glossary.md#parame
[Outcome constraint](glossary.md#outcome-constraint) evaluated relative to the [status quo](glossary.md#status-quo) instead of directly on the metric value. [```[OutcomeConstraint]```](/api/core.html#module-ax.core.outcome_constraint)
### Runner
Dispatch abstraction that defines how a given [trial](glossary.md#trial) is to be run (either locally or by dispatching to an external system). [```[Runner]```](/api/core.html#module-ax.core.runner)
### Scheduler
Configurable closed-loop optimization manager class, capable of conducting a full experiment by deploying trials, polling their results, and leveraging those results to generate and deploy more trials (relevant [tutorial](/tutorials/scheduler.html)). [```[Scheduler]```](https://ax.dev/versions/latest/api/service.html#module-ax.service.scheduler)
### Search space
Continuous, discrete or mixed design space that defines the set of [parameters](glossary.md#parameter) to be tuned in the optimization, and optionally [parameter constraints](glossary.md#parameter-constraint) on these parameters. The parameters of the [arms](glossary.md#arm) to be evaluated in the optimization are drawn from a search space. [```[SearchSpace]```](/api/core.html#module-ax.core.search_space)
### SEM
71 changes: 53 additions & 18 deletions website/pages/tutorials/index.js
@@ -32,15 +32,64 @@
class TutorialHome extends React.Component {
examples.
</p>
<p>
<b>Our 3 API tutorials:</b>&nbsp;
<a href="gpei_hartmann_loop.html">Loop</a>,&nbsp;
<a href="gpei_hartmann_service.html">Service</a>, and&nbsp;
<a href="gpei_hartmann_developer.html">Developer</a> &mdash; are a
good place to start. Each tutorial showcases optimization on a
constrained Hartmann6 problem, with the Loop API being the
simplest to use and the Developer API being the most customizable.
</p>
<p>
<b>Further, we explore the different components available in Ax in
more detail.</b> The components explored below serve to set up an
experiment, visualize its results, configure an optimization
algorithm, run an entire experiment in a managed closed loop, and
combine BoTorch components in Ax in a modular way.
</p>
<ul>
<li>
<a href="building_blocks.html">Building Blocks of Ax</a>&nbsp;
examines the architecture of Ax and the
experimentation/optimization process.
</li>
</ul>
<ul>
<li>
<a href="visualizations.html">Visualizations</a>&nbsp;
illustrates the different plots available to view and understand
your results.
</li>
</ul>
<ul>
<li>
<a href="generation_strategy.html">GenerationStrategy</a>&nbsp;
steps through setting up a <code>GenerationStrategy</code> as a way
to specify the algorithm (or multiple algorithms) through which
points will be suggested during the optimization. A
<code>GenerationStrategy</code> is an important component of the
Service API and the <code>Scheduler</code>.
</li>
</ul>
<ul>
<li>
<a href="scheduler.html">Scheduler</a>&nbsp; demonstrates an
example of a managed and configurable closed-loop optimization,
conducted in an asynchronous fashion. <code>Scheduler</code> is a
manager abstraction in Ax that deploys trials, polls them, and
uses their results to produce more trials.
</li>
</ul>
<ul>
<li>
<a href="modular_botax.html">Modular <code>BoTorchModel</code></a>
&nbsp; walks through a new beta feature &mdash; an improved
interface between Ax and <a href="https://botorch.org">BoTorch</a>
&mdash; which allows combining arbitrary BoTorch components like
<code>AcquisitionFunction</code>, <code>Model</code>, and
<code>AcquisitionObjective</code> into a single
<code>Model</code> in Ax.
</li>
</ul>
<p>Our other Bayesian Optimization tutorials include:</p>
<ul>
<li>
<a href="tune_cnn.html">
@@ -55,8 +104,8 @@
<a href="raytune_pytorch_cnn.html">
Hyperparameter Optimization via Raytune
</a>
&nbsp; provides an example of parallelized hyperparameter
optimization using Ax + Raytune.
</li>
</ul>
<ul>
@@ -104,20 +153,6 @@
optimization in real-time.
</li>
</ul>
</div>
</Container>
</div>
32 changes: 22 additions & 10 deletions website/tutorials.json
@@ -13,6 +13,28 @@
"title": "Developer API"
}
],
"Deep Dives": [
{
"id": "building_blocks",
"title": "Building Blocks of Ax"
},
{
"id": "visualizations",
"title": "Visualizations"
},
{
"id": "generation_strategy",
"title": "Generation Strategy"
},
{
"id": "scheduler",
"title": "Scheduler"
},
{
"id": "modular_botax",
"title": "Modular `BoTorchModel`"
}
],
"Bayesian Optimization": [
{
"id": "tune_cnn",
@@ -41,15 +63,5 @@
"id": "human_in_the_loop",
"title": "Human-in-the-Loop Optimization"
}
]
}
