diff --git a/docs/api.md b/docs/api.md index 56a27aa7f22..3c2b2c17193 100644 --- a/docs/api.md +++ b/docs/api.md @@ -2,10 +2,12 @@ id: api title: APIs --- -The modular design of Ax enables three different usage modes, with different balances of structure to flexibility and reproducibility. From most lightweight to fullest functionality, they are: - - **Loop API** is intended for synchronous optimization loops, where [trials](glossary.md#trial) can be evaluated right away. With this API, optimization can be executed in a single call and [experiment](glossary.md#experiment) introspection is available once optimization is complete. - - **Service API** can be used as a lightweight service for parameter-tuning applications where trials might be evaluated in parallel and data is available asynchronously (e.g. hyperparameter or simulation optimization). It requires little to no knowledge of Ax data structures and easily integrates with various schedulers. In this mode, Ax suggests one-[arm](glossary.md#arm) trials to be evaluated by the client application, and expects them to be completed with [metric](glossary.md#metric) data when available. - - **Developer API** is for ad-hoc use by data scientists, machine learning engineers, and researchers. The developer API allows for a great deal of customization and introspection, and is recommended for those who plan to use Ax to optimize A/B tests. Using the developer API requires some knowledge of [Ax architecture](core.md). +The modular design of Ax enables three different usage modes, with different balances of structure, flexibility, and reproducibility. Navigate to the ["Tutorials" page](/tutorials) for in-depth walk-throughs of each API and usage mode. From most lightweight to fullest functionality, they are: + - **Loop API** ([tutorial](/tutorials/gpei_hartmann_loop.html)) is intended for synchronous optimization loops, where [trials](glossary.md#trial) can be evaluated right away. With this API, optimization can be executed in a single call and [experiment](glossary.md#experiment) introspection is available once optimization is complete. **Use this API only for the simplest use cases, where running a single trial is fast and only one trial should be running at a time.** + - **Service API** ([tutorial](/tutorials/gpei_hartmann_service.html)) can be used as a lightweight service for parameter-tuning applications where trials might be evaluated in parallel and data is available asynchronously (e.g. hyperparameter or simulation optimization). It requires little to no knowledge of Ax data structures and easily integrates with various schedulers. In this mode, Ax suggests one-[arm](glossary.md#arm) trials to be evaluated by the client application, and expects them to be completed with [metric](glossary.md#metric) data when available. **This is our most popular API and a good place to start as a new user. Use it to leverage nearly the full hyperparameter optimization functionality of Ax without needing to learn its architecture or how things work under the hood.** + - In both the Loop and the Service API, it is possible to configure the optimization algorithm via an Ax `GenerationStrategy` ([tutorial](/tutorials/generation_strategy.html)), so use of the Developer API is not required to control the optimization algorithm in Ax (a brief sketch follows the API comparison below). + - **Developer API** ([tutorial](/tutorials/gpei_hartmann_developer.html)) is for ad-hoc use by data scientists, machine learning engineers, and researchers.
The developer API allows for a great deal of customization and introspection, and is recommended for those who plan to use Ax to optimize A/B tests. Using the developer API requires some knowledge of [Ax architecture](core.md). **Use this API if you are looking to perform field experiments with `BatchTrial`-s, customize or contribute to Ax, or leverage advanced functionality that is not exposed in other APIs.** + - While not an API, the **`Scheduler`** ([tutorial](/tutorials/scheduler.html)) is an important and distinct use case of the Ax Developer API. With the `Scheduler`, it's possible to run a configurable, managed closed-loop optimization where trials are deployed and polled asynchronously, and no human intervention or oversight is required until the experiment is complete. **Use the `Scheduler` when you are looking to configure and start a full experiment that will need to interact with an external system to evaluate trials.** Here is a comparison of the three APIs in the simple case of evaluating the unconstrained synthetic Branin function: @@ -101,4 +103,24 @@ for i in range(15): best_parameters = best_arm.parameters ``` + +```py +from ax import * +from ax.modelbridge.generation_strategy import GenerationStrategy +from ax.service.scheduler import Scheduler, SchedulerOptions + +# Full `Experiment` and `GenerationStrategy` instantiation +# omitted for brevity, refer to the "Tutorials" page for details. +experiment = Experiment(...) +generation_strategy = GenerationStrategy(...) + +scheduler = Scheduler( + experiment=experiment, + generation_strategy=generation_strategy, + options=SchedulerOptions(),  # Configurations for how to run the experiment +) + +scheduler.run_n_trials(100)  # Automate running 100 trials and reporting results +``` +
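As a rough, minimal sketch of the `GenerationStrategy` configuration mentioned in the API list above (assuming the Service API's `AxClient` and the `Models` registry; exact step arguments may vary between Ax versions):

```py
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient

# Declaratively specify which models to use and when to switch between them:
# a few quasi-random Sobol trials to seed the search, then Bayesian
# optimization (GP + Expected Improvement) for the remaining trials.
generation_strategy = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=5),
        GenerationStep(model=Models.GPEI, num_trials=-1),  # -1: no trial limit
    ]
)

# The same strategy can be handed to the Service API instead of letting Ax
# pick a default; `ax_client.create_experiment(...)` and the usual
# suggest/complete loop would follow.
ax_client = AxClient(generation_strategy=generation_strategy)
```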
diff --git a/docs/glossary.md b/docs/glossary.md index 34241cd8852..b8feca43d2e 100644 --- a/docs/glossary.md +++ b/docs/glossary.md @@ -15,6 +15,8 @@ Sequential optimization strategy for finding an optimal [arm](glossary.md#arm) i Function that takes a parameterization and an optional weight as input and outputs a set of metric evaluations ([more details](trial-evaluation.md#evaluation-function)). Used in [simple experiment](glossary.md#simple-experiment) and in the [Loop API](api.md). ### Experiment Object that keeps track of the whole optimization process. Contains a [search space](glossary.md#search-space), [optimization config](glossary.md#optimization-config), and other metadata. [```[Experiment]```](/api/core.html#module-ax.core.experiment) +### Generation strategy +Abstraction that allows one to declaratively specify one or multiple models to use in the course of the optimization and to automate the transitions between them (relevant [tutorial](/tutorials/generation_strategy.html)). [```[GenerationStrategy]```](/api/modelbridge.html#module-ax.modelbridge.generation_strategy) ### Generator run Outcome of a single run of the `gen` method of a [model bridge](glossary.md#model-bridge), contains the generated [arms](glossary.md#arm), as well as possibly best [arm](glossary.md#arm) predictions, other [model](glossary.md#model) predictions, fit times etc. [```[GeneratorRun]```](/api/core.html#module-ax.core.generator_run) ### Metric @@ -37,6 +39,9 @@ Places restrictions on the relationships between [parameters](glossary.md#parame [Outcome constraint](glossary.md#outcome-constraint) evaluated relative to the [status quo](glossary.md#status-quo) instead of directly on the metric value. [```[OutcomeConstraint]```](/api/core.html#module-ax.core.outcome_constraint) ### Runner Dispatch abstraction that defines how a given [trial](glossary.md#trial) is to be run (either locally or by dispatching to an external system). [````[Runner]````](/api/core.html#module-ax.core.runner) +### Scheduler +Configurable closed-loop optimization manager class, capable of conducting a full experiment by deploying trials, polling their results, and leveraging those results to generate and deploy more +trials (relevant [tutorial](/tutorials/scheduler.html)). [````[Scheduler]````](/api/service.html#module-ax.service.scheduler) ### Search space Continuous, discrete or mixed design space that defines the set of [parameters](glossary.md#parameter) to be tuned in the optimization, and optionally [parameter constraints](glossary.md#parameter-constraint) on these parameters. The parameters of the [arms](glossary.md#arm) to be evaluated in the optimization are drawn from a search space. [```[SearchSpace]```](/api/core.html#module-ax.core.search_space) ### SEM diff --git a/website/pages/tutorials/index.js b/website/pages/tutorials/index.js index 968edfe7360..a124516ec5e 100644 --- a/website/pages/tutorials/index.js +++ b/website/pages/tutorials/index.js @@ -32,7 +32,7 @@ class TutorialHome extends React.Component { examples.

- Our 3 API tutorials:  + **Our 3 API tutorials:**  Loop, Service, and Developer — are a @@ -40,7 +40,56 @@ class TutorialHome extends React.Component { constrained Hartmann6 problem, with the Loop API being the simplest to use and the Developer API being the most customizable.

-

Further, our Bayesian Optimization tutorials include:

+

+ **Further, we explore the different components available in Ax in + more detail.** The components explored below serve to set up an + experiment, visualize its results, configure an optimization + algorithm, run an entire experiment in a managed closed loop, and + combine BoTorch components in Ax in a modular way. +

+ + + + + +

Our other Bayesian Optimization tutorials include:

-

- Finally, we explore the different components available in Ax in - more detail, both for setting up the experiment and visualizing - results. -

- - diff --git a/website/tutorials.json b/website/tutorials.json index 237dfc6783d..71d3cd9d8b4 100644 --- a/website/tutorials.json +++ b/website/tutorials.json @@ -13,6 +13,28 @@ "title": "Developer API" } ], + "Deep Dives": [ + { + "id": "building_blocks", + "title": "Building Blocks of Ax" + }, + { + "id": "visualizations", + "title": "Visualizations" + }, + { + "id": "generation_strategy", + "title": "Generation Strategy" + }, + { + "id": "scheduler", + "title": "Scheduler" + }, + { + "id": "modular_botax", + "title": "Modular `BoTorchModel`" + } + ], "Bayesian Optimization": [ { "id": "tune_cnn", @@ -41,15 +63,5 @@ "id": "human_in_the_loop", "title": "Human-in-the-Loop Optimization" } - ], - "Deep Dives": [ - { - "id": "building_blocks", - "title": "Building Blocks of Ax" - }, - { - "id": "visualizations", - "title": "Visualizations" - } ] }