Aim πŸ’« β€” easy-to-use and performant open-source ML experiment tracker.


An easy-to-use & supercharged open-source experiment tracker

Aim logs your training runs, enables a beautiful UI to compare them and an API to query them programmatically.

About β€’ Features β€’ Demos β€’ Examples β€’ Quick Start β€’ Documentation β€’ Roadmap β€’ Slack Community β€’ Twitter


Integrates seamlessly with your favorite tools



About Aim

  • Track and version ML runs
  • Visualize runs via a beautiful UI
  • Query runs metadata via the SDK

Aim is an open-source, self-hosted ML experiment tracking tool. It handles thousands of training runs and lets you compare them through a performant and beautiful UI.

You can use not only the great Aim UI but also its SDK to query your runs' metadata programmatically. That's especially useful for automations and for additional analysis in a Jupyter notebook.

Aim's mission is to democratize AI dev tools.

Why use Aim?

Compare 100s of runs in a few clicks - build models faster

  • Compare, group and aggregate 100s of metrics thanks to effective visualizations.
  • Analyze and learn correlations and patterns between hparams and metrics.
  • Easy pythonic search to query the runs you want to explore (see the query sketch below).
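
To give a sense of what that pythonic search looks like, here is a minimal sketch. It assumes runs logged their parameters under run["hparams"] (as in the Quick Start below) and a metric named 'loss'; adjust the attribute names to whatever you actually tracked.

from aim import Repo

repo = Repo('/path/to/aim/repo')

# The same expression works in the UI search bar and in the SDK.
query = 'metric.name == "loss" and run.hparams.batch_size == 32'

for run_metrics in repo.query_metrics(query).iter_runs():
    for metric in run_metrics:
        print(metric.run.hash, metric.name)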

Deep dive into details of each run for easy debugging

  • Hyperparameters, metrics, images, distributions, audio, text - all available at hand on an intuitive UI to understand the performance of your model.
  • Easily track plots built via your favourite visualisation tools, like plotly and matplotlib.
  • Analyze system resource usage to effectively utilize computational resources.

Have all relevant information organised and accessible for easy governance

  • Centralized dashboard to holistically view all your runs, their hparams and results.
  • Use SDK to query/access all your runs and tracked metadata.
  • You own your data - Aim is open source and self hosted.

Demos

  • Machine translation: training logs of a neural translation model (from the WMT'19 competition).
  • lightweight-GAN: training logs of the 'lightweight' GAN proposed at ICLR 2021.
  • FastSpeech 2: training logs of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech".
  • Simple MNIST: simple MNIST training logs.

Quick Start

Follow the steps below to get started with Aim.

1. Install Aim on your training environment

pip3 install aim

2. Integrate Aim with your code

from aim import Run

# Initialize a new run
run = Run()

# Log run parameters
run["hparams"] = {
    "learning_rate": 0.001,
    "batch_size": 32,
}

# Log metrics
for i in range(10):
    run.track(i, name='loss', step=i, context={ "subset":"train" })
    run.track(i, name='acc', step=i, context={ "subset":"train" })

See the full list of supported trackable objects (e.g. images, text, etc.) here.
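
As a quick illustration, below is a minimal sketch of tracking non-scalar objects with aim.Image and aim.Text; pil_image is a placeholder for a PIL image (numpy arrays are also accepted).

from aim import Run, Image, Text

run = Run()

# Track an image; `pil_image` is a placeholder for a PIL.Image (or a numpy array).
run.track(Image(pil_image, caption='sample prediction'), name='predictions', step=0)

# Track a text sample, e.g. a generated sentence.
run.track(Text('the quick brown fox'), name='samples', step=0)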

3. Run the training as usual and start Aim UI

aim up
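
By default aim up serves the Aim repo in the current directory. If your repo lives elsewhere, or you want a different host/port, the CLI accepts options such as the following (check aim up --help for your version):

# Point the UI at a specific repo and bind it to a custom host/port
aim up --repo /path/to/aim/repo --host 0.0.0.0 --port 43800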

4. Or query runs programmatically via SDK

from aim import Repo

my_repo = Repo('/path/to/aim/repo')

query = "metric.name == 'loss'" # Example query

# Get collection of metrics
for run_metrics_collection in my_repo.query_metrics(query).iter_runs():
    for metric in run_metrics_collection:
        # Get run params
        params = metric.run[...]
        # Get metric values
        steps, metric_values = metric.values.sparse_numpy()
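
The arrays returned by sparse_numpy() drop straight into the usual analysis stack. Below is a minimal sketch that collects the queried values into a pandas DataFrame; pandas is an assumption of this snippet, not something Aim requires.

import pandas as pd

# Collect (step, value) pairs per run into a tidy DataFrame for further analysis
rows = []
for run_metrics_collection in my_repo.query_metrics(query).iter_runs():
    for metric in run_metrics_collection:
        steps, metric_values = metric.values.sparse_numpy()
        for step, value in zip(steps, metric_values):
            rows.append({'run': metric.run.hash, 'step': step, 'loss': value})

df = pd.DataFrame(rows)
print(df.groupby('run')['loss'].min())  # best loss per run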

Integrations

Integrate PyTorch Lightning
from aim.pytorch_lightning import AimLogger

# ...
trainer = pl.Trainer(logger=AimLogger(experiment='experiment_name'))
# ...

See documentation here.

Integrate Hugging Face
from aim.hugging_face import AimCallback

# ...
aim_callback = AimCallback(repo='/path/to/logs/dir', experiment='mnli')
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    callbacks=[aim_callback],
    # ...
)
# ...

See documentation here.

Integrate Keras & tf.keras
import aim

# ...
model.fit(x_train, y_train, epochs=epochs, callbacks=[
    aim.keras.AimCallback(repo='/path/to/logs/dir', experiment='experiment_name')
    # Use aim.tensorflow.AimCallback instead in case of tf.keras:
    # aim.tensorflow.AimCallback(repo='/path/to/logs/dir', experiment='experiment_name')
])
# ...

See documentation here.

Integrate KerasTuner
from aim.keras_tuner import AimCallback

# ...
tuner.search(
    train_ds,
    validation_data=test_ds,
    callbacks=[AimCallback(tuner=tuner, repo='.', experiment='keras_tuner_test')],
)
# ...

See documentation here.

Integrate XGBoost
from aim.xgboost import AimCallback

# ...
aim_callback = AimCallback(repo='/path/to/logs/dir', experiment='experiment_name')
bst = xgb.train(param, xg_train, num_round, watchlist, callbacks=[aim_callback])
# ...

See documentation here.

Integrate CatBoost
from aim.catboost import AimLogger

# ...
model.fit(train_data, train_labels, log_cout=AimLogger(loss_function='Logloss'), logging_level="Info")
# ...

See documentation here.

Integrate fastai
from aim.fastai import AimCallback

# ...
learn = cnn_learner(dls, resnet18, pretrained=True,
                    loss_func=CrossEntropyLossFlat(),
                    metrics=accuracy, model_dir="/tmp/model/",
                    cbs=AimCallback(repo='.', experiment='fastai_test'))
# ...

See documentation here.

Integrate LightGBM
from aim.lightgbm import AimCallback

# ...
aim_callback = AimCallback(experiment='lgb_test')
aim_callback.experiment['hparams'] = params

gbm = lgb.train(params,
                lgb_train,
                num_boost_round=20,
                valid_sets=lgb_eval,
                callbacks=[aim_callback, lgb.early_stopping(stopping_rounds=5)])
# ...

See documentation here.

Integrate PyTorch Ignite
from aim.pytorch_ignite import AimLogger

# ...
aim_logger = AimLogger()

aim_logger.log_params({
    "model": model.__class__.__name__,
    "pytorch_version": str(torch.__version__),
    "ignite_version": str(ignite.__version__),
})

aim_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    tag="train",
    output_transform=lambda loss: {'loss': loss}
)
# ...

See documentation here.

Comparisons to familiar tools

Tensorboard

Training run comparison

Order of magnitude faster training run comparison with Aim

  • Tracked params are first-class citizens in Aim. You can search, group and aggregate via params and deeply explore all the tracked data (metrics, params, images) on the UI.
  • With TensorBoard, users are forced to encode those parameters in the training run name to be able to search and compare, which makes comparison tedious and causes usability issues on the UI when there are many experiments and params. TensorBoard also has no features to group or aggregate metrics.

Scalability

  • Aim is built to handle 1000s of training runs - both on the backend and on the UI.
  • TensorBoard becomes really slow and hard to use when a few hundred training runs are queried or compared.

Beloved TensorBoard visualizations to be added to Aim

  • Embedding projector.
  • Neural network visualization.

MLFlow

MLflow is an end-to-end ML lifecycle tool, while Aim is focused on training run tracking. The main differences between Aim and MLflow are around UI scalability and run comparison features.

Run comparison

  • Aim treats tracked parameters as first-class citizens. Users can query runs, metrics, images and filter using the params.
  • MLflow does have search by tracked config, but no grouping, aggregation, subplotting by hyperparams or other comparison features are available.

UI Scalability

  • The Aim UI can smoothly handle several thousand metrics at a time, each with thousands of steps. It may get shaky when you explore thousands of metrics with tens of thousands of steps each, but we are constantly optimizing!
  • The MLflow UI becomes slow to use with more than a few hundred runs.

Weights and Biases

Hosted vs self-hosted

  • Weights and Biases is a hosted closed-source MLOps platform.
  • Aim is a self-hosted, free and open-source experiment tracking tool.

Roadmap

Detailed Sprints

❇️ The Aim product roadmap

  • The Backlog contains the issues we are going to choose from and prioritize weekly
  • The issues are prioritized mainly by the most highly-requested features

High-level roadmap

The high-level features we are going to work on over the next few months:

Done

  • Live updates (Shipped: Oct 18 2021)
  • Images tracking and visualization (Start: Oct 18 2021, Shipped: Nov 19 2021)
  • Distributions tracking and visualization (Start: Nov 10 2021, Shipped: Dec 3 2021)
  • Jupyter integration (Start: Nov 18 2021, Shipped: Dec 3 2021)
  • Audio tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
  • Transcripts tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
  • Plotly integration (Start: Dec 1 2021, Shipped: Dec 17 2021)
  • Colab integration (Start: Nov 18 2021, Shipped: Dec 17 2021)
  • Centralized tracking server (Start: Oct 18 2021, Shipped: Jan 22 2022)
  • Tensorboard adaptor - visualize TensorBoard logs with Aim (Start: Dec 17 2021, Shipped: Feb 3 2022)
  • Track git info, env vars, CLI arguments, dependencies (Start: Jan 17 2022, Shipped: Feb 3 2022)
  • MLFlow adaptor (visualize MLflow logs with Aim) (Start: Feb 14 2022, Shipped: Feb 22 2022)
  • Activeloop Hub integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
  • PyTorch-Ignite integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
  • Run summary and overview info (system params, CLI args, git info, ...) (Start: Feb 14 2022, Shipped: Mar 9 2022)
  • Add DVC related metadata into aim run (Start: Mar 7 2022, Shipped: Mar 26 2022)
  • Ability to attach notes to Run from UI (Start: Mar 7 2022, Shipped: Apr 29 2022)
  • Fairseq integration (Start: Mar 27 2022, Shipped: Mar 29 2022)
  • LightGBM integration (Start: Apr 14 2022, Shipped: May 17 2022)
  • CatBoost integration (Start: Apr 20 2022, Shipped: May 17 2022)
  • Run execution details (display stdout/stderr logs) (Start: Apr 25 2022, Shipped: May 17 2022)
  • Long sequences (up to 5M steps) support (Start: Apr 25 2022, Shipped: Jun 22 2022)
  • Figures Explorer (Start: Mar 1 2022, Shipped: Aug 21 2022)
  • Notify on stuck runs (Start: Jul 22 2022, Shipped: Aug 21 2022)
  • Integration with KerasTuner (Start: Aug 10 2022, Shipped: Aug 21 2022)
  • Integration with WandB (Start: Aug 15 2022, Shipped: Aug 21 2022)
  • Stable remote tracking server (Start: Jun 15 2022, Shipped: Aug 21 2022)
  • Integration with fast.ai (Start: Aug 22 2022, Shipped: Oct 6 2022)
  • Integration with MXNet (Start: Sep 20 2022, Shipped: Oct 6 2022)
  • Project overview page (Start: Sep 1 2022, Shipped: Oct 6 2022)

In Progress

  • Remote tracking server scaling (Start: Sep 1 2022)
  • Aim SDK low-level interface (Start: Aug 22 2022)

To Do

Aim UI

  • Runs management
    • Runs explorer – query and visualize runs data (images, audio, distributions, ...) in a central dashboard
  • Explorers
    • Audio Explorer
    • Text Explorer
    • Distributions Explorer
  • Dashboards – customizable layouts with embedded explorers

SDK and Storage

  • Scalability
    • Smooth UI and SDK experience with over 10,000 runs
  • Runs management
    • CLI interfaces
      • Reporting - runs summary and run details in a CLI-compatible format
      • Manipulations – copy, move, delete runs, params and sequences

Integrations

  • ML Frameworks:
    • Shortlist: MONAI, SpaCy, Raytune, PaddlePaddle
  • Datasets versioning tools
    • Shortlist: HuggingFace Datasets
  • Resource management tools
    • Shortlist: Kubeflow, Slurm
  • Workflow orchestration tools
  • Others: Hydra, Google MLMD, Streamlit, ...

On hold

  • scikit-learn integration
  • Cloud storage support – store runs' blob data (e.g. images) on the cloud (Start: Mar 21 2022)
  • Artifact storage – store files, model checkpoints, and beyond (Start: Mar 21 2022)

Community

If you have questions

  1. Read the docs
  2. Open a feature request or report a bug
  3. Join our Slack
