- Enlaps
- Nantes, France
- https://laassairiabdellah.com/
Stars
aequilibrae - Python package for transportation modeling
Dispatch and distribute your ML training to "serverless" clusters in Python, like PyTorch for ML infra. Iterable, debuggable, multi-cloud/on-prem, identical across research and production.
A fast image processing library with low memory needs.
Visualizer for neural network, deep learning and machine learning models
A Python package for segmenting geospatial data with the Segment Anything Model (SAM)
A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. 🎁 Train…
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Neural network acceleration on hardware such as ASICs, FPGAs, GPUs, and PIM
This was originally a collection of papers on neural network accelerators; now it is more my selection of research on deep learning and computer architecture.
A list of papers, docs, and code about model quantization. This repo aims to provide resources for model quantization research and is continuously improved. Pull requests adding works (p…
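As a conceptual illustration of the scale/zero-point scheme that most model-quantization work builds on, here is a minimal pure-Python sketch of affine (asymmetric) int8 quantization. This is an assumption-laden teaching example, not code from any repository listed here:

```python
# Minimal sketch of affine (asymmetric) quantization to 8-bit unsigned ints:
# x_q = clamp(round(x / scale) + zero_point), x ≈ (x_q - zero_point) * scale.
# Conceptual example only, not taken from any listed repository.

def quantize(values, num_bits=8):
    """Map a list of floats onto the integer grid [0, 2^num_bits - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against a constant tensor
    zero_point = round(qmin - lo / scale)     # integer that represents 0.0
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q_values, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(q - zero_point) * scale for q in q_values]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
# each recovered value is within one quantization step (scale) of the original
```

The reconstruction error is bounded by the step size `scale`, which is why narrower value ranges (e.g. per-channel instead of per-tensor) quantize more accurately.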
A curated list of awesome knowledge distillation papers and codes for object detection.
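To make the knowledge-distillation idea behind these papers concrete, here is a pure-Python sketch of the classic temperature-softened distillation loss (the Hinton-style KL objective); the function names and the temperature value are illustrative assumptions, not code from the curated list:

```python
import math

# Conceptual sketch of the classic knowledge-distillation loss: the student
# is trained to match the teacher's temperature-softened class distribution.
# Illustrative only, not code from any repository listed here.

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher T yields a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) over softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl  # T^2 keeps gradients comparable across T

teacher = [5.0, 1.0, -2.0]
# a student whose logits track the teacher's incurs a smaller loss
assert distillation_loss(teacher, [4.8, 1.1, -1.9]) < distillation_loss(teacher, [0.0, 0.0, 0.0])
```

In practice this soft term is combined with the ordinary cross-entropy on ground-truth labels; the detection-specific papers in the list above adapt the same matching idea to features and region proposals.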
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
[CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions
regislebrun / openturns
Forked from openturns/openturns. An uncertainty library in C++/Python.
Tensor Approximation Package: a Python package for the approximation of functions and tensors.
A detailed and tailored guide for undergraduate students or anybody who wants to dig deep into the field of AI with a solid foundation.
AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
Code release for paper "Incremental Learning of Object Detectors without Catastrophic Forgetting"
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
A quantitative performance comparison among DL compilers on CNN models.
Google Research
Open-source code for VIPER -- Volume Invariant Position-based Elastic Rods