Stars
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
A finite element framework for Python's scientific stack: arbitrary order planar/curvilinear mesh generation and finite element methods for linear and nonlinear analysis of coupled multiphysics pro…
This repository holds the code for the Python implementation of YOLOX-ViT. Furthermore, it has the implementation of the Knowledge Distillation (KD) method, evaluation metrics of the object detecto…
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
A collection of papers on transformers for detection and segmentation. Awesome Detection Transformer for Computer Vision (CV)
PaliGemma FineTuning
This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond!
PyTorch implementation of various Knowledge Distillation (KD) methods.
Grounding DINO with Segment Anything & Stable Diffusion colab
[CVPR 2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Segment Anything in Medical Images
A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
PyTorch code and models for the DINOv2 self-supervised learning method.
GPT4V-level open-source multi-modal model based on Llama3-8B
EVA Series: Visual Representation Fantasies from BAAI
Knowledge distillation for Chinese text classification with PyTorch; teacher models BERT and XLNet, student model biLSTM.
[CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers"
OpenMMLab Pre-training Toolbox and Benchmark
'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024)
General technology for enabling AI capabilities with LLMs and MLLMs
Official code for "Large Language Models Are Reasoning Teachers", ACL 2023
Awesome Knowledge-Distillation. A categorized collection of knowledge distillation papers (2014-2021).
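Several of the starred repositories above center on knowledge distillation, where a small student model is trained to match the temperature-softened output distribution of a larger teacher. As a minimal, dependency-free sketch (not taken from any repo listed here; function names are illustrative), the Hinton-style distillation term is the KL divergence between teacher and student softmax distributions at temperature T, scaled by T²:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives a softer distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

In practice this term is mixed with the ordinary cross-entropy on hard labels, e.g. `alpha * ce + (1 - alpha) * kd_loss(...)`; the repositories above explore many refinements (feature-level, ViT-specific, LLM-as-teacher) of this basic recipe.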