Stars
🔊 Text-Prompted Generative Audio Model
A guidance language for controlling large language models.
A multi-voice TTS system trained with an emphasis on quality
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A.
Anthropic's educational courses
Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output. Join our Discord for prompt engineering, LLMs, and the latest research
Efficient few-shot learning with Sentence Transformers
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
New ways of breaking app-integrated LLMs
Democratizing Internet-scale financial data.
LUKE -- Language Understanding with Knowledge-based Embeddings
A minimal PyTorch implementation of probabilistic diffusion models for 2D datasets.
DiffusionFastForward: a free course and experimental framework for diffusion-based generative models
PyTorch implementation of VQ-VAE by Aäron van den Oord et al.
minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.
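The core idea behind a library like this can be sketched in a few lines: freeze a layer's original weights and add a trainable low-rank update alongside it. This is a minimal illustration of the LoRA technique itself, not minLoRA's actual API; the class name and hyperparameter defaults are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update: W x + (B A x) * scale."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is small-random, B is zero, so the update starts as a no-op.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only A and B receive gradients; the base layer stays fixed.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

Because B is zero-initialized, the wrapped layer initially computes exactly the same function as the base layer, so fine-tuning starts from the pretrained behavior.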
Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214)
Scaling Data-Constrained Language Models
Official code for "Large Language Models Are Reasoning Teachers", ACL 2023
Multilingual G2P in 100 languages
KoLLaVA: Korean Large Language-and-Vision Assistant (feat. LLaVA)
ToolQA: a new dataset for evaluating the capabilities of LLMs in answering challenging questions with external tools. It offers two difficulty levels (easy/hard) across eight real-life scenarios.
Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning"
Minimal standalone example of diffusion model
An official implementation of "UnitSpeech: Speaker-adaptive Speech Synthesis with Untranscribed Data"
PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral)
Confidence interval computation for evaluation in machine learning using the bootstrapping approach
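The percentile-bootstrap approach named here can be sketched with the standard library alone: resample the per-example scores with replacement, recompute the statistic each time, and read the interval off the empirical quantiles. This is a generic illustration of the technique under those assumptions, not the repository's actual interface.

```python
import random
import statistics

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of per-example scores."""
    rng = random.Random(seed)
    n = len(scores)
    # Resample with replacement and record the statistic of interest each time.
    means = sorted(
        statistics.fmean(rng.choices(scores, k=n)) for _ in range(n_resamples)
    )
    # Read the (alpha/2, 1 - alpha/2) quantiles off the sorted resampled means.
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

For evaluation metrics such as accuracy, `scores` would be the per-example 0/1 correctness indicators, so the interval quantifies how much the aggregate metric could move under resampling of the test set.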
Apps built using Inspired Cognition's Critique.