xturing

Efficient, fast, and simple fine-tuning of LLM models


xturing is a Python package for efficient fine-tuning of LLMs such as LLaMA, GPT-J, GPT-2, and more. It supports both single-GPU and multi-GPU training. Leverage efficient fine-tuning techniques like LoRA to reduce your hardware costs by up to 90% and train your models in a fraction of the time.


⚙️ Installation

pip install xturing

🚀 Quickstart

from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Load the dataset
instruction_dataset = InstructionDataset("./alpaca_data")

# Initialize the model
model = BaseModel.create("llama_lora")

# Finetune the model
model.finetune(dataset=instruction_dataset)

# Perform inference
output = model.generate(texts=["Why are LLMs becoming so important?"])

print("Generated output by the model: {}".format(output))

You can find the data folder here.
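
If you would rather build the dataset in memory than point at a folder, a minimal sketch is below. It assumes InstructionDataset accepts a dict of columns and that the column names are instruction, text, and target, as the project's documentation describes; verify both against your installed version.

from xturing.datasets import InstructionDataset

# Hedged sketch: a tiny in-memory instruction dataset.
# The dict constructor and the column names ("instruction", "text",
# "target") are assumptions from the project's docs, not guarantees.
dataset = InstructionDataset({
    "instruction": ["Summarize the text."],
    "text": ["LoRA adds small trainable matrices to a frozen base model."],
    "target": ["LoRA fine-tunes by training small added matrices."],
})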

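After fine-tuning, you will usually want to persist the weights and reload them later. The sketch below assumes the save/load API the project documents (model.save and BaseModel.load); the directory name is only a placeholder.

# Hedged sketch: save the fine-tuned model and reload it for inference.
# model.save and BaseModel.load are assumed from the project's docs;
# "./llama_lora_finetuned" is a placeholder path, not a required name.
model.save("./llama_lora_finetuned")

model = BaseModel.load("./llama_lora_finetuned")
output = model.generate(texts=["Why are LLMs becoming so important?"])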

📚 Tutorials


📈 Roadmap

  • Support for LLaMA, GPT-J, GPT-2 (model keys are sketched after this list)
  • Support for Stable Diffusion
  • Dataset generation using self-instruction
  • Evaluation of LLM models
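
Each supported model is selected by the key passed to BaseModel.create. The *_lora keys below follow the naming the project uses for its LoRA variants; treat them as assumptions and check the model registry of your installed version.

# Hedged sketch: swapping the base model is a one-line change.
# The keys "gpt2_lora" and "gptj_lora" follow the project's naming
# convention for LoRA variants; verify they exist in your version.
model = BaseModel.create("gpt2_lora")  # GPT-2 with LoRA adapters
model = BaseModel.create("gptj_lora")  # GPT-J with LoRA adapters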

🤝 Help and Support

If you have any questions, you can create an issue on this repository.

You can also join our Discord server and start a discussion in the #xturing channel.


📝 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


🌎 Contributing

As an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features and better documentation. Please read our contributing guide to learn how you can get involved.
