
# Cortex.cpp


Documentation - API Reference - Changelog - Bug reports - Discord

⚠️ Cortex.cpp is currently in Development. This documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.

## Overview

Cortex.cpp is a local AI engine for running and customizing LLMs. It can be deployed as a standalone server or integrated into apps such as Jan.ai.

Cortex.cpp is multi-engine: it uses llama.cpp as the default engine and also supports ONNXRuntime and TensorRT.

To install Cortex.cpp, download the installer for your operating system:

| Version Type | Windows | MacOS | Linux |
| --- | --- | --- | --- |
| Stable (Recommended) | Download | Intel, M1/M2/M3/M4 | Debian, Fedora |

Note: You can also build Cortex.cpp from source by following the steps here.

## Libraries

## Quickstart

### CLI

```sh
# 1. Start the Cortex.cpp server (the server runs at localhost:3928)
cortex

# 2. Start a model
cortex run <model_id>:[engine_name]

# 3. Stop a model
cortex stop <model_id>:[engine_name]

# 4. Stop the Cortex.cpp server
cortex stop
```
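The CLI steps above can also be driven from a script. A minimal sketch, assuming the `cortex` binary is on your PATH; `build_run_cmd` is a hypothetical helper, not part of Cortex.cpp:

```python
import subprocess

def build_run_cmd(model_id: str, engine: str = "") -> list[str]:
    """Build the argument list for `cortex run <model_id>[:engine_name]`."""
    target = f"{model_id}:{engine}" if engine else model_id
    return ["cortex", "run", target]

if __name__ == "__main__":
    # With the cortex CLI installed, this would launch the model:
    # subprocess.run(build_run_cmd("mistral", "llamacpp"), check=True)
    print(build_run_cmd("mistral", "llamacpp"))
```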

### API

1. Start the API server using the `cortex` command.
2. Pull a model:

```sh
curl --request POST \
  --url http://localhost:3928/v1/models/{model_id}/pull
```
3. Start a model:

```sh
curl --request POST \
  --url http://localhost:3928/v1/models/{model_id}/start \
  --header 'Content-Type: application/json' \
  --data '{
  "prompt_template": "system\n{system_message}\nuser\n{prompt}\nassistant",
  "stop": [],
  "ngl": 4096,
  "ctx_len": 4096,
  "cpu_threads": 10,
  "n_batch": 2048,
  "caching_enabled": true,
  "grp_attn_n": 1,
  "grp_attn_w": 512,
  "mlock": false,
  "flash_attn": true,
  "cache_type": "f16",
  "use_mmap": true,
  "engine": "llamacpp"
}'
```
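The same start request can be built programmatically with the Python standard library. A sketch, assuming the server is running at `localhost:3928` as shown above; `start_model_request` is a hypothetical helper name:

```python
import json
import urllib.request

BASE = "http://localhost:3928/v1"

def start_model_request(model_id: str, **options) -> urllib.request.Request:
    """Build the POST /v1/models/{model_id}/start request shown above."""
    body = {"engine": "llamacpp", "ctx_len": 4096, **options}
    return urllib.request.Request(
        f"{BASE}/models/{model_id}/start",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request requires a running server:
# urllib.request.urlopen(start_model_request("mistral", cpu_threads=10))
```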
4. Chat with a model:

```sh
curl http://localhost:3928/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
  "model": "mistral",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "stream": true,
  "max_tokens": 1,
  "stop": [],
  "frequency_penalty": 1,
  "presence_penalty": 1,
  "temperature": 1,
  "top_p": 1
}'
```
5. Stop a model:

```sh
curl --request POST \
  --url http://localhost:3928/v1/models/mistral/stop
```

6. Stop the Cortex.cpp server using the `cortex stop` command.

Note: Our API server is fully compatible with the OpenAI API, making it easy to integrate with any system or tool that supports OpenAI-compatible APIs.
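Because the endpoints follow the OpenAI API shape, a chat request can be assembled with only the standard library. A sketch, assuming the server address from the examples above; `chat_request` is a hypothetical helper name:

```python
import json
import urllib.request

def chat_request(model: str, user_message: str, stream: bool = False) -> urllib.request.Request:
    """Build an OpenAI-style /v1/chat/completions request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    }
    return urllib.request.Request(
        "http://localhost:3928/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running, OpenAI SDKs should also work by overriding the
# client's base URL to http://localhost:3928/v1 (API key can be a dummy value).
```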

## Built-in Model Library

Cortex.cpp supports various models available on the Cortex Hub. Once downloaded, model source files are stored at `C:\Users\<username>\AppData\Local\cortexcpp\models` (on Windows).

Here are examples of models you can run with each supported engine:

Engine suffixes: `:gguf` (llama.cpp), `:tensorrt` (TensorRT), `:onnx` (ONNXRuntime).

| Model | Command |
| --- | --- |
| llama3.1 | `cortex run llama3.1:gguf` |
| llama3 | `cortex run llama3` |
| mistral | `cortex run mistral` |
| qwen2 | `cortex run qwen2:7b-gguf` |
| codestral | `cortex run codestral:22b-gguf` |
| command-r | `cortex run command-r:35b-gguf` |
| gemma | `cortex run gemma` |
| mixtral | `cortex run mixtral:7x8b-gguf` |
| openhermes-2.5 | `cortex run openhermes-2.5` |
| phi3 (medium) | `cortex run phi3:medium` |
| phi3 (mini) | `cortex run phi3:mini` |
| tinyllama | `cortex run tinyllama:1b-gguf` |

Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.

## Cortex.cpp CLI Commands

For complete details on CLI commands, please refer to our CLI documentation.

## REST API

Cortex.cpp includes a REST API accessible at `localhost:3928`. For a complete list of endpoints and their usage, visit our API documentation.

## Uninstallation

### Windows

  1. Navigate to Add or Remove Programs.
  2. Search for Cortex.cpp.
  3. Click Uninstall.
  4. Delete the Cortex.cpp data folder located in your home folder.

### macOS

Run the uninstaller script:

```sh
sudo sh cortex-uninstall.sh
```

Note: The script requires sudo permission.

### Linux

```sh
sudo apt remove cortexcpp
```

## Alternate Installation

We also provide Beta and Nightly versions.

| Version Type | Windows | MacOS | Linux |
| --- | --- | --- | --- |
| Beta Build | cortexcpp.exe | Intel, M1/M2/M3/M4 | cortexcpp.deb, cortexcpp.AppImage |
| Nightly Build | cortexcpp.exe | Intel, M1/M2/M3/M4 | cortexcpp.deb, cortexcpp.AppImage |

## Build from Source

### Windows

1. Clone the Cortex.cpp repository here.
2. Navigate to the `engine` > `vcpkg` folder.
3. Configure vcpkg:

```sh
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```

4. Build Cortex.cpp inside the `build` folder:

```sh
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
```

5. Use Visual Studio with the C++ development workload to build the project using the files generated in the `build` folder.
6. Verify that Cortex.cpp is installed correctly:

```sh
# Get the help information
cortex -h
```

### macOS

1. Clone the Cortex.cpp repository here.
2. Navigate to the `engine` > `vcpkg` folder.
3. Configure vcpkg:

```sh
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

4. Build Cortex.cpp inside the `build` folder:

```sh
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

5. Verify that Cortex.cpp is installed correctly:

```sh
# Get the help information
cortex -h
```

### Linux

1. Clone the Cortex.cpp repository here.
2. Navigate to the `engine` > `vcpkg` folder.
3. Configure vcpkg:

```sh
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

4. Build Cortex.cpp inside the `build` folder:

```sh
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

5. Verify that Cortex.cpp is installed correctly:

```sh
# Get the help information
cortex -h
```

## Contact Support