
TabbyAPI

Supports Python 3.10, 3.11, and 3.12 · License: AGPL v3 · Discord Server

Developer-facing API documentation

Support on Ko-Fi

Important

In addition to the README, please read the Wiki page for information about getting started!

Note

Need help? Join the Discord Server and get the Tabby role. Please be nice when asking questions.

Note

Want to run GGUF models? Take a look at YALS, TabbyAPI's sister project.

A FastAPI-based application for generating text with LLMs (large language models) using the ExllamaV2 backend.

TabbyAPI is also the official API backend server for ExllamaV2.

Disclaimer

This project is a rolling release. There may be bugs and breaking changes down the line, and you may need to reinstall dependencies after updating.

TabbyAPI is a hobby project made for a small number of users. It is not meant to run on production servers; for those workloads, please look at other solutions built to support them.

Getting Started

Important

Looking for more information? Check out the Wiki.

For a step-by-step guide, choose the format that works best for you:

📖 Read the Wiki Covers installation, configuration, API usage, and more.

🎥 Watch the Video Guide A hands-on walkthrough to get you up and running quickly.

Features

  • OpenAI compatible API
  • Loading/unloading models
  • HuggingFace model downloading
  • Embedding model support
  • JSON schema + Regex + EBNF support
  • AI Horde support
  • Speculative decoding via draft models
  • Multi-lora with independent scaling (e.g. a weight of 0.9)
  • Inbuilt proxy to override client request parameters/samplers
  • Flexible Jinja2 template engine for chat completions that conforms to HuggingFace
  • Concurrent inference with asyncio
  • Utilizes modern Python paradigms
  • Continuous batching engine using paged attention
  • Fast classifier-free guidance
  • OAI style tool/function calling

And much more. If something is missing here, PR it in!
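Because the API follows the OpenAI schema, any OpenAI-compatible client can talk to it. Below is a minimal sketch of building a chat completion request body for `POST /v1/chat/completions`. The host, port, and auth header are deployment-specific and omitted here; the field names follow the general OpenAI chat spec, and the helper function is a hypothetical illustration, not part of TabbyAPI itself.

```python
import json

def build_chat_request(messages, model=None, max_tokens=256, temperature=0.7):
    """Build an OpenAI-style chat completion payload (illustrative helper).

    TabbyAPI serves one loaded model at a time, so the "model" field is
    treated as optional here; parameter names follow the OpenAI chat schema.
    """
    payload = {
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    if model is not None:
        payload["model"] = model
    return payload

# Example request body for POST /v1/chat/completions
body = build_chat_request(
    [{"role": "user", "content": "Hello!"}],
    temperature=0.8,
)
print(json.dumps(body, indent=2))
```

From here the payload would be sent with any HTTP client (or the official `openai` Python package pointed at your server's base URL).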

Supported Model Types

TabbyAPI uses ExllamaV2 as a powerful and fast backend for model inference and loading. Therefore, the following types of models are supported:

  • Exl2 (Highly recommended)

  • GPTQ

  • FP16 (using Exllamav2's loader)

In addition, TabbyAPI supports parallel batching using paged attention on NVIDIA Ampere and newer GPUs.

Contributing

Use the template when creating issues or pull requests, otherwise the developers may not look at your post.

If you have issues with the project:

  • Describe the issue in detail

  • If you have a feature request, please indicate it as such.

If you have a pull request:

  • Describe the pull request in detail: what you are changing and why.

Acknowledgements

TabbyAPI would not exist without the work of other contributors and FOSS projects:

Developers and Permissions

Creators/Developers: