TabbyAPI

Python 3.10, 3.11, and 3.12 · License: AGPL v3 · Discord Server

Support on Ko-Fi

Important

In addition to the README, please read the Wiki page for information about getting started!

Note

Need help? Join the Discord Server and get the Tabby role. Please be nice when asking questions.

A FastAPI-based application for generating text with a large language model (LLM) using the ExllamaV2 backend.

Disclaimer

This project is a rolling release: expect bugs and breaking changes down the line. You may need to reinstall dependencies after updating.

TabbyAPI is a hobby project intended for a small number of users. It is not meant to run on production servers; for that, please look at other backends that support such workloads.

Getting Started

Important

This README is not for getting started. Please read the Wiki.

The Wiki contains user-facing documentation for installation, configuration, sampling, API usage, and much more.
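As a quick illustration of API usage, TabbyAPI exposes an OpenAI-compatible HTTP API. The sketch below builds the JSON body for a text-completion request; the endpoint path and field names are assumptions based on the OpenAI completions format, so consult the Wiki for the authoritative reference.

```python
import json

def build_completion_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build a JSON body suitable for POSTing to an OpenAI-compatible
    completions endpoint (e.g. /v1/completions -- path is an assumption)."""
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

# Serialize the body; send it with any HTTP client once the server is running.
body = build_completion_request("Once upon a time")
print(json.dumps(body))
```

Any HTTP client (curl, requests, or the official OpenAI client pointed at your local server) can then send this payload.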

Supported Model Types

TabbyAPI uses ExllamaV2 as a powerful and fast backend for model inference and loading. The following model types are supported:

  • Exl2 (Highly recommended)

  • GPTQ

  • FP16 (using Exllamav2's loader)

In addition, TabbyAPI supports parallel batching using paged attention for Nvidia Ampere GPUs and higher.

Alternative Loaders/Backends

If you want to use a different model type or quantization method than the ones listed above, here are some alternative backends with their own APIs:

Contributing

Use the provided templates when creating issues or pull requests; otherwise, the developers may not look at your post.

If you have issues with the project:

  • Describe the issue in detail

  • If you have a feature request, please indicate it as such.

If you have a pull request:

  • Describe the pull request in detail: what you are changing and why

Developers and Permissions

Creators/Developers: