Commit graph

520 commits

Author SHA1 Message Date
kingbri
d03752e31b Issues: Fix template
Correct Discord invite link.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-06-23 21:52:01 -04:00
kingbri
45fae89af6 Update README
Signed-off-by: kingbri <bdashore3@proton.me>
2024-06-23 21:50:17 -04:00
kingbri
c5ea2abe24 Dependencies: Update ExllamaV2
v0.1.6

Signed-off-by: kingbri <bdashore3@proton.me>
2024-06-23 21:45:04 -04:00
kingbri
d85b526644 Dependencies: Pin numpy
v2.x breaks many upstream dependencies (torch). Pin until repos are
fixed.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-06-23 21:40:09 -04:00
DocShotgun
107436f601
Dependencies: Fix AMD triton (#139) 2024-06-18 15:19:27 +02:00
Brian Dashore
06ee610a97
Update README
Signed-off-by: kingbri <bdashore3@proton.me>
2024-06-17 03:56:47 +00:00
kingbri
c575105e41 ExllamaV2: Cleanup log placements
Move the large import errors into the check functions themselves.
This makes it easier to tell where errors are coming from.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-06-16 00:16:03 -04:00
Glenn Maynard
8da7644571
Fix exception unloading models. (#138)
self.generator is None if a model load fails or is cancelled.
2024-06-15 23:44:29 +02:00
DocShotgun
85387d97ad
Fix disabling flash attention in exl2 config (#136)
* Model: Fix disabling flash attention in exl2 config

* Model: Pass no_flash_attn to draft config

* Model: Force torch flash SDP off in compatibility mode
2024-06-12 20:00:46 +02:00
DocShotgun
156b74f3f0
Revision to paged attention checks (#133)
* Model: Clean up paged attention checks

* Model: Move cache_size checks after paged attn checks
Cache size is only relevant in paged mode

* Model: Fix no_flash_attention

* Model: Remove no_flash_attention
Ability to use flash attention is auto-detected, so this flag is unneeded. Uninstall flash attention to disable it on supported hardware.
2024-06-09 17:28:11 +02:00
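The auto-detection this commit relies on could be sketched as a simple import probe (the helper name is hypothetical; the real check in TabbyAPI may differ):

```python
import importlib.util

def flash_attn_available() -> bool:
    """Flash attention is considered enabled only when the package is installed.

    Uninstalling flash-attn is the supported way to disable it on
    otherwise-capable hardware, so no separate config flag is needed.
    """
    return importlib.util.find_spec("flash_attn") is not None
```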
DocShotgun
55d979b7a5
Update dependencies, support Python 3.12, update for exl2 0.1.5 (#134)
* Dependencies: Add wheels for Python 3.12

* Model: Switch fp8 cache to Q8 cache

* Model: Add ability to set draft model cache mode

* Dependencies: Bump exllamav2 to 0.1.5

* Model: Support Q6 cache

* Config: Add Q6 cache and draft_cache_mode to config sample
2024-06-09 17:27:39 +02:00
DocShotgun
dcd9428325
Model: Warn if cache size is too small for CFG (#132) 2024-06-05 19:40:14 +02:00
DocShotgun
e391d84e40
More extensive checks for paged mode support (#121)
* Model: More extensive checks for paged attention
Previously, TabbyAPI only checked whether the user's hardware supports flash attention before deciding whether to enable paged mode.
This adds checks for whether no_flash_attention is set, whether flash-attn is installed, and whether the installed version supports paged attention.

* Tree: Format

* Tree: Lint

* Model: Check GPU architecture first
Check GPU arch prior to checking whether flash attention 2 is installed
2024-06-05 09:33:21 +02:00
turboderp
dbdcb38ad7
Allow either "[" or "{" prefix to support JSON grammar with top level arrays (#129) 2024-06-04 02:32:39 +02:00
turboderp
e889fa3efe
Bump exllamav2 to v0.1.4 (#128) 2024-06-04 02:32:08 +02:00
Orion
6cc3bd9752
feat: list support in message.content (#122) 2024-06-03 19:57:15 +02:00
turboderp
1951f7521c
Forward exceptions from _stream_collector to stream_generate_(chat)_completion (#126) 2024-06-03 19:42:45 +02:00
turboderp
0eb8fa5d1e
[fix] Bring draft progress and model progress in sync with model loader (#125)
* Bring draft progress and model progress in sync with model loader

* Fix formatting
2024-06-03 19:41:02 +02:00
turboderp
a011c17488 Revert "Forward exceptions from _stream_collector to stream_generate_chat_completion"
This reverts commit 1bb8d1a312.
2024-06-02 15:37:37 +02:00
turboderp
1bb8d1a312 Forward exceptions from _stream_collector to stream_generate_chat_completion 2024-06-02 15:13:30 +02:00
kingbri
e95e67a000 OAI: Add validation to "n"
n must be at least 1 to generate.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-28 00:52:30 -04:00
kingbri
e2a8b6e8ae OAI: Add "n" support for streaming generations
Use a queue-based system to get choices independently and send them
in the overall streaming payload. This method allows for unordered
streaming of generations.

The system is a bit redundant, so maybe make the code more optimized
in the future.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-28 00:52:30 -04:00
kingbri
c8371e0f50 OAI: Copy gen params for "n"
For multiple generations in the same request, nested arrays kept their
original reference, resulting in duplications. This will occur with
any collection type.

For optimization purposes, a deepcopy isn't run for the first iteration,
since the original references are safe to use there.

This is not the most elegant solution, but it works for the described
cases.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-28 00:52:30 -04:00
kingbri
b944f8d756 OAI: Add "n" for non-streaming generations
This adds support for returning multiple choices per generation. This
is only available for non-streaming gens for now; it requires some
more work to port over to streaming.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-28 00:52:30 -04:00
kingbri
8d31a5aed1 Dependencies: Update Flash Attention 2
v2.5.9.post1

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-28 00:45:35 -04:00
Brian Dashore
516b52b341
Merge pull request #112 from DocShotgun/main
Separate new prompt tokens from those reused from cache in metric logging
2024-05-27 18:04:43 -04:00
kingbri
19961f4126 Dependencies: Update ExllamaV2
v0.1.1

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-27 13:38:07 -04:00
kingbri
04cbed16e8 Update README
Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-27 13:37:57 -04:00
kingbri
4087586449 Start: Create config.yml if it doesn't exist
While TabbyAPI doesn't need a config.yml to run, new users can get
confused by the task of copying config_sample.yml to config.yml.
Therefore, automatically do this in the start script to immediately
expose options to the user.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-26 21:37:52 -04:00
DocShotgun
7084081b1f Tree: Lint 2024-05-26 18:27:30 -07:00
kingbri
116cf56c87 Model: Auto-round cache size on init
Cache size must be a multiple of 256 to work properly in ExllamaV2.
Take the config value and round it up to the next multiple of 256.

Rounding up rather than down matters because the cache size can never
be lower than max_seq_len. If max_seq_len isn't itself a multiple of
256, rounding down could yield a value below it.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-26 21:24:54 -04:00
DocShotgun
ce5e2ec8de Logging: Clarify new vs cached tokens in prompt processing 2024-05-26 18:21:17 -07:00
Brian Dashore
3dcae8b023
Merge pull request #111 from DocShotgun/main
Add support for specifying k/v cache size
2024-05-26 20:52:21 -04:00
kingbri
bec919e202 Config: Change cache_size description and location
Makes more sense to place cache_size with the other cache options.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-26 20:50:56 -04:00
DocShotgun
7ab7ffd562 Tree: Format 2024-05-26 15:48:18 -07:00
DocShotgun
767e6a798a API + Model: Add support for specifying k/v cache size 2024-05-26 14:17:01 -07:00
kingbri
d710a1b441 OAI: Switch to background task for disconnect checks
Waiting for request disconnect takes some extra time and allows
generation chunks to pile up, resulting in large payloads being sent
at once rather than a smooth stream.

Use the polling method in non-streaming requests by creating a background
task and then checking whether the task is done, which signifies that the
request has been disconnected.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-26 13:52:20 -04:00
kingbri
660f9b8432 OAI: Fix request cancellation behavior
Depending on the day of the week, Starlette signals a disconnect either
with a CancelledError or via await request.is_disconnected(). Run the
same behavior for both cases and allow cancellation.

Streaming requests now set an event to cancel the batched job and break
out of the generation loop.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-26 13:00:33 -04:00
kingbri
094c7b1734 Model: Fix paged and FA2 checks
If a user is using GPU split, check compute capability on only those
GPUs. Autosplit assumes that all GPUs will be used.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-26 11:29:31 -04:00
kingbri
9fbbc5afca Tree: Swap from map to list comprehensions
List comprehensions are the more "pythonic" way to map values to a
list. They're also more flexible across collection types than the
built-in map function. It's best to keep to one convention rather
than splitting across two.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
46d0d13914 Model/Grammar: Fix filter append call
No need to use extend when the array has length 1.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
a46ee62d03 Model: Clarify warning and device check on load
FA2 v2.5.7 and up is not supported below Ampere or on AMD GPUs.
Clarify the error message and explain what happens as a result.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
47582c2440 Dependencies: Update ExllamaV2
v0.1.0

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
43cd7f57e8 API + Model: Add blocks and checks for various load requests
Add a sequential lock and wait until jobs are completed before executing
any loading requests that directly alter the model. However, we also
need to block any new requests that come in until the load is finished,
so add a condition that triggers once the lock is free.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
408c66a1f2 Model: Change FA2 and paged attention checks
The dynamic generator requires Flash attention 2.5.7 or higher to
be installed. This is only supported on Nvidia's 30 series and higher.

If a card is AMD or older than the 30 series, switch to compatibility
mode, which functions the same way as the older generator, except
without parallel batching and any features that depend on it, such as
CFG.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
c2d3675408 Model: Add min_tokens support
In the form of min_new_tokens. Stopping strings take priority.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
5f0fb9c4ff Model: Add CFG support
The dynamic generator needs multiple prompts to be tokenized and sent
so they can be sampled serially but generated in parallel.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
06ff47e2b4 Model: Use true async jobs and add logprobs
The new async dynamic job allows for native async support without the
need for threading. Also add logprobs and metrics back to responses.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
32ae62feac Model: Add filter support to dynamic gen
Dynamic gen takes in filters differently. Adjust to set the filter list
per class rather than in the generation function.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00
kingbri
8ccd8fe5f8 Model: Initial dynamic generator support
Adds basic support for ExllamaV2's dynamic generator. Can generate
a streaming and non-streaming completion.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-05-25 21:16:14 -04:00