UV is now supported first-party in tabbyAPI's start script, so add a
dedicated section for it and recommend it over miniconda.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
Change the sampling subsection to sampler overrides and add a warning
about the default preset.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
Works on CUDA 12.4 and up. If CUDA isn't present, don't enable the
backend. The toggle is an environment variable that needs to be set, so
it's not really possible to set it via config.yml.
This used to be experimental, but it's probably fine to keep it enabled
since it only provides a benefit.
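For illustration, a minimal sketch of the gating logic described above,
assuming the backend is toggled by an environment variable; the variable
name and helper below are hypothetical, not tabbyAPI's actual identifiers:

```python
# Hedged sketch: enable the backend only when CUDA 12.4+ is present.
# HYPOTHETICAL_BACKEND_FLAG and maybe_enable_backend are illustrative
# names, not tabbyAPI's real identifiers.
import os

import torch


def maybe_enable_backend() -> None:
    # No usable CUDA runtime: leave the backend disabled
    if not torch.cuda.is_available() or torch.version.cuda is None:
        return

    major, minor = (int(part) for part in torch.version.cuda.split(".")[:2])
    if (major, minor) >= (12, 4):
        # The env var has to be set before the backend library is
        # imported, which is why this can't live in config.yml
        os.environ.setdefault("HYPOTHETICAL_BACKEND_FLAG", "1")
```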
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
* Add non-JSON versions of `tools` and `functions` to `template_vars`
(see the sketch after this list). Increases compatibility with vLLM
templates that use a non-JSON tools object.
* Add list of tool template variables to the documentation
* Use Jinja templates to provide `tools_json` and `functions_json`
This should be functionally equivalent, but the JSON won't be produced
unless it's needed.
* Make message.tool_calls match the JSON from ToolCallProcessor
* Log something when generating tool calls
* Add template for Qwen QwQ 32b
* Only log if tool calls have been detected
* API: Fix tool call variable assignments
Jinja does not run a function when its variable is merely referenced, so
use json.dumps instead. In addition, log the request ID when stating
that a tool call was fired.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
* Add `ToolCallProcessor.dump()` to get the list of processed dicts
* Remove qwen_qwq_32b.jinja
This will be added to the following repository at a later date:
https://github.com/theroyallab/llm-prompt-templates
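As a rough sketch of the final shape these bullets describe: the bodies
below are illustrative stand-ins rather than tabbyAPI's actual
implementation; only `template_vars`, `tools_json`, `functions_json`, and
`ToolCallProcessor.dump()` come from the bullets themselves.

```python
# Illustrative sketch only; mirrors the bullets above, not the real
# tabbyAPI code.
import json


def build_template_vars(tools: list, functions: list) -> dict:
    """Hypothetical helper that assembles template_vars for a prompt."""
    return {
        # Raw objects, for vLLM-style templates that iterate over tools
        "tools": tools,
        "functions": functions,
        # JSON forms, built eagerly with json.dumps because Jinja does
        # not call a function when the variable is merely referenced
        "tools_json": json.dumps(tools),
        "functions_json": json.dumps(functions),
    }


class ToolCallProcessor:
    """Minimal stand-in for the real class, showing only dump()."""

    def __init__(self, calls: list):
        self._calls = calls

    def dump(self) -> list:
        # Plain dicts, so message.tool_calls can match this JSON exactly
        return [dict(call) for call in self._calls]
```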
---------
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
Co-authored-by: kingbri <8082010+kingbri1@users.noreply.github.com>
This shouldn't even be an exposed option since changing it always
breaks inference with the model. Let the model's config.json handle
it.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
In ExLlamaV2, if a model has YaRN support, linear RoPE options are
not applied. Users can set max_seq_len and exl2 will take care of
the rest.
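A hedged sketch of that behavior; the `uses_yarn` flag is a stand-in,
and `scale_pos_emb` / `scale_alpha_value` are assumed attribute names
for exllamav2's linear and NTK RoPE options:

```python
def apply_rope_options(config, rope_scale=None, rope_alpha=None):
    """Skip linear RoPE options when the model already uses YaRN."""
    # `uses_yarn` is an illustrative flag; exl2 derives the real answer
    # from the model's config.json
    if getattr(config, "uses_yarn", False):
        # YaRN models: only max_seq_len matters, exl2 handles scaling
        return

    # Assumed attribute names for the linear/NTK RoPE options
    if rope_scale is not None:
        config.scale_pos_emb = rope_scale
    if rope_alpha is not None:
        config.scale_alpha_value = rope_alpha
```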
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>