* improve validation
* remove to_gen_params functions
* update changes for all endpoint types
* OAI: Fix calls to generation
Chat completion and completion need to have prompt split out before
pushing to the backend.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Convert Top-K values of -1 to 0
Some OAI implementations use -1 as disabled instead of 0. Therefore,
add a coalesce case.
Signed-off-by: kingbri <bdashore3@proton.me>
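A minimal sketch of the coalesce case, assuming a Pydantic v2 sampler model (the class and field names here are illustrative, not TabbyAPI's actual schema):

```python
from pydantic import BaseModel, field_validator


class SamplerParams(BaseModel):
    # Hypothetical sampler model; TabbyAPI's real class differs
    top_k: int = 0

    @field_validator("top_k")
    @classmethod
    def coalesce_top_k(cls, value: int) -> int:
        # Some OAI clients send -1 to mean "disabled"; treat it as 0
        return 0 if value == -1 else value
```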
* Sampling: Format and space out
Make the code more readable.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Fix mirostat
Field items are nested in data within a Pydantic FieldInfo.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Format
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Fix banned_tokens and allowed_tokens conversion
If the provided string has whitespace, trim it before splitting.
Signed-off-by: kingbri <bdashore3@proton.me>
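The trim-before-split conversion can be sketched with a plain helper (the function name is hypothetical):

```python
def parse_token_string(value):
    """Hypothetical helper: convert a comma-separated token ID string
    into a list of ints, trimming whitespace before splitting."""
    if isinstance(value, str):
        return [int(token) for token in value.strip().split(",") if token.strip()]
    return value
```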
* Sampling: Add helpful log to dry_sequence_breakers
Let the user know if the sequence errors out.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Apply validators in right order
Validators are applied in order from top to bottom, which is why
the after validator was not being applied properly.
Set the model to validate default params for sampler override purposes.
This can be turned off if there are unclear errors.
Signed-off-by: kingbri <bdashore3@proton.me>
* Endpoints: Format
Clean up and semantically fix field validators.
Signed-off-by: kingbri <bdashore3@proton.me>
* Kobold: Update validators and fix parameter application
Validators on parent fields cannot see child fields. Therefore,
validate using the child fields instead and alter the parent field
data from there.
Also fix badwordsids casting.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Remove validate defaults and fix mirostat
If a user sets an override to a non-default value, that's their
own fault.
Run validator on the actual mirostat_mode parameter rather than
the alternate mirostat parameter.
Signed-off-by: kingbri <bdashore3@proton.me>
* Kobold: Rework badwordsids
Currently, this serves to ban the EOS token. All other functionality
was legacy, so remove it.
Signed-off-by: kingbri <bdashore3@proton.me>
* Model: Remove HuggingfaceConfig
This was only necessary for badwordsids. All other fields are handled
by exl2. Keep the class as a stub if it's needed again.
Signed-off-by: kingbri <bdashore3@proton.me>
* Kobold: Bump kcpp impersonation
TabbyAPI supports XTC now.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Change alias to validation_alias
Reduces the probability of errors and makes the class consistent.
Signed-off-by: kingbri <bdashore3@proton.me>
* OAI: Use constraints for validation
Instead of adding a model_validator, use greater than or equal to
constraints provided by Pydantic.
Signed-off-by: kingbri <bdashore3@proton.me>
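The constraint approach can be sketched with Pydantic's built-in `ge` bound instead of a model_validator (field names here are illustrative):

```python
from pydantic import BaseModel, Field, ValidationError


class CompletionRequest(BaseModel):
    # Hypothetical fields; names are illustrative, not the exact schema
    max_tokens: int = Field(default=150, ge=0)
    n: int = Field(default=1, ge=1)


# A value below the constraint is rejected without any model_validator
try:
    CompletionRequest(n=0)
    n_rejected = False
except ValidationError:
    n_rejected = True
```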
* Tree: Lint
Signed-off-by: kingbri <bdashore3@proton.me>
---------
Co-authored-by: SecretiveShell <84923604+SecretiveShell@users.noreply.github.com>
Co-authored-by: kingbri <bdashore3@proton.me>
* Model: Fix inline loading and draft key
There was a lack of foresight between the new config.yml and how
it was structured. The "draft" key became "draft_model" without updating
both the API request and inline loading keys.
For the API requests, still support "draft" as legacy, but the "draft_model"
key is preferred.
Signed-off-by: kingbri <bdashore3@proton.me>
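Supporting the legacy "draft" key while preferring "draft_model" can be sketched with Pydantic's AliasChoices (the request class here is hypothetical):

```python
from typing import Optional

from pydantic import AliasChoices, BaseModel, Field


class DraftModelLoadRequest(BaseModel):
    # Hypothetical request model: accept the legacy "draft" key while
    # preferring "draft_model"
    draft_model: Optional[dict] = Field(
        default=None,
        validation_alias=AliasChoices("draft_model", "draft"),
    )
```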
* OAI: Add draft model dir to inline load
This was not pushed before, which caused errors where the kwargs were None.
Signed-off-by: kingbri <bdashore3@proton.me>
* Model: Fix draft args application
Draft model args weren't applying since there was a reset due to how
the old override behavior worked.
Signed-off-by: kingbri <bdashore3@proton.me>
* OAI: Change embedding model load params
Use embedding_model_name to be in line with the config.
Signed-off-by: kingbri <bdashore3@proton.me>
* API: Fix parameter for draft model load
Alias name to draft_model_name.
Signed-off-by: kingbri <bdashore3@proton.me>
* API: Fix parameter for template switch
Add prompt_template_name to be more descriptive.
Signed-off-by: kingbri <bdashore3@proton.me>
* API: Fix parameter for model load
Alias name to model_name for config parity.
Signed-off-by: kingbri <bdashore3@proton.me>
* API: Add alias documentation
Signed-off-by: kingbri <bdashore3@proton.me>
---------
Signed-off-by: kingbri <bdashore3@proton.me>
Make it so any message role can be parsed from a list. It's not
really clear why this is necessary, since system and assistant
shouldn't be sending data other than text, but it also doesn't make
much sense to be extremely strict with roles either.
Signed-off-by: kingbri <bdashore3@proton.me>
When the request is cancelled, cancel the load task. In addition,
when checking if a model container exists, also check if the model
is fully loaded.
Signed-off-by: kingbri <bdashore3@proton.me>
If a user requesting a model change isn't admin, error.
It's better to place the load function before the generate functions.
Signed-off-by: kingbri <bdashore3@proton.me>
Metadata is generated via a template's module. This requires a single
iteration through the template. If a template tries to access a passed
variable that doesn't exist, it will error.
Therefore, generate the metadata at runtime to prevent these errors
from happening. To optimize further, cache the metadata after the
first generation to prevent the expensive call of making a template
module.
Signed-off-by: kingbri <bdashore3@proton.me>
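The runtime-plus-cache approach can be sketched with Jinja2's `make_module` (the wrapper class and `stop_strings` variable are illustrative assumptions):

```python
from jinja2 import Environment


class PromptTemplate:
    """Hypothetical wrapper: render template metadata lazily and cache
    it, since creating a template module is an expensive call."""

    def __init__(self, source: str):
        self.template = Environment().from_string(source)
        self._metadata = None

    def metadata(self, **context):
        if self._metadata is None:
            # Only built on the first request, then reused
            module = self.template.make_module(vars=context)
            self._metadata = {
                "stop_strings": getattr(module, "stop_strings", []),
            }
        return self._metadata


template = PromptTemplate("{% set stop_strings = ['<|im_end|>'] %}")
metadata_first = template.metadata()
```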
* returning stop str if exists from gen
* added chat template for firefunctionv2
* pulling tool vars from template
* adding parsing for tool inputs/outputs
* passing tool data from endpoint to chat template, adding tool_start to the stop list
* loosened typing on the response tool call, leaning more on the user supplying a quality schema if they want a particular format
* non streaming generation prototype
* cleaning template
* Continued work with type, ingestion into template, and chat template for fire func
* Correction - streaming toolcall comes back as delta obj not inside chatcomprespchoice per chat_completion_chunk.py inside OAI lib.
* Ruff Formatting
* Moved stop string and tool updates out of prompt creation func
Updated tool pydantic to match OAI
Support for streaming
Updated generate tool calls to use flag within chat_template and insert tool reminder
* Llama 3.1 chat templates
Updated fire func template
* renamed llama3.1 to chatml_with_headers
* update name of template
* Support for calling a tool start token rather than the string.
Simplified tool_params
Warning when gen_settings are being overridden because user set temp to 0
* Corrected schema and tools to the correct types for function args (they were str for some reason)
* draft groq tool use model template
* changed headers to vars for readability (but mostly because some models are weird about newlines after headers, so this is an easier way to change globally)
* Clean up comments and code in chat comp
* Post processed tool call to meet OAI spec rather than forcing model to write json in a string in the middle of the call.
* changed example back to args as JSON rather than a string of JSON
* Standardize chat templates to each other
* cleaning/rewording
* stop elements can also be ints (tokens)
* Cleaning/formatting
* added special tokens for tools and tool_response as specified in description
* Cleaning
* removing aux templates - going to live in llm-promp-templates repo instead
* Tree: Format
Signed-off-by: kingbri <bdashore3@proton.me>
* Chat Completions: Don't include internal tool variables in OpenAPI
Use SkipJsonSchema to suppress inclusion in the OpenAPI JSON. The
location of these variables may need to be changed in the future.
Signed-off-by: kingbri <bdashore3@proton.me>
* Templates: Deserialize metadata on template load
Since we're only looking for specific template variables that are
static in the template, it makes more sense to render when the template
is initialized.
Signed-off-by: kingbri <bdashore3@proton.me>
* Tools: Fix comments
Adhere to the format style of comments in the rest of the project.
Signed-off-by: kingbri <bdashore3@proton.me>
---------
Co-authored-by: Ben Gitter <gitterbd@gmail.com>
Signed-off-by: kingbri <bdashore3@proton.me>
Use Infinity as a separate backend and handle the model within the
common module. This separates out the embeddings model from the endpoint
which allows for model loading/unloading in core.
Signed-off-by: kingbri <bdashore3@proton.me>
Infinity-emb is an async batching engine for embeddings. This is
preferable to sentence-transformers since it handles scalable use cases
without the need for external thread intervention.
Signed-off-by: kingbri <bdashore3@proton.me>
Place OAI specific routes in the appropriate folder. This is in
preparation for adding new API servers that can be optionally enabled.
Signed-off-by: kingbri <bdashore3@proton.me>
Uvicorn can log a disconnect in both the request disconnect handler
and the CancelledError. However, these sometimes don't fire, so both
need to be checked. But don't log twice if one works.
Signed-off-by: kingbri <bdashore3@proton.me>
Identify which request is being processed to help users disambiguate
which logs correspond to which request.
Signed-off-by: kingbri <bdashore3@proton.me>
Place the logic into their proper utility functions and cleanup
the code with formatting.
Also, OAI's docs specify that a [DONE] return is needed when everything
is finished.
Signed-off-by: kingbri <bdashore3@proton.me>
API keys are not allowed to view all the admin's models, templates,
draft models, loras, etc. Basically anything that can be viewed
on the filesystem outside of anything that's currently loaded is
not allowed to be returned unless an admin key is present.
This change helps preserve user privacy while not erroring out on
list endpoints that the OAI spec requires.
Signed-off-by: kingbri <bdashore3@proton.me>
Use a queue-based system to get choices independently and send them
in the overall streaming payload. This method allows for unordered
streaming of generations.
The system is a bit redundant, so the code could be optimized
in the future.
Signed-off-by: kingbri <bdashore3@proton.me>
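The queue-based system can be sketched with asyncio: each choice's generator pumps into one shared queue, and chunks are yielded in whatever order they complete (function names are hypothetical):

```python
import asyncio


async def stream_choices(generators):
    """Hypothetical sketch: pump every choice's generator into one
    queue so chunks stream out in completion order, not choice order."""
    queue = asyncio.Queue()
    done = object()  # sentinel marking one finished generator

    async def pump(index, gen):
        async for chunk in gen:
            await queue.put((index, chunk))
        await queue.put((index, done))

    tasks = [asyncio.create_task(pump(i, g)) for i, g in enumerate(generators)]
    finished = 0
    while finished < len(tasks):
        index, chunk = await queue.get()
        if chunk is done:
            finished += 1
        else:
            yield index, chunk


async def _chunks(items):
    for item in items:
        yield item


async def _collect():
    gens = [_chunks(["a", "b"]), _chunks(["c"])]
    return [pair async for pair in stream_choices(gens)]


streamed = asyncio.run(_collect())
```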
For multiple generations in the same request, nested arrays kept their
original reference, resulting in duplications. This will occur with
any collection type.
For optimization purposes, a deepcopy isn't run for the first iteration
since original references are created.
This is not the most elegant solution, but it works for the described
cases.
Signed-off-by: kingbri <bdashore3@proton.me>
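The first-iteration optimization can be sketched as follows (the helper name and settings shape are illustrative):

```python
from copy import deepcopy


def expand_settings(base, count):
    """Hypothetical sketch: reuse the original settings dict for the
    first generation and deepcopy for the rest, so nested collections
    don't keep shared references across generations."""
    return [base if index == 0 else deepcopy(base) for index in range(count)]


base_settings = {"banned_tokens": [128001]}
expanded = expand_settings(base_settings, 3)
expanded[1]["banned_tokens"].append(0)  # must not leak into the others
```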
This adds the ability to add multiple choices to a generation. This
is only available for non-streaming gens for now, it requires some
more work to port over to streaming.
Signed-off-by: kingbri <bdashore3@proton.me>
Waiting for request disconnect takes some extra time and allows
generation chunks to pile up, resulting in large payloads being sent
at once rather than a smooth stream.
Use the polling method in non-streaming requests by creating a background
task and then check if the task is done, signifying that the request
has been disconnected.
Signed-off-by: kingbri <bdashore3@proton.me>
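The background-task polling can be sketched like this, assuming a Starlette-style `request.is_disconnected()`; the fake request class is only for demonstration:

```python
import asyncio


async def _wait_for_disconnect(request):
    # Stand-in for a polling loop over Starlette's request.is_disconnected()
    while not await request.is_disconnected():
        await asyncio.sleep(0.5)


async def generate_with_abort(request, chunks):
    """Hypothetical sketch: run the disconnect wait as a background task
    and check task.done() between chunks instead of awaiting it inline."""
    disconnect_task = asyncio.create_task(_wait_for_disconnect(request))
    results = []
    try:
        for chunk in chunks:
            if disconnect_task.done():
                break  # client went away, abort generation
            results.append(chunk)
            await asyncio.sleep(0)
    finally:
        disconnect_task.cancel()
    return results


class _ConnectedRequest:
    async def is_disconnected(self):
        return False


collected = asyncio.run(
    generate_with_abort(_ConnectedRequest(), ["Hello", " world"])
)
```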
Depending on the day of the week, Starlette can work with a CancelledError
or using await request.is_disconnected(). Run the same behavior for both
cases and allow cancellation.
Streaming requests now set an event to cancel the batched job and break
out of the generation loop.
Signed-off-by: kingbri <bdashore3@proton.me>
Add a sequential lock and wait until jobs are completed before executing
any loading requests that directly alter the model. However, we also
need to block any new requests that come in until the load is finished,
so add a condition that triggers once the lock is free.
Signed-off-by: kingbri <bdashore3@proton.me>
The new async dynamic job allows for native async support without the
need of threading. Also add logprobs and metrics back to responses.
Signed-off-by: kingbri <bdashore3@proton.me>
response_prefix is used to add a prefix before generating the next
message. This is used in many cases such as continuing a prompt
(see #96).
Also if a template has BOS token specified, add_bos_token will
append two BOS tokens. Add a check which strips a starting BOS token
from the prompt if it exists.
Signed-off-by: kingbri <bdashore3@proton.me>
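The BOS-stripping check can be sketched as a small helper (the function name is hypothetical):

```python
def apply_bos(prompt, bos_token, add_bos_token):
    """Hypothetical sketch: if the rendered template already starts with
    the BOS token, strip it so add_bos_token doesn't append two."""
    if add_bos_token and bos_token and prompt.startswith(bos_token):
        prompt = prompt[len(bos_token):]
    return prompt
```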
Having many utility functions for initialization doesn't make much sense.
Instead, handle anything regarding template creation inside the
class, which reduces the number of function imports.
Signed-off-by: kingbri <bdashore3@proton.me>
A chat completion can now declare extra template_vars to pass when
a template is rendered, opening up the possibility of using state
outside of huggingface's parameters.
Signed-off-by: kingbri <bdashore3@proton.me>
Template modules grab all set vars, including ones that use runtime
vars. If a template var is set to a runtime var and a module is created,
an UndefinedError fires.
Use make_module instead to pass runtime vars when creating a template
module.
Resolves #92
Signed-off-by: kingbri <bdashore3@proton.me>
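A minimal reproduction of the difference, assuming Jinja2 with StrictUndefined (the `response_prefix` variable is illustrative):

```python
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError

env = Environment(undefined=StrictUndefined)
template = env.from_string("{% set prefix = response_prefix ~ ' ' %}")

# Accessing template.module renders with no context, so the runtime
# var blows up with an UndefinedError
try:
    template.module
    module_failed = False
except UndefinedError:
    module_failed = True

# make_module accepts runtime vars when creating the module
module = template.make_module(vars={"response_prefix": "Assistant:"})
```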
It's best to move the inner workings into their own inner function. Also fix
an edge case where stop strings can be a string rather than an array.
Signed-off-by: kingbri <bdashore3@proton.me>
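The stop-string coercion can be sketched as a small normalizer (the function name is hypothetical; ints are included since stop elements can also be token IDs):

```python
def coerce_stop(stop):
    """Hypothetical sketch: the stop parameter may be a bare string, a
    single int token, a list, or absent; normalize it to a list."""
    if stop is None:
        return []
    if isinstance(stop, (str, int)):
        return [stop]
    return list(stop)
```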
When the model is processing a prompt, add the ability to abort
on request cancellation. This is also a catch for a SIGINT.
Signed-off-by: kingbri <bdashore3@proton.me>
OAI expects finish_reason to be "stop" or "length" (there are others,
but they're not in the current scope of this project).
Make all completions and chat completions responses return this
from the model generation itself rather than putting a placeholder.
Signed-off-by: kingbri <bdashore3@proton.me>
Run these iterators on the background thread. On startup, the API
spawns a background thread as needed to run sync code on without blocking
the event loop.
Use asyncio's to_thread function since it allows for errors to be
propagated.
Signed-off-by: kingbri <bdashore3@proton.me>
Async generation helps remove many roadblocks to managing tasks
using threads. It should allow for abortables and modern-day paradigms.
NOTE: Exllamav2 itself is not an asynchronous library. It's just
been added into tabby's async nature to allow for a fast and concurrent
API server. It's still being debated whether to run stream_ex in a
separate thread or to manually manage it using asyncio.sleep(0).
Signed-off-by: kingbri <bdashore3@proton.me>