This parameter is way too confusing and does not make sense in
the modern LLM space.
Change approved by all maintainers.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
Apparently the "mirostat" parameter has been updated by frontends
to pass a number. ExllamaV2 expects a boolean, but most pass a number
anyway, so just alias mirostat_mode and mirostat together.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
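A minimal Pydantic sketch of the aliasing described above; the field name
matches the commit, but the coercion logic and defaults are assumptions of
this sketch, not the repo's exact code:

```python
from pydantic import AliasChoices, BaseModel, Field, field_validator


class SamplerParams(BaseModel):
    # Accept either key from the frontend; both map to mirostat_mode.
    mirostat_mode: int = Field(
        default=0,
        validation_alias=AliasChoices("mirostat_mode", "mirostat"),
    )

    @field_validator("mirostat_mode", mode="before")
    @classmethod
    def coerce_mirostat(cls, value):
        # Frontends may send a boolean or a number; normalize to an int.
        return int(value)


print(SamplerParams.model_validate({"mirostat": 2}).mirostat_mode)  # 2
```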
kwargs is pretty ugly when figuring out which arguments to use. The
base request falls back to defaults anyway, so pass in the params
object as-is.
However, since Python's typing isn't like TypeScript's, where types
can be transformed, the type hints have a possibility of None showing
up despite there always being a value for some params.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
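A hedged sketch of that tradeoff; the names and fields are illustrative,
not Tabby's real signatures:

```python
from typing import Optional

from pydantic import BaseModel


class GenerationParams(BaseModel):
    # Defaults live on the request model, so callers can pass it as-is.
    temperature: Optional[float] = 1.0
    top_p: Optional[float] = 1.0


def generate(prompt: str, params: GenerationParams) -> None:
    # Python's typing can't strip Optional after defaulting the way
    # TypeScript transforms types, so hints still admit None here even
    # though a value is always present at runtime.
    assert params.temperature is not None
```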
The base sampler request already specifies the defaults, so don't
unwrap in this way.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
Log the parameters passed into the generate_gen function rather than
the generation settings to reduce complexity.
Signed-off-by: kingbri <8082010+kingbri1@users.noreply.github.com>
* improve validation
* remove to_gen_params functions
* update changes for all endpoint types
* OAI: Fix calls to generation
Chat completion and completion need to have the prompt split out before
pushing to the backend.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Convert Top-K values of -1 to 0
Some OAI implementations use -1 as disabled instead of 0. Therefore,
add a coalesce case (see the sketch after this list).
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Format and space out
Make the code more readable.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Fix mirostat
Field items are nested in data within a Pydantic FieldInfo
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Format
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Fix banned_tokens and allowed_tokens conversion
If the provided string has whitespace, trim it before splitting.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Add helpful log to dry_sequence_breakers
Let the user know if the sequence errors out.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Apply validators in right order
Validators are applied in order from top to bottom, which is why
the after validator was not being applied properly.
Set the model to validate default params for sampler override purposes.
This can be turned off if there are unclear errors.
Signed-off-by: kingbri <bdashore3@proton.me>
* Endpoints: Format
Clean up and semantically fix field validators
Signed-off-by: kingbri <bdashore3@proton.me>
* Kobold: Update validators and fix parameter application
Validators on parent fields cannot see child fields. Therefore,
validate using the child fields instead and alter the parent field
data from there.
Also fix badwordsids casting.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Remove validate defaults and fix mirostat
If a user sets an override to a non-default value, that's their
own fault.
Run validator on the actual mirostat_mode parameter rather than
the alternate mirostat parameter.
Signed-off-by: kingbri <bdashore3@proton.me>
* Kobold: Rework badwordsids
Currently, this serves to ban the EOS token. All other functionality
was legacy, so remove it.
Signed-off-by: kingbri <bdashore3@proton.me>
* Model: Remove HuggingfaceConfig
This was only necessary for badwordsids. All other fields are handled
by exl2. Keep the class as a stub if it's needed again.
Signed-off-by: kingbri <bdashore3@proton.me>
* Kobold: Bump kcpp impersonation
TabbyAPI supports XTC now.
Signed-off-by: kingbri <bdashore3@proton.me>
* Sampling: Change alias to validation_alias
Reduces the probability of errors and makes the class consistent.
Signed-off-by: kingbri <bdashore3@proton.me>
* OAI: Use constraints for validation
Instead of adding a model_validator, use the greater-than-or-equal-to
constraints provided by Pydantic (see the sketch after this list).
Signed-off-by: kingbri <bdashore3@proton.me>
* Tree: Lint
Signed-off-by: kingbri <bdashore3@proton.me>
---------
Co-authored-by: SecretiveShell <84923604+SecretiveShell@users.noreply.github.com>
Co-authored-by: kingbri <bdashore3@proton.me>
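A sketch of the two validation approaches flagged in the list above (the
Top-K coalesce and the Pydantic constraints). Field names and defaults are
assumptions of this sketch, not the repo's exact code:

```python
from pydantic import BaseModel, Field, field_validator


class CompletionRequest(BaseModel):
    # The constraint replaces a hand-rolled model_validator range check.
    max_tokens: int = Field(default=150, ge=0)
    top_k: int = Field(default=0, ge=0)

    @field_validator("top_k", mode="before")
    @classmethod
    def coalesce_top_k(cls, value):
        # Some OAI implementations send -1 for "disabled"; treat it as 0.
        return 0 if value == -1 else value


print(CompletionRequest(top_k=-1).top_k)  # 0
```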
* fix config file loader
* prune nonetype values from config dict
fixes default values not initialising properly
* Utils: Shrink None removal function
It is more concise to rebuild the list or dict with a comprehension
if necessary rather than iterating through and checking each value
in place. Tested and works with Tabby's cases (see the sketch below).
Signed-off-by: kingbri <bdashore3@proton.me>
---------
Signed-off-by: kingbri <bdashore3@proton.me>
Co-authored-by: kingbri <bdashore3@proton.me>
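In that spirit, a compact None-pruning helper built from comprehensions;
this is a sketch, not the repo's exact function:

```python
def prune_none(value):
    # Rebuild containers instead of mutating them while iterating.
    if isinstance(value, dict):
        return {k: prune_none(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [prune_none(v) for v in value if v is not None]
    return value


config = {"model": {"name": "test", "draft": None}, "port": 5000, "cache": None}
print(prune_none(config))  # {'model': {'name': 'test'}, 'port': 5000}
```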
Loaders that read use a safe type while loaders that write use both
round-trip and safe options.
Also don't create module-level parsers where they're not needed.
Signed-off-by: kingbri <bdashore3@proton.me>
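Illustrated with ruamel.yaml, which the round-trip/safe split suggests;
the function names are assumptions of this sketch:

```python
from ruamel.yaml import YAML


def read_config(path: str):
    # Readers only need the safe type: no arbitrary object construction.
    yaml = YAML(typ="safe")
    with open(path, "r", encoding="utf-8") as f:
        return yaml.load(f)


def write_config(path: str, data) -> None:
    # Writers use the round-trip type (the default) to preserve comments
    # and key order; the parser is created here, not at module level.
    yaml = YAML()
    with open(path, "w", encoding="utf-8") as f:
        yaml.dump(data, f)
```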
Using aiofiles, there's no longer a possibility of blocking file operations
that can hang up the event loop. In addition, partially migrate
classes to use asynchronous init instead of the normal Python magic method.
The only exception is config, since that's handled in the synchronous
init before the event loop starts.
Signed-off-by: kingbri <bdashore3@proton.me>
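The async-init pattern in miniature, since __init__ itself can't await;
the class and file here are illustrative:

```python
import aiofiles


class TemplateStore:
    def __init__(self, raw: str):
        self.raw = raw

    @classmethod
    async def create(cls, path: str) -> "TemplateStore":
        # aiofiles keeps the read from blocking the event loop.
        async with aiofiles.open(path, "r") as f:
            raw = await f.read()
        return cls(raw)
```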
Adds DRY support based on the current exl2 dev API. The only change for
optimization is dry_max_ngram instead of a closed range.
Currently, DRY range is aliased to dry_max_ngram.
Signed-off-by: kingbri <bdashore3@proton.me>
* returning the stop string from generation when one exists
* added chat template for firefunctionv2
* pulling tool vars from template
* adding parsing for tool inputs/outputs
* passing tool data from endpoint to chat template, adding tool_start to the stop list
* loosened typing on the response tool call, leaning more on the user supplying a quality schema if they want a particular format
* non streaming generation prototype
* cleaning template
* Continued work with type, ingestion into template, and chat template for fire func
* Correction: a streaming tool call comes back as a delta object, not inside a chat completion response choice, per chat_completion_chunk.py in the OAI lib.
* Ruff formatting
* Moved stop string and tool updates out of prompt creation func
Updated tool pydantic to match OAI
Support for streaming
Updated generate tool calls to use flag within chat_template and insert tool reminder
* Llama 3.1 chat templates
Updated fire func template
* renamed llama3.1 to chatml_with_headers
* update name of template
* Support for calling a tool start token rather than the string.
Simplified tool_params
Warn when gen_settings are overridden because the user set temp to 0
Corrected schema and tools to the correct types for function args (they were strings for some reason)
* draft groq tool use model template
* changed headers to vars for readability (but mostly because some models are weird about newlines after headers, so this is an easier way to change them globally)
* Clean up comments and code in chat comp
* Post-processed the tool call to meet the OAI spec rather than forcing the model to write JSON in a string in the middle of the call.
* changed the example back to args as JSON rather than a string of JSON
* Standardize chat templates to each other
* cleaning/rewording
* stop elements can also be ints (tokens)
* Cleaning/formatting
* added special tokens for tools and tool_response as specified in description
* Cleaning
* removing aux templates; they will live in the llm-promp-templates repo instead
* Tree: Format
Signed-off-by: kingbri <bdashore3@proton.me>
* Chat Completions: Don't include internal tool variables in OpenAPI
Use SkipJsonSchema to suppress inclusion in the OpenAPI JSON (see the
sketch after this list). The location of these variables may need to be
changed in the future.
Signed-off-by: kingbri <bdashore3@proton.me>
* Templates: Deserialize metadata on template load
Since we're only looking for specific template variables that are
static in the template, it makes more sense to render when the template
is initialized.
Signed-off-by: kingbri <bdashore3@proton.me>
* Tools: Fix comments
Adhere to the format style of comments in the rest of the project.
Signed-off-by: kingbri <bdashore3@proton.me>
---------
Co-authored-by: Ben Gitter <gitterbd@gmail.com>
Signed-off-by: kingbri <bdashore3@proton.me>
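A hedged sketch of the SkipJsonSchema change from the list above; the
field names are illustrative:

```python
from typing import Optional

from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema


class ChatCompletionRequest(BaseModel):
    messages: list = []
    # Internal tool plumbing: usable in code, hidden from the generated
    # OpenAPI JSON.
    tool_call_start: SkipJsonSchema[Optional[list]] = None
```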
Some of the parameters the API provides are aliases for their OAI
equivalents. It makes more sense to move them to the common file.
Signed-off-by: kingbri <bdashore3@proton.me>
List comprehensions are the more "pythonic" way to map
values to a list. They're also more flexible across different collection
types than the built-in map function. It's best to keep to one convention
rather than splitting across two.
Signed-off-by: kingbri <bdashore3@proton.me>
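The convention contrast in miniature:

```python
values = ["1", "2", "3"]

as_ints = [int(v) for v in values]    # preferred: one convention everywhere
as_ints_map = list(map(int, values))  # avoided: same result, second idiom
```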
If an override was iterable, any modification to the returned value
would alter the global storage dict through the shared reference.
Therefore, copy the structure if it's an iterable so any modification
won't alter the original override. Also apply this for the function
that checks for forced overrides.
Signed-off-by: kingbri <bdashore3@proton.me>
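Sketched below with illustrative names; the point is that container
values get copied on the way out:

```python
from copy import deepcopy

overrides = {"banned_strings": ["foo", "bar"]}


def get_override(key: str, default=None):
    value = overrides.get(key, default)
    # Hand back a copy of lists/dicts; returning the stored reference
    # would let callers mutate the global override dict in place.
    return deepcopy(value) if isinstance(value, (list, dict)) else value


get_override("banned_strings").append("baz")
print(overrides["banned_strings"])  # ['foo', 'bar'] -- original untouched
```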
From exllamav2: List of strings that the generator will refuse to output. As soon as a partial match happens, a checkpoint is saved that the generator can rewind to if need be. Subsequent tokens are then held until the full string is resolved (match or no match) and either emitted or discarded, accordingly.
Bans the EOS token until the generation reaches a minimum length. This will not prevent the model from otherwise ending the generation early by outputting other stop conditions.
This reverts commit 7556dcf134.
The Optionals allowed requests to send "null" in the body for optional
parameters, which should be allowed.
Signed-off-by: kingbri <bdashore3@proton.me>
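In miniature, with an illustrative field: without Optional, a literal
null in the body is rejected instead of accepted.

```python
from typing import Optional

from pydantic import BaseModel


class Params(BaseModel):
    top_p: Optional[float] = 1.0


# A JSON body of {"top_p": null} validates instead of erroring.
print(Params.model_validate({"top_p": None}))  # top_p=None
```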
Appends the banned tokens to the generation. This is equivalent to
setting a logit bias of -100 on a specific set of tokens.
Signed-off-by: kingbri <bdashore3@proton.me>
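The stated equivalence, sketched with made-up token IDs:

```python
banned_tokens = [128001, 128009]

# Banning is the same as pinning each token's logit bias to -100.
logit_bias = {token_id: -100 for token_id in banned_tokens}
print(logit_bias)  # {128001: -100, 128009: -100}
```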
Additive is used to add collections together. Currently, it's used
for lists, but it can be used for dictionaries in the future.
Signed-off-by: kingbri <bdashore3@proton.me>
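A sketch of the additive behavior with illustrative keys: an additive
list override extends the request value instead of replacing it.

```python
request_stop = ["</s>"]
override = {"value": ["<|im_end|>"], "additive": True}

if override["additive"]:
    # Additive: add the collections together.
    merged = request_stop + override["value"]
else:
    merged = override["value"]

print(merged)  # ['</s>', '<|im_end|>']
```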
If max_tokens is None, it automatically scales to fill up the context.
This does not mean the generation will fill up that context since
EOS stops also exist.
Originally suggested by #86
Signed-off-by: kingbri <bdashore3@proton.me>
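The scaling rule above, sketched with illustrative names:

```python
def resolve_max_tokens(max_tokens, context_len: int, prompt_len: int) -> int:
    # None stretches the generation to whatever context remains after
    # the prompt; EOS and other stop conditions can still end it early.
    return context_len - prompt_len if max_tokens is None else max_tokens


print(resolve_max_tokens(None, 4096, 1024))  # 3072
print(resolve_max_tokens(200, 4096, 1024))   # 200
```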
Speculative ngram decoding is like speculative decoding without the
draft model. It's not as useful because it only speeds up predictable
sequences, but its value depends on the use case.
Signed-off-by: kingbri <bdashore3@proton.me>
Use the module singleton pattern to share global state. This can also
be a modified version of the Global Object Pattern. The main reason
this pattern is used is for ease of use when handling global state
rather than adding extra dependencies for a DI parameter.
Signed-off-by: kingbri <bdashore3@proton.me>
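The pattern in miniature, with an illustrative module layout:

```python
# state.py -- module-level state shared by everyone who imports it.
container = None


def set_container(value) -> None:
    global container
    container = value


# Consumers do `import state` and read `state.container`; importing the
# attribute directly would freeze the binding at import time.
```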
Loguru is a flexible logger that allows for easier hooking and
integrates into Rich with no problems. It also makes progress bars
stick to the bottom of the terminal window.
Signed-off-by: kingbri <bdashore3@proton.me>
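A minimal loguru setup for flavor; the sink and format string here are
assumptions, not Tabby's actual config:

```python
import sys

from loguru import logger

logger.remove()  # drop the default stderr sink
logger.add(sys.stdout, level="INFO", format="{time:HH:mm:ss} {level} {message}")
logger.info("Model loaded")
```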
Using the Outlines library, add support for supplying EBNF strings and
passing them to the library for parsing.
From there, a wrapper is created and a filter is passed to generation.
This should be replaced with a more flexible in-house solution at some point.
Signed-off-by: kingbri <bdashore3@proton.me>
Add the ability to constrain the return value of a model to be JSON.
Built using the JSON schema standard to define the properties of what
the model should return.
This feature should be more accurate than using GBNF/EBNF to yield
the same results due to the use of lmformatenforcer.
GBNF/EBNF will be added in a different commit/branch.
Signed-off-by: kingbri <bdashore3@proton.me>
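A hedged sketch of the schema side; JsonSchemaParser is lmformatenforcer's
entry point, while the wiring into the exl2 filter chain is assumed here:

```python
from lmformatenforcer import JsonSchemaParser

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}

# The parser walks the schema token by token; the backend then wraps it
# into a generation filter so sampled tokens can only form valid JSON.
parser = JsonSchemaParser(schema)
```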
Injecting into Pydantic fields caused issues with serialization for
documentation rendering. Rather than reinvent the wheel again,
switch to a chain of if statements for now. This may change in the
future if subclasses from the base sampler request need to be
validated as well.
Signed-off-by: kingbri <bdashore3@proton.me>
Rather than maintaining yet another function to validate sampler
ranges/values, embed the checks in the fields themselves, which allows
for less maintenance in the future.
Also add validation for existing samplers that can corrupt
the sampling stack if set improperly.
Signed-off-by: kingbri <bdashore3@proton.me>
Returns token offsets, selected tokens, probabilities of tokens
post-sampling, and normalized probability of selecting a token
pre-sampling (for efficiency purposes).
Only for text completions. Chat completions in a later commit.
Signed-off-by: kingbri <bdashore3@proton.me>
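For shape reference, the legacy OAI-style logprobs block this maps to,
with made-up values:

```python
logprobs = {
    "text_offset": [0, 6],             # token offsets into the output text
    "tokens": [" Hello", ","],         # the selected tokens
    "token_logprobs": [-0.12, -0.58],  # post-sampling log probabilities
    "top_logprobs": [                  # pre-sampling candidates, normalized
        {" Hello": -0.12, " Hi": -2.31},
        {",": -0.58},
    ],
}
```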