OAI: Add ability to send dummy models

Some APIs require an OAI model name to be present in the response from
the models endpoint. Fix this by adding a gpt-3.5-turbo entry as the
first item in the list to cover as many APIs as possible.

Signed-off-by: kingbri <bdashore3@proton.me>
kingbri 2023-12-01 00:27:28 -05:00
parent aef411bed5
commit 6493b1d2aa
2 changed files with 6 additions and 0 deletions


@@ -23,6 +23,10 @@ model:
   # A model can be loaded later via the API.
   model_name: A model name
 
+  # Sends dummy model names when the models endpoint is queried
+  # Enable this if the program is looking for a specific OAI model
+  use_dummy_models: False
+
   # The below parameters apply only if model_name is set
   # Maximum model context length (default: 4096)
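
For illustration, a minimal sketch of a config with the new flag turned
on (the model name here is a placeholder, not taken from the commit):

model:
  # Placeholder name; use whatever model you actually load
  model_name: my-local-model
  # Return a dummy gpt-3.5-turbo entry from the models endpoint
  use_dummy_models: True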


@@ -60,6 +60,8 @@ async def list_models():
     draft_model_dir = draft_config.get("draft_model_dir")
 
     models = get_model_list(model_path.resolve(), draft_model_dir)
+    if model_config.get("use_dummy_models", False):
+        models.data.insert(0, ModelCard(id="gpt-3.5-turbo"))
 
     return models
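
To see the effect end to end, here is a minimal client-side sketch; it
assumes a local instance serving the OAI-compatible API on
localhost:5000 (host and port are assumptions, adjust for your
deployment):

import requests

# Query the OAI-compatible models endpoint.
resp = requests.get("http://localhost:5000/v1/models")
resp.raise_for_status()
model_ids = [card["id"] for card in resp.json()["data"]]

# With use_dummy_models enabled, the dummy entry sits at index 0,
# so strict clients that scan for a known OAI model name find it first.
assert model_ids[0] == "gpt-3.5-turbo"
print(model_ids)

Inserting at index 0 rather than appending matters for clients that
only inspect the first entry of the list.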