tabbyAPI-ollama/common
kingbri c82697fef2 API: Fix issues with concurrent requests and queueing
This is the first of many commits that will overhaul the API
to be more robust and concurrent. The model is admin-first: the
admin can do anything in case something goes awry.

Previously, calls to long-running synchronous background tasks would
block the entire API, making it ignore any terminal signals until
generation completed.

To fix this, leverage FastAPI's run_in_threadpool to offload the
long-running tasks to another thread. However, signals to abort the
process still kept the background thread running and made the terminal
hang.
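The offloading described above can be sketched with the stdlib mechanism that run_in_threadpool wraps: awaiting the blocking call on a worker thread via the event loop's default executor. The generate_completion function here is a hypothetical stand-in for the real generation call.

```python
import asyncio
import time


def generate_completion(prompt: str) -> str:
    # Stand-in for a blocking, long-running generation call (hypothetical).
    time.sleep(0.05)
    return f"done: {prompt}"


async def create_completion(prompt: str) -> str:
    # Offloading to the default thread pool keeps the event loop free,
    # so the server can keep handling other requests while generating.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, generate_completion, prompt)


result = asyncio.run(create_completion("hello"))
```

In a FastAPI route the equivalent is `await run_in_threadpool(generate_completion, prompt)` from `fastapi.concurrency`.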

This was due to an issue where Uvicorn does not propagate the SIGINT
signal across threads in its event loop. As a catch-all fix, run
the API processes in a separate thread so the main thread can still
kill the process if needed.
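A minimal sketch of that arrangement, with a placeholder loop standing in for the server (in the real code this would be Uvicorn's blocking run): the server runs on a daemon thread while the main thread remains free to receive SIGINT and shut the worker down.

```python
import threading
import time

stop_event = threading.Event()


def serve() -> None:
    # Hypothetical stand-in for the blocking server loop; a daemon
    # thread means the process can always exit even if this hangs.
    while not stop_event.is_set():
        time.sleep(0.01)


server_thread = threading.Thread(target=serve, daemon=True)
server_thread.start()

# The main thread stays responsive to terminal signals; on shutdown
# it tells the worker to stop and joins it instead of hanging.
stop_event.set()
server_thread.join(timeout=1.0)
alive = server_thread.is_alive()
```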

In addition, make request error logging more robust and refer to the
console for full error logs rather than creating a long message on the
client-side.

Finally, add state checks to see if a model is fully loaded before
generating a completion.
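The state check could look like the following sketch; ModelState and check_loaded are illustrative names, not the actual tabbyAPI API. Requests arriving before the model finishes loading are rejected with an error that points the client at the console logs.

```python
from enum import Enum, auto


class ModelState(Enum):
    UNLOADED = auto()
    LOADING = auto()
    LOADED = auto()


class ModelContainer:
    # Hypothetical container that tracks load state for the API.
    def __init__(self) -> None:
        self.state = ModelState.UNLOADED

    def check_loaded(self) -> None:
        # Refuse to generate until the model is fully loaded; keep the
        # client message short and refer to the console for details.
        if self.state is not ModelState.LOADED:
            raise RuntimeError(
                "Model is not fully loaded; see the console for full error logs."
            )


container = ModelContainer()
try:
    container.check_loaded()
    rejected = False
except RuntimeError:
    rejected = True

container.state = ModelState.LOADED
container.check_loaded()  # no exception once loading has finished
```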

Signed-off-by: kingbri <bdashore3@proton.me>
2024-03-04 23:21:40 -05:00
args.py Config: Add experimental torch cuda malloc backend 2024-02-14 21:45:56 -05:00
auth.py Auth: Create keys on different exception 2024-02-04 01:56:42 -05:00
config.py Launch: Make exllamav2 requirement more friendly 2024-02-02 23:36:17 -05:00
gen_logging.py Tree: Refactor code organization 2024-01-25 00:15:40 -05:00
generators.py API: Fix issues with concurrent requests and queueing 2024-03-04 23:21:40 -05:00
logger.py Tree: Refactor code organization 2024-01-25 00:15:40 -05:00
sampling.py Model: Add EBNF grammar support 2024-02-24 23:40:11 -05:00
templating.py API: Add template switching and unload endpoints 2024-01-25 00:15:40 -05:00
utils.py API: Fix issues with concurrent requests and queueing 2024-03-04 23:21:40 -05:00