# Fixes `400` and `422` errors when using Mistral models in OpenWebUI
If you're using OpenWebUI with Mistral AI, you may run into issues like:
- `422: OpenWebUI: Server Connection Error` when defining a Mistral model as your base model.
- `400: OpenWebUI: Server Connection Error` when clicking "Continue Response".
These errors happen because OpenWebUI assumes OpenAI API semantics that Mistral's API doesn't fully support — such as unsupported fields or improperly structured messages.
This Python-based proxy acts as middleware between OpenWebUI and the Mistral API:
- Cleans up unsupported fields in the payload (e.g., `logit_bias`, `user`, etc.).
- Ensures `Continue Response` requests are handled properly by appending a `"Continue response"` message if needed.
- Proxies all other requests transparently, maintaining headers and stream support.
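A minimal sketch of this logic, assuming the cleanup happens on the `/v1/chat/completions` route, is shown below. The real `proxy.py` also forwards other routes (such as `/v1/models`) transparently, and names like `UNSUPPORTED_FIELDS` and the exact shape of the appended turn are illustrative rather than copied from the actual implementation.

```python
# proxy.py (sketch): strip fields Mistral rejects and patch "Continue Response".
import os

import requests
from dotenv import load_dotenv
from flask import Flask, Response, request

load_dotenv()

MISTRAL_API_KEY = os.environ["MISTRAL_API_KEY"]
MISTRAL_BASE = "https://api.mistral.ai/v1"
UNSUPPORTED_FIELDS = {"logit_bias", "user"}  # illustrative list

app = Flask(__name__)


@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    payload = request.get_json(force=True)

    # Drop OpenAI-only fields that trigger 422 responses from Mistral.
    for field in UNSUPPORTED_FIELDS:
        payload.pop(field, None)

    # "Continue Response" sends a conversation ending with an assistant turn,
    # which Mistral rejects with a 400; append a user message so it validates.
    messages = payload.get("messages", [])
    if messages and messages[-1].get("role") == "assistant":
        messages.append({"role": "user", "content": "Continue response"})

    # Forward the cleaned request, preserving streaming when requested.
    upstream = requests.post(
        f"{MISTRAL_BASE}/chat/completions",
        json=payload,
        headers={"Authorization": f"Bearer {MISTRAL_API_KEY}"},
        stream=payload.get("stream", False),
    )
    return Response(
        upstream.iter_content(chunk_size=None),
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/json"),
    )


if __name__ == "__main__":
    app.run(port=8880)
```

The steps below wire this up end to end.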
- Install dependencies: `pip install flask requests python-dotenv`
- Create a `.env` file with your Mistral API key: `MISTRAL_API_KEY=your_actual_key_here`
- Run the proxy server: `python proxy.py` (by default it runs at `http://localhost:8880`)
- Configure OpenWebUI:
  - Go to Settings → Connections → OpenAI API
  - Set:
    - API Base: `http://localhost:8880/v1`
    - API Key: anything (it will be ignored)
  - Use one of Mistral's models as your base model for a Knowledge or Chat session.
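Once the proxy is running, a quick way to confirm the wiring (independent of OpenWebUI) is to send a request straight at it. The model id below is just an example; use whichever Mistral model you plan to set as your base model.

```python
# Smoke test: talk to the local proxy like any OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:8880/v1/chat/completions",
    json={
        "model": "mistral-small-latest",  # example model id
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```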
OpenWebUI expects OpenAI-like behavior, but Mistral's API:

- Doesn't support certain fields
- Can't continue from assistant messages the same way
- Requires stricter message validation

This proxy bridges the gap — giving you a smooth dev experience with OpenWebUI and Mistral models.
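For instance, when you click "Continue Response", OpenWebUI sends a conversation whose last message is from the assistant, which Mistral rejects. The snippet below illustrates the before/after message lists, assuming the fix-up is an appended user turn as in the sketch above; the conversation content itself is made up.

```python
# Payload OpenWebUI sends on "Continue Response": ends with an assistant turn,
# which Mistral's chat endpoint rejects with a 400.
messages = [
    {"role": "user", "content": "Write a haiku about the sea."},
    {"role": "assistant", "content": "Waves fold into foam,"},
]

# What the proxy forwards instead, so the request validates.
messages += [{"role": "user", "content": "Continue response"}]
```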
- This is a minimal, hacky fix – not a production-grade proxy.
- Tested with `open-webui` (latest) and `mistral-small` / `mistral-medium` models.