24 changes: 21 additions & 3 deletions docs/getting-started/env-configuration.mdx
@@ -469,11 +469,29 @@ allowing the client to wait indefinitely.

- Type: `int`
- Default: `10`
- Description: Sets the timeout in seconds for fetching the model list. This can be useful in cases where network latency requires a longer timeout duration to successfully retrieve the model list.
- Description: Sets the timeout in seconds for fetching the model list from Ollama and OpenAI endpoints. This affects how long Open WebUI waits for each configured endpoint when loading available models.

:::note
:::note When to Adjust This Value

**Lower the timeout** (e.g., `3`) if:
- You have multiple endpoints configured and want faster failover when one is unreachable
- You prefer the UI to load quickly even if some slow endpoints are skipped

**Increase the timeout** (e.g., `30`) if:
- Your model servers are slow to respond (e.g., cold starts, large model loading)
- You're connecting over high-latency networks
- You're using providers like OpenRouter that may have variable response times

:::
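
For example, with a Docker-based install the variable can be passed at container start (a sketch; adjust image, ports, and volumes to your deployment):

```bash
# Start Open WebUI with a 3-second model list timeout for faster failover
docker run -d -p 3000:8080 \
  -e AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=3 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```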

:::warning Database Persistence

Connection URLs configured via the Admin Settings UI are **persisted in the database** and take precedence over environment variables. If you save an unreachable URL and the UI becomes unresponsive, you may need to use one of these recovery options:

- `RESET_CONFIG_ON_START=true` - Resets database config to environment variable values on next startup
- `ENABLE_PERSISTENT_CONFIG=false` - Always use environment variables (UI changes won't persist)

The AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST is set to 10 seconds by default to help ensure that all necessary connections are available when opening the web UI. This duration allows enough time for retrieving the model list even in cases of higher network latency. You can lower this value if quicker timeouts are preferred, but keep in mind that doing so may lead to some connections being dropped, depending on your network conditions.
See the [Model List Loading Issues](/troubleshooting/connection-error#️-model-list-loading-issues-slow-ui--unreachable-endpoints) troubleshooting guide for detailed recovery steps.

:::

13 changes: 13 additions & 0 deletions docs/getting-started/quick-start/starting-with-llama-cpp.mdx
@@ -104,6 +104,19 @@ To control and query your locally running model directly from Open WebUI:

💡 Once saved, Open WebUI will begin using your local Llama.cpp server as a backend!

:::tip Connection Timeout Configuration

If your Llama.cpp server is slow to initialize or you see timeout errors, you can increase the model list fetch timeout:

```bash
# Increase timeout for slower model loading (default is 10 seconds)
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=30
```

If you've saved an unreachable URL and the UI becomes unresponsive, see the [Model List Loading Issues](/troubleshooting/connection-error#️-model-list-loading-issues-slow-ui--unreachable-endpoints) troubleshooting guide.

:::
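
To confirm the server responds within your chosen timeout, you can time its model list route directly (a sketch, assuming `llama-server` on its default port `8080`; adjust host and port to your setup):

```bash
# Should return the served models as JSON well within the timeout
curl --max-time 30 http://127.0.0.1:8080/v1/models
```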

![Llama.cpp Connection in Open WebUI](/images/tutorials/deepseek/connection.png)

---
13 changes: 13 additions & 0 deletions docs/getting-started/quick-start/starting-with-ollama.mdx
@@ -36,6 +36,19 @@ To manage your Ollama instance in Open WebUI, follow these steps:
* **Prefix ID**: If you have multiple Ollama instances serving the same model names, use a prefix (e.g., `remote/`) to distinguish them.
* **Model IDs (Filter)**: Make specific models visible by listing them here (whitelist). Leave empty to show all.

:::tip Connection Timeout Configuration

When using multiple Ollama instances (especially across networks), connection delays can occur if an endpoint is unreachable. You can adjust the timeout using:

```bash
# Lower the timeout (default is 10 seconds) for faster failover
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=3
```

If you've saved an unreachable URL and can't access Settings to fix it, see the [Model List Loading Issues](/troubleshooting/connection-error#️-model-list-loading-issues-slow-ui--unreachable-endpoints) troubleshooting guide.

:::
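
To check which instance is slow or unreachable, you can time each endpoint's model list route yourself (a sketch; substitute your own hosts):

```bash
# Ollama serves its model list at /api/tags (default port 11434)
curl --max-time 3 http://localhost:11434/api/tags
curl --max-time 3 http://remote-host:11434/api/tags
```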

Here’s what the management screen looks like:

![Ollama Management Screen](/images/getting-started/quick-start/manage-ollama.png)
@@ -57,6 +57,19 @@ See [their docs](https://lemonade-server.ai/) for details.
- **API Key**: Leave blank unless the server requires one.
6. Click **Save**.

:::tip Connection Timeout Configuration

If your local server is slow to start or you're connecting over a high-latency network, you can adjust the model list fetch timeout:

```bash
# Adjust timeout for slower connections (default is 10 seconds)
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=5
```

If you've saved an unreachable URL and the UI becomes unresponsive, see the [Model List Loading Issues](/troubleshooting/connection-error#️-model-list-loading-issues-slow-ui--unreachable-endpoints) troubleshooting guide for recovery options.

:::
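
If you run Open WebUI directly via pip rather than Docker, the variable can simply be exported before launching (a minimal sketch):

```bash
# Apply the timeout for this shell session, then start Open WebUI
export AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=5
open-webui serve
```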

:::tip

If running Open WebUI in Docker and your model server on your host machine, use `http://host.docker.internal:<your-port>/v1`.
13 changes: 13 additions & 0 deletions docs/getting-started/quick-start/starting-with-openai.mdx
@@ -83,6 +83,19 @@ Once Open WebUI is running:

This securely stores your credentials.

:::tip Connection Timeout Configuration

If your API provider is slow to respond or you're experiencing timeout issues, you can adjust the model list fetch timeout:

```bash
# Increase timeout for slow networks (default is 10 seconds)
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=15
```

If you've saved an unreachable URL and the UI becomes unresponsive, see the [Model List Loading Issues](/troubleshooting/connection-error#️-model-list-loading-issues-slow-ui--unreachable-endpoints) troubleshooting guide for recovery options.

:::
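
You can also query the provider's model list route directly to see whether the delay comes from the network or from Open WebUI (a sketch using the official OpenAI endpoint; substitute your provider's base URL and key):

```bash
# Lists available models; should complete well within the timeout
curl --max-time 15 -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models
```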

![OpenAI Connection Screen](/images/getting-started/quick-start/manage-openai.png)

---
13 changes: 13 additions & 0 deletions docs/getting-started/quick-start/starting-with-vllm.mdx
@@ -38,3 +38,16 @@ For remote servers, use the appropriate hostname or IP address.
## Step 3: Start Using Models

Select any model that's available on your vLLM server from the Model Selector and start chatting.

:::tip Connection Timeout Configuration

If your vLLM server is slow to respond (especially during model loading), you can adjust the timeout:

```bash
# Increase timeout for slower model initialization (default is 10 seconds)
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=30
```

If you've saved an unreachable URL and the UI becomes unresponsive, see the [Model List Loading Issues](/troubleshooting/connection-error#️-model-list-loading-issues-slow-ui--unreachable-endpoints) troubleshooting guide.

:::
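
To verify whether the server itself is the bottleneck, you can time its model list route directly (a sketch, assuming vLLM's OpenAI-compatible server on its default port `8000`; adjust host and port as needed):

```bash
# Should return the served model(s) as JSON; a slow response here means the server, not Open WebUI
curl --max-time 30 http://localhost:8000/v1/models
```
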
75 changes: 75 additions & 0 deletions docs/troubleshooting/connection-error.mdx
@@ -141,6 +141,81 @@ docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=
```
🔗 After running the above, your WebUI should be available at `http://localhost:8080`.

## ⏱️ Model List Loading Issues (Slow UI / Unreachable Endpoints)

If Open WebUI takes a long time to load models, or the model selector spins indefinitely, the cause may be an unreachable or slow API endpoint in your configured connections.

### Common Symptoms

- Model selector shows a loading spinner for extended periods
- `500 Internal Server Error` on `/api/models` endpoint
- UI becomes unresponsive when opening Settings
- Docker/server logs show: `Connection error: Cannot connect to host...`

### Cause: Unreachable Endpoints

When you configure multiple Ollama or OpenAI base URLs (for load balancing or redundancy), Open WebUI attempts to fetch models from **all** configured endpoints. If any endpoint is unreachable, the system waits for the full connection timeout before returning results.

By default, Open WebUI waits **10 seconds** per unreachable endpoint when fetching the model list. With multiple bad endpoints, this delay compounds: three unreachable endpoints, for example, can add roughly 30 seconds before the model list appears.

### Solution 1: Adjust the Timeout

Lower the timeout for model list fetching using the `AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST` environment variable:

```bash
# Set a shorter timeout (in seconds) for faster failure on unreachable endpoints
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=3
```

This reduces how long Open WebUI waits for each endpoint before giving up and continuing.

### Solution 2: Fix or Remove Unreachable Endpoints

1. Go to **Admin Settings → Connections**
2. Review your Ollama and OpenAI base URLs (you can test each one first, as sketched below)
3. Remove or correct any unreachable IP addresses or hostnames
4. Save the configuration
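
To test each URL before editing, you can time the model list route of every configured endpoint from the host (a sketch; substitute your own URLs — Ollama lists models at `/api/tags`, OpenAI-compatible servers at `/v1/models`):

```bash
# Prints HTTP status and response time per endpoint ("000" means unreachable)
for url in http://192.168.1.10:11434/api/tags http://192.168.1.20:8000/v1/models; do
  printf '%s -> ' "$url"
  curl -s -o /dev/null --max-time 3 -w '%{http_code} in %{time_total}s\n' "$url"
done
```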

### Solution 3: Recover from Database-Persisted Bad Configuration

If you saved an unreachable URL and now can't access the Settings UI to fix it, the bad configuration is persisted in the database and takes precedence over environment variables. Use one of these recovery methods:

**Option A: Reset configuration on startup**
```bash
# Forces environment variables to override database values on next startup
RESET_CONFIG_ON_START=true
```
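
Applied to a Docker install, recovery might look like this (a sketch; adjust container name, ports, and volumes to your deployment):

```bash
# Recreate the container with the reset flag so env vars override the bad saved URL
docker rm -f open-webui
docker run -d -p 3000:8080 \
  -e RESET_CONFIG_ON_START=true \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Once the UI loads again, you may want to remove the flag so configuration changes made in the UI persist across restarts.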

**Option B: Always use environment variables**
```bash
# Prevents database values from taking precedence (changes in UI won't persist across restarts)
ENABLE_PERSISTENT_CONFIG=false
```

**Option C: Manual database cleanup (advanced)**

If using SQLite, stop the container and run:
```bash
# Back up the database first, then delete the persisted connection URL entries
cp webui.db webui.db.bak
sqlite3 webui.db "DELETE FROM config WHERE id LIKE '%urls%';"
```

:::warning

Manual database manipulation should be a last resort. Always back up your database first.

:::

### Related Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST` | `10` | Timeout (seconds) for fetching model lists |
| `AIOHTTP_CLIENT_TIMEOUT` | `300` | General API request timeout |
| `RESET_CONFIG_ON_START` | `false` | Reset database config to env var values on startup |
| `ENABLE_PERSISTENT_CONFIG` | `true` | Whether database config takes precedence over env vars |

See the [Environment Configuration](/getting-started/env-configuration#aiohttp_client_timeout_model_list) documentation for more details.

## 🔒 SSL Connection Issue with Hugging Face

Encountered an SSL error? It could be an issue with the Hugging Face server. Here's what to do: