diff --git a/docs/getting-started/index.md b/docs/getting-started/index.md index 2b1c4b1a2..3235614c7 100644 --- a/docs/getting-started/index.md +++ b/docs/getting-started/index.md @@ -7,28 +7,85 @@ import { TopBanners } from "@site/src/components/TopBanners"; ---- +## Getting Started -Welcome to the **Open WebUI Documentation Hub!** Below is a list of essential guides and resources to help you get started, manage, and develop with Open WebUI. +Everything you need to go from zero to running—and beyond. --- ## ⏱️ Quick Start -Get up and running quickly with our [Quick Start Guide](/getting-started/quick-start). +Get Open WebUI running, then connect it to your models. Most users start here. + +**What's inside:** + +- Docker installation (with and without bundled Ollama) +- Python/uv installation +- Connecting to model providers (Ollama, OpenAI, vLLM, Llama.cpp, and more) +- Your first custom function + +Whether you're running locally or connecting to external APIs, this is your launchpad. + +[Go to Quick Start →](/getting-started/quick-start) --- ## 🛠️ Advanced Topics -Take a deeper dive into configurations and development tips in our [Advanced Topics Guide](/getting-started/advanced-topics). +Running in production? Securing for your team? Go deeper. + +**What's inside:** + +- Local development and contributing +- Network architecture and diagrams +- Logging and debugging +- HTTPS and reverse proxy setup +- Monitoring and observability (OpenTelemetry) + +Yeah, it's a few steps. Worth it though. + +[Go to Advanced Topics →](/getting-started/advanced-topics) --- ## 🔄 Updating Open WebUI -Stay current with the latest features and security patches with our [Updating Open WebUI](./updating) guide. +Already deployed? Stay current with the latest features and security patches. 
+ +**What's inside:** + +- Manual update process +- Automated updates with Watchtower +- Version pinning for production stability + +[Go to Updating Guide →](/getting-started/updating) --- -Happy exploring! 🎉 If you have questions, join our [community](https://discord.gg/5rJgQTnV4s) or raise an issue on [GitHub](https://github.com/open-webui/open-webui). +## 💡 Explore What's Possible + +Already up and running? Ready to go deeper? + +- **[Upload documents and use RAG →](/features/rag)** Ground responses in your data +- **[Enable web search →](/features/web-search)** Let models access current information _(SearXNG Recommended)_ +- **[Generate images →](/features/image-generation-and-editing)** Connect to AUTOMATIC1111, ComfyUI, or DALL-E _(ComfyUI Recommended)_ +- **[Explore all features →](/features)** Voice, tools, pipelines, and more + +--- + +Looking for something specific? + +| Resource | Description | +|----------|-------------| +| **[Environment Variables](/getting-started/env-configuration)** | Every configuration option in one place | +| **[API Endpoints](/getting-started/api-endpoints)** | Integrate programmatically, automate workflows | + +--- + +## Need Help? + +Stuck? We've got you. + +- **[Troubleshooting →](/troubleshooting)** How and where to get support + +- **[Discord →](https://discord.gg/5rJgQTnV4s)** Ask the community or our trusty support bot for advice +- **[GitHub Issues →](https://github.com/open-webui/open-webui/issues)** Report bugs \ No newline at end of file diff --git a/docs/getting-started/quick-start/index.mdx b/docs/getting-started/quick-start/index.mdx index 91195bb66..a555b55d6 100644 --- a/docs/getting-started/quick-start/index.mdx +++ b/docs/getting-started/quick-start/index.mdx @@ -24,156 +24,239 @@ import PythonUpdating from './tab-python/PythonUpdating.md'; -:::info +# Quick Start -**Important Note on User Roles and Privacy:** +Let's get you running. 
Pick your deployment method—all paths lead to localhost. -- **Admin Creation:** The first account created on Open WebUI gains **Administrator privileges**, controlling user management and system settings. -- **User Registrations:** Subsequent sign-ups start with **Pending** status, requiring Administrator approval for access. -- **Privacy and Data Security:** **All your data**, including login details, is **locally stored** on your device. Open WebUI ensures **strict confidentiality** and **no external requests** for enhanced privacy and security. - - **All models are private by default.** Models must be explicitly shared via groups or by being made public. If a model is assigned to a group, only members of that group can see it. If a model is made public, anyone on the instance can see it. +--- + +## Choose Your Path + +| Method | Best for | Time to deploy | +|--------|----------|----------------| +| **Docker** | Most users. Clean, isolated, officially supported. | ~5 minutes | +| **Python** | Low-resource environments, development, or no Docker available. | ~10 minutes | +| **Kubernetes** | Production scale, orchestration, enterprise deployments. | 30+ minutes | +| **Third-party** | One-click installers like Pinokio. Community-supported. | Varies | + +--- + +## Before You Begin + +:::info Admin & User Roles + +The first account created on Open WebUI gets **Administrator privileges**—this controls user management, system settings, and permissions for your entire instance. + +Subsequent sign-ups start with **Pending** status, requiring Administrator approval for access. Plan your admin account before sharing access. + +::: + +:::tip Your Data Stays Yours + +**All your data**—including login details, conversations, and documents—is **stored locally** on your infrastructure. Open WebUI makes no external requests. No telemetry. No phoning home. + +**Models are private by default.** Models must be explicitly shared via groups or made public. 
If assigned to a group, only members see it. If made public, anyone on your instance can access it. ::: -Choose your preferred installation method below: +--- + +## Installation + + + -- **Docker:** **Officially supported and recommended for most users** -- **Python:** Suitable for low-resource environments or those wanting a manual setup -- **Kubernetes:** Ideal for enterprise deployments that require scaling and orchestration +**The recommended path.** Docker keeps Open WebUI isolated and portable. Works on Linux, macOS, and Windows. - - - -
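Prefer Docker Compose to a raw `docker run`? A minimal sketch is below; it mirrors the flags used in this guide (default port mapping, a named data volume, and the host gateway entry for reaching a local Ollama). Treat it as a starting point rather than a canonical file.

```yaml
# docker-compose.yml (sketch): swap the image tag (:cuda, :ollama) to match your setup.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: always

volumes:
  open-webui:
```

Bring it up with `docker compose up -d`; update later by pulling a newer image and re-running the same command.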
- - -
-
- - -
- -
-
- - -
- -
-
- - -
- -
-
- - -
- -
-
- - -
- -
-
- - -
- -
-
- -
- -
-
-
+ +
+ + +
+
+ + +
+ +
+
+ + +
+ +
+
+ + +
+ +
+
+ + +
+ +
+
+ + +
+ +
+
+ + +
+ +
+
+ +
+ +
+
+ +
+ - - -
- -
- -
- - -
- -
- -
- - -
- -
- - -
- - -
-

Development Setup

-

- For developers who want to contribute, check the Development Guide in Advanced Topics. -

-
-
- -
+ +**No Docker? No problem.** Run Open WebUI directly as a Python application. Good for development, debugging, or constrained environments. + +**Prerequisites:** Python 3.11+ + + + +
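For the impatient, the `uv` route boils down to a single command. Set `DATA_DIR` so chats and settings persist across restarts (on Windows PowerShell, set `$env:DATA_DIR` first):

```bash
DATA_DIR=~/.open-webui uvx --python 3.11 open-webui@latest serve
```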
+ + +
+
+ + +
+ + +
+
+ + +
+ + +
+
+ + +
+

Development Setup

+

+ Want to contribute? Check the Development Guide in Advanced Topics for build instructions, hot-reloading, and contribution guidelines. +

+
+
+
+
- - - - - + +**Built for scale.** Deploy Open WebUI on Kubernetes with Helm for production-grade orchestration, high availability, and enterprise requirements. + +**Prerequisites:** Kubernetes cluster, Helm 3.x, `kubectl` configured + +
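To illustrate what a customized install can look like, here is a hypothetical `values.yaml` override for the community Helm chart. The chart name and key names (`ollama.enabled`, `persistence`) are assumptions; verify them against your chart version's values reference before applying.

```yaml
# values.yaml (illustrative): confirm key names against the chart you install.
ollama:
  enabled: false   # connect to an external Ollama instead of bundling one
persistence:
  enabled: true
  size: 2Gi        # storage for uploads, vector data, and the app database
```

Applied with something like `helm install open-webui open-webui/open-webui -f values.yaml`.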
+ +
+
- - - - ### Pinokio.computer Installation + + +**Community-supported options.** These installers are maintained by third parties—support is provided through their respective channels. + + + + +### Pinokio.computer - For installation via Pinokio.computer, please visit their website: +One-click installation via Pinokio: - [https://pinokio.computer/](https://pinokio.computer/) +[https://pinokio.computer/](https://pinokio.computer/) - Support for this installation method is provided through their website. - - +For support, visit their website or community channels. - ### Additional Third-Party Integrations + + + +*Additional third-party integrations will be listed here as they become available.* - *(Add information about third-party integrations as they become available.)*
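Whichever method you picked, you can confirm the server is responding before opening a browser. The sketch below assumes the default Docker port mapping (use 8080 for a Python install) and the server's health endpoint:

```bash
# Exit code 0 and a small JSON body indicate the server is up.
curl -fsS http://localhost:3000/health
```

A connection error usually means the container is still starting or your port mapping differs.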
-## Next Steps +--- + +## Installation Complete 🎉 + +You're in. Open WebUI is running at: + +- **Docker:** [http://localhost:3000](http://localhost:3000) +- **Python:** [http://localhost:8080](http://localhost:8080) + +Now let's make it _yours_... + +--- + +## Connect Your Models + +Open WebUI needs something to talk to. Choose your path: + +| Provider | Guide | +|----------|-------| +| **Ollama** — The easiest way to run models locally | [Ollama Setup →](/getting-started/quick-start/starting-with-ollama) | +| **OpenAI** — Connect directly to OpenAI's latest models | [OpenAI Setup →](/getting-started/quick-start/starting-with-openai) | +| **OpenAI-Compatible APIs** — Gemini, vLLM, Llama.cpp, LiteLLM, and any provider using the OpenAI format (local or remote) | [OpenAI-Compatible Setup →](/getting-started/quick-start/starting-with-openai-compatible) | +| **Functions** — Direct API access to providers like Claude and Bedrock. Build custom tools, integrations, and pipelines. | [Functions Setup →](/getting-started/quick-start/starting-with-functions) | + +:::tip Not sure which to choose? +You only need one to get started—pick whichever fits your setup today. **Ollama** is the fastest path to local models. **OpenAI-Compatible** works with most providers, local or cloud. Use **Functions** for providers without OpenAI-compatible endpoints, or when you want to build something custom. -After installing, visit: +The best part? Mix and match later. Add more providers anytime. +::: + +_See provider-specific connectivity guides in the sidebar._ + +--- + +## Complete Your Setup -- [http://localhost:3000](http://localhost:3000) to access Open WebUI. -- or [http://localhost:8080/](http://localhost:8080/) when using a Python deployment. +A few more steps to get the most out of Open WebUI: -You are now ready to start using Open WebUI! +| Task | Why it matters | +|------|----------------| +| **Set up remote access** | Access Open WebUI from any device, anywhere—securely. 
We recommend [Tailscale](https://tailscale.com/). | +| **[Install the PWA on other devices](/features#key-features-of-open-webui-)** | Native app experience on mobile and desktop. Requires HTTPS. | +| **[Add users and set permissions](/features/rbac)** | Ready to share? Set up your team. | -## Using Open WebUI with Ollama -If you're using Open WebUI with Ollama, be sure to check out our [Starting with Ollama Guide](/getting-started/quick-start/starting-with-ollama) to learn how to manage your Ollama instances with Open WebUI. +--- + +## Explore What's Possible + +You're running. You're connected. Ready to go deeper? -## Join the Community +- **[Upload documents and use RAG →](/features/rag)** Ground responses in your data +- **[Enable web search →](/features/web-search)** Let models access current information _(SearXNG Recommended)_ +- **[Generate images →](/features/image-generation-and-editing)** Connect to AUTOMATIC1111, ComfyUI, or DALL-E _(ComfyUI Recommended)_ +- **[Explore all features →](/features)** Voice, tools, pipelines, and more + +--- -Need help? Have questions? Join our community: +## Need Help? -- [Open WebUI Discord](https://discord.gg/5rJgQTnV4s) -- [GitHub Issues](https://github.com/open-webui/open-webui/issues) +Stuck? It happens. We've got you. -Stay updated with the latest features, troubleshooting tips, and announcements! +- **[Troubleshooting →](/troubleshooting)** How and where to get support +- **[Discord →](https://discord.gg/5rJgQTnV4s)** Ask the community or our trusty support bot for advice +- **[GitHub Issues →](https://github.com/open-webui/open-webui/issues)** Report bugs \ No newline at end of file diff --git a/docs/intro.mdx b/docs/intro.mdx index e480508df..291cdb0a9 100644 --- a/docs/intro.mdx +++ b/docs/intro.mdx @@ -10,14 +10,13 @@ import { SponsorList } from "@site/src/components/SponsorList"; # Open WebUI +## AI that answers to you. 
-**Open WebUI is an [extensible](https://docs.openwebui.com/features/plugin/), feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline.** It supports various LLM runners like **Ollama** and **OpenAI-compatible APIs**, with **built-in inference engine** for RAG, making it a **powerful AI deployment solution**. +Open WebUI is an [extensible](https://docs.openwebui.com/features/plugin/), feature-rich, self-hosted AI platform that runs on your hardware, connects to your models, and keeps your data exactly where it belongs—with you. +No cloud dependency. No forced telemetry. No vendor lock-in. [![Open WebUI Banner](/images/banner.png)](https://openwebui.com) -Passionate about open-source AI? [Join our team →](https://careers.openwebui.com/) - - ![GitHub stars](https://img.shields.io/github/stars/open-webui/open-webui?style=social) ![GitHub forks](https://img.shields.io/github/forks/open-webui/open-webui?style=social) ![GitHub watchers](https://img.shields.io/github/watchers/open-webui/open-webui?style=social) @@ -28,6 +27,8 @@ Passionate about open-source AI? [Join our team →](https://careers.openwebui.c [![Discord](https://img.shields.io/badge/Discord-Open_WebUI-blue?logo=discord&logoColor=white)](https://discord.gg/5rJgQTnV4s) [![Image Description](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/tjbck) +Passionate about decentralizing AI? [Join our team →](https://careers.openwebui.com/) + ![Open WebUI Demo](/images/demo.png) :::tip @@ -37,214 +38,195 @@ Passionate about open-source AI? [Join our team →](https://careers.openwebui.c Get **enhanced capabilities**, including **custom theming and branding**, **Service Level Agreement (SLA) support**, **Long-Term Support (LTS) versions**, and **more!** ::: +--- - - -## Quick Start with Docker 🐳 - -:::info - -**WebSocket** support is required for Open WebUI to function correctly. 
Ensure that your network configuration allows WebSocket connections. +## What is Open WebUI? -::: +The operating system for AI—self-hosted, extensible, and entirely yours. -**If Ollama is on your computer**, use this command: +Open WebUI is where you bring your AI stack together. Connect any model—local or cloud, open or proprietary—and interact with them through one interface you control. Chat, generate images, talk with voice, search the web, ground responses in your own documents. Add more capabilities when you need them. Remove what you don't. -```bash -docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main -``` +Run it on your laptop for personal use. Deploy it for your team with roles and permissions. Scale it across your organization. The platform adapts to how *you* work, not the other way around. -**To run Open WebUI with Nvidia GPU support**, use this command: +Everything stays on your infrastructure. Your conversations, your documents, your data—none of it leaves unless you choose to connect to external services. We don't phone home. We don't know what you're building. That's the point. -```bash -docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda -``` +**Private by default. Connected by choice.** -For environments with limited storage or bandwidth, Open WebUI offers slim image variants that exclude pre-bundled models. 
These images are significantly smaller but download required models on first use: +[Explore all features →](/features) · [Get started →](/getting-started) -```bash -docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main-slim -``` +--- -### Open WebUI Bundled with Ollama +## Quick Start with Docker 🐳 -This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup: +Pick your path. All commands are copy-paste ready. -- **With GPU Support**: - Utilize GPU resources by running the following command: +:::info - ```bash - docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama - ``` +**WebSocket** support is required for Open WebUI to function correctly. Ensure that your network configuration allows WebSocket connections. -- **For CPU Only**: - If you're not using a GPU, use this command instead: +::: - ```bash - docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama - ``` +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; -Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly. + + -After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄 +**Use this if:** You already have Ollama running locally, or you're connecting exclusively through external APIs ( OpenRouter, OpenAI, Anthropic, Azure, etc.) 
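For quick copy-paste, the standard image covers this scenario; the `--add-host` entry lets the container reach an Ollama server or other API running on the host:

```bash
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```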
-### Using image tags in production + + -If you want to always run the latest version of Open WebUI, you can use the `:main`, `:cuda`, or `:ollama` image tags, depending on your setup, as shown in the examples above. -For `production environments` where stability and reproducibility are important, it is recommended to pin a specific release version instead of using these floating tags. Versioned images follow this format: +Use this if you have an Nvidia GPU and want GPU-accelerated features in Open WebUI. -``` -ghcr.io/open-webui/open-webui:- +```bash +docker run -d -p 3000:8080 --gpus all \ + --add-host=host.docker.internal:host-gateway \ + -v open-webui:/app/backend/data \ + --name open-webui \ + --restart always \ + ghcr.io/open-webui/open-webui:cuda ``` -Examples (pinned versions for illustration purposes only): -``` -ghcr.io/open-webui/open-webui:v0.6.42 -ghcr.io/open-webui/open-webui:v0.6.42-ollama -ghcr.io/open-webui/open-webui:v0.6.42-cuda -``` + + -### Using the Dev Branch 🌙 +No Nvidia GPU? This works on any machine. -:::warning +```bash +docker run -d -p 3000:8080 \ + --add-host=host.docker.internal:host-gateway \ + -v open-webui:/app/backend/data \ + --name open-webui \ + --restart always \ + ghcr.io/open-webui/open-webui:main +``` -The `:dev` branch contains the latest unstable features and changes. Use it at your own risk as it may have bugs or incomplete features. + + -::: + + -If you want to try out the latest bleeding-edge features and are okay with occasional instability, you can use the `:dev` tag like this: +**Use this if:** You want everything in one container. Open WebUI will install and manage Ollama for you. -```bash -docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:dev -``` + + -For the slim variant of the dev branch: +Use this if you have a GPU. Ollama will use it for faster model inference. 
```bash -docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:dev-slim +docker run -d -p 3000:8080 --gpus all \ + -v ollama:/root/.ollama \ + -v open-webui:/app/backend/data \ + --name open-webui \ + --restart always \ + ghcr.io/open-webui/open-webui:ollama ``` -### Updating Open WebUI + + -To update Open WebUI container easily, follow these steps: +No GPU? Ollama runs on CPU too—slower inference, but it works. -#### Manual Update -Use [Watchtower](https://github.com/nickfedor/watchtower) to update your Docker container manually: ```bash -docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nickfedor/watchtower --run-once open-webui +docker run -d -p 3000:8080 \ + -v ollama:/root/.ollama \ + -v open-webui:/app/backend/data \ + --name open-webui \ + --restart always \ + ghcr.io/open-webui/open-webui:ollama ``` -#### Automatic Updates -Keep your container updated automatically every 5 minutes: -```bash -docker run -d --name watchtower --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock nickfedor/watchtower --interval 300 open-webui -``` + + -🔧 **Note**: Replace `open-webui` with your container name if it's different. + + -## Manual Installation +Once it's running, open [localhost:3000](http://localhost:3000). That's it—you're in. -:::info - -### Platform Compatibility -Open WebUI works on macOS, Linux (x86_64 and ARM64, including Raspberry Pi and other ARM boards like NVIDIA DGX Spark), and Windows. - -::: +**Need more options?** Kubernetes, Python install, air-gapped deployment, and more: -There are two main ways to install and run Open WebUI: using the `uv` runtime manager or Python's `pip`. While both methods are effective, **we strongly recommend using `uv`** as it simplifies environment management and minimizes potential conflicts. 
+[See all installation methods →](/getting-started/quick-start) -### Installation with `uv` (Recommended) +--- -The `uv` runtime manager ensures seamless Python environment management for applications like Open WebUI. Follow these steps to get started: +## What You Can Do -#### 1. Install `uv` +### Connect Any Model -Pick the appropriate installation command for your operating system: +Ollama, OpenAI, Azure, Anthropic, local GGUF files, OpenAI-compatible APIs—bring whatever models you want. Mix local and remote models for different use cases. Compare models side by side. Switch between them mid-conversation. -- **macOS/Linux**: - ```bash - curl -LsSf https://astral.sh/uv/install.sh | sh - ``` +[Connect your first model →](/getting-started/quick-start) -- **Windows**: - ```powershell - powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" - ``` +### Ground Responses with _Your_ Context -#### 2. Run Open WebUI +Upload documents. Build knowledge bases. RAG is built in—no external vector database required (though you can bring one if you want). -Once `uv` is installed, running Open WebUI is a breeze. Use the command below, ensuring to set the `DATA_DIR` environment variable to avoid data loss. Example paths are provided for each platform: +[Deep dive into RAG →](/features/rag) -- **macOS/Linux**: - ```bash - DATA_DIR=~/.open-webui uvx --python 3.11 open-webui@latest serve - ``` +### Actually Own Your Data -- **Windows**: - ```powershell - $env:DATA_DIR="C:\open-webui\data"; uvx --python 3.11 open-webui@latest serve - ``` +Your conversations are stored locally. Your documents stay on your infrastructure. We don't phone home. We don't collect telemetry. We literally don't know what you're building. You own your data. You own your models. You own your conversations. -:::note +### Run Anywhere -**For PostgreSQL Support:** +Docker, Kubernetes, bare metal, your laptop, an air-gapped server in a basement. 
GPU acceleration when you have it, CPU-only when you don't. -The default installation now uses a slimmed-down package. If you need **PostgreSQL support**, install with all optional dependencies: +[Explore installation methods →](/getting-started/quick-start) -```bash -pip install open-webui[all] -``` +--- -::: +## Built For -### Installation with `pip` +**Privacy-conscious users** +Your conversations and confidential data aren't training data for someone else's model. -For users installing Open WebUI with Python's package manager `pip`, **it is strongly recommended to use Python runtime managers like `uv` or `conda`**. These tools help manage Python environments effectively and avoid conflicts. +**Teams that need control** +Role-based access, SSO integration, audit logs. Deploy on your infrastructure, under your compliance requirements. -Python 3.11 is the development environment. Python 3.12 seems to work but has not been thoroughly tested. Python 3.13 is entirely untested and some dependencies do not work with Python 3.13 yet—**use at your own risk**. +**Developers who build** +Python functions, tool integrations, pipelines. Extend it. We made that easy on purpose. -1. **Install Open WebUI**: +**Anyone tired of asking permission** +Run the models you want. Keep the data you generate. No vendor lock-in. No cloud dependency. No forced telemetry. - Open your terminal and run the following command: - ```bash - pip install open-webui - ``` +[Read the roadmap →](/roadmap) -2. **Start Open WebUI**: +--- - Once installed, start the server using: - ```bash - open-webui serve - ``` +## Free to Use. Built in Public. -### Updating Open WebUI +Open WebUI is **free to use**—no trial, no paywall, no "contact sales to unlock." The full platform, yours to deploy today. -To update to the latest version, simply run: +Our source code is public. Our roadmap is informed by the community. 
We believe AI infrastructure is too important to be locked behind enterprise agreements—so the core platform stays free and accessible to everyone, forever. -```bash -pip install --upgrade open-webui -``` +**119,000+** GitHub stars. Active Discord. Regular releases. What we build next is up to all of us. -This method installs all necessary dependencies and starts Open WebUI, allowing for a simple and efficient setup. After installation, you can access Open WebUI at [http://localhost:8080](http://localhost:8080). Enjoy! 😄 +[GitHub](https://github.com/open-webui/open-webui) · [Discord](https://discord.gg/5rJgQTnV4s) · [Contribute](/contributing) · [Sponsor the project](https://github.com/sponsors/tjbck) -## Other Installation Methods +--- -We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance. +## Enterprise Ready -Continue with the full [getting started guide](/getting-started). +Open WebUI powers AI deployments at startups and Fortune 500s alike. When community support isn't enough, we're here. -### Desktop App +**What you get:** +- **Architecture for scale** — Multi-node HA, 99.99% uptime targets, regional deployments +- **Security & compliance** — SOC 2, HIPAA, GDPR, FedRAMP-ready. Air-gapped and on-prem options +- **Your brand, your platform** — White-label the interface. Integrate proprietary models. Make it yours +- **Support that answers** — SLAs, dedicated contacts, long-term support versions -We also have an **experimental** desktop app, which is actively a **work in progress (WIP)**. While it offers a convenient way to run Open WebUI natively on your system without Docker or manual setup, it is **not yet stable**. 
+[Explore Enterprise →](/enterprise) -👉 For stability and production use, we strongly recommend installing via **Docker** or **Python (`uv` or `pip`)**. +--- ## Sponsors 🙌 -We are incredibly grateful for the generous support of our sponsors. Their contributions help us to maintain and improve our project, ensuring we can continue to deliver quality work to our community. Thank you! - +Open WebUI exists because of the people and organizations who back it. Their contributions help us to maintain and improve our project, ensuring we can continue to deliver quality work to our community. Thank you! ## Acknowledgements 🙏 @@ -259,8 +241,6 @@ We are deeply grateful for the generous grant support provided by: A16z Open Source AI Grant 2025 - -
![Mozilla](/sponsors/grants/mozilla.png) @@ -268,7 +248,6 @@ We are deeply grateful for the generous grant support provided by: Mozilla Builders 2024 -