Script to Automate Deploying Open WebUI on macOS
Large Language Models (LLMs) are rapidly changing how we interact with information, offering impressive capabilities from brainstorming to summarization. But behind this convenience lies a growing privacy concern, one that echoes the issues we’ve long faced with search engines.
Just like when you type a query into Google, when you interact with an LLM through an API (like those offered by Deepseek, OpenAI, Google, or Anthropic), your questions, the data you provide, and the resulting responses are sent to a server you don’t control. This data is then processed on their infrastructure, potentially logged, analyzed, and used for purposes beyond simply fulfilling your request.
The Parallels to Search Engines:
With search engines, our search history is tracked to personalize ads and to build detailed profiles about us. With API-driven LLMs, the same principle applies. Our prompts become data points and are used by AI providers to build detailed personal profiles about us. While many providers offer data usage policies, understanding exactly how your data is handled can be opaque. This is particularly concerning for confidential business data, personal information, or for industries with strict data privacy regulations.
Taking Back Control: Running AI Locally
Fortunately, there’s a growing movement towards running AI models locally on your own computer. This eliminates the need to send your data to a third-party server, giving you complete control over your information.
Thanks to advances in quantization, which shrinks LLMs to more manageable sizes, running powerful models on your personal hardware (provided you have enough RAM and a capable enough graphics processor) is becoming increasingly accessible. We no longer need to rely on APIs to use decent LLMs in our workflow.
Ollama simplifies the process of downloading, running, and managing LLMs locally. It's probably the easiest way to manage local models and is the tool I use myself. It handles the complex configuration, supports a growing number of popular open-source LLMs like Llama 2, Deepseek, Gemma, and Mistral, and works on macOS, Linux, and Windows.
With Ollama, you can download a model with a single command and run it directly from your terminal. You can install it with Homebrew, as shown below.
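As a rough sketch, installing Ollama and chatting with your first model looks something like this (the model name llama3.2 is just an illustration; pick whichever model fits your hardware):

# Install Ollama via Homebrew
brew install ollama

# Run the Ollama server in the background as a Homebrew service
brew services start ollama

# Download a model and chat with it directly in the terminal
# (llama3.2 is an example; substitute any model Ollama supports)
ollama run llama3.2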
Level Up Your Local LLM Experience with Open WebUI
Running an LLM locally is powerful, but the command-line interface isn't the most user-friendly, and its output isn't the most readable (monospace text with no formatting). That's where Open WebUI comes in.
Open WebUI is a browser-based front end designed to connect with local LLMs like those managed by Ollama. It provides a clean, intuitive chat interface, manages conversation history, and lets you customize settings such as temperature and repetition penalty to fine-tune the model's output. It's especially useful for coding prompts, as it has built-in syntax highlighting.
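Under the hood, front ends like Open WebUI talk to Ollama's local HTTP API, which listens on port 11434 by default. As a small illustration (the model name is again just an example), you can query that same API yourself:

# Ask a locally running model a question via Ollama's HTTP API
# (llama3.2 is an example; use any model you have already pulled)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Explain quantization in one sentence.",
  "stream": false
}'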
My Installation Script
Below is a bash script for macOS that installs the packages Open WebUI needs to run and then sets up a Docker container for it. Once the script has finished, Open WebUI will be accessible in your browser at http://localhost:3000. Enjoy!
#!/bin/bash
set -e
# Check for Homebrew
if ! command -v brew &>/dev/null; then
  echo "Homebrew is not installed."
  echo "Please install Homebrew from https://brew.sh and re-run this script."
  exit 1
fi
# Check if Docker is installed
if ! command -v docker &>/dev/null; then
  echo "Docker is not installed."
  echo "Installing Docker Desktop using Homebrew Cask..."
  brew install --cask docker
  echo "Please launch Docker Desktop and wait until Docker is running, then re-run this script."
  exit 1
fi
# Launch Docker Desktop if it's not running, then wait for the daemon to be ready
if ! docker info &>/dev/null; then
  echo "Docker Desktop is not running. Launching Docker Desktop..."
  open -a Docker
  echo "Waiting for Docker to start..."
  until docker info &>/dev/null; do
    sleep 2
  done
fi
# Remove any existing open-webui container if present
if docker ps -a --format "{{.Names}}" | grep -q "^open-webui$"; then
  echo "Existing 'open-webui' container found. Removing..."
  docker rm -f open-webui
fi
# Pull the latest open-webui docker image
echo "Pulling the Open WebUI docker image..."
docker pull ghcr.io/open-webui/open-webui:main
# Run the open-webui container
echo "Deploying Open WebUI..."
docker run --detach \
  --publish 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  --volume open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
echo "Open WebUI has been deployed!"
echo "Access it at: http://localhost:3000"