This is great – I can’t recall how I did part of this… duh. You have to install:
- Docker
- Ollama
- Open WebUI
- …and download a model (of course)
Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
I think you can install Docker first – with these two programs I’m not sure the order matters.
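You can check that Ollama installed correctly with:
ollama --version
On Linux the installer also sets up a systemd service, so (if I remember right) this should show it running:
systemctl status ollama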
Downloading a Model and Testing in a Terminal
I originally downloaded (and ran) the deepseek-r1 model by simply typing the following in a terminal:
ollama run deepseek-r1
I could then run DeepSeek from a command line.
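To see which models you’ve downloaded so far:
ollama list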
The Problems
To see results in a more polished format I then tried all sorts of things to get Docker installed and working with Open WebUI. The big problem is that I cannot recall exactly how I installed either Open WebUI or Docker.
I think this was how I installed Docker (Mint 20.04):
sudo apt install docker.io
Then add your user to the docker group (guy is my username – use your own):
sudo usermod -a -G docker guy
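One catch I’m fairly sure of: the new group membership doesn’t take effect until you log out and back in, though you can supposedly pick it up in the current shell with:
newgrp docker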
Restart the docker service:
sudo systemctl restart docker
You can see if the service is running with:
sudo systemctl status docker
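A quick way to confirm Docker actually works is the standard test image:
docker run hello-world
If that prints its greeting, Docker can pull and run containers.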
Installing Open WebUI
Okay – this gets weird. I vaguely remember downloading and unarchiving something, but with Docker you shouldn’t need to – the docker run commands below pull the image for you. The following is taken from the web and I don’t recall which one I ran:
- If Ollama is on your computer, use this command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- If Ollama is on a different server, change OLLAMA_BASE_URL to the server’s URL:
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- To run Open WebUI with Nvidia GPU support, use this command:
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
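Whichever one you run, you can check that the container is up with:
docker ps
and peek at its logs (handy when something goes wrong) with:
docker logs open-webui
(the name open-webui comes from the --name flag above).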
Connecting Things
If all goes well you should be able to open a browser and go to http://localhost:3000, where you’ll be asked to create a username and password.
I did that, but there were no models listed in the drop-down box at the top of the page. Eventually I pasted “deepseek-r1” into the form; it recognized the name and asked if I wanted it installed. I said yes, and it downloaded.
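An aside: since Open WebUI is just talking to the same Ollama instance, I believe you can also grab models from a terminal and they’ll appear in the drop-down:
ollama pull deepseek-r1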
Connection Problem
It took a while to download, and then a red box popped up in the top right-hand corner saying there was some sort of connection problem.
This, from the web, fixed the thing:
I am facing the same problem. By default, Ollama only listens on the local IP address 127.0.0.1, which means that the network inside the Docker container cannot reach the host’s port 11434.
To resolve this issue, you can follow these steps:
- Open the Ollama unit file located at /etc/systemd/system/ollama.service.
- In the [Service] section of the file, add the following line:
Environment="OLLAMA_HOST=0.0.0.0"
- Reload systemd:
sudo systemctl daemon-reload
- Restart the service:
sudo systemctl restart ollama.service
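After the restart you can verify Ollama is now listening on all interfaces (not just 127.0.0.1) with something like:
sudo ss -tlnp | grep 11434
or just poke it from the host:
curl http://localhost:11434
which should answer “Ollama is running”.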