Ollama: run an LLM locally


There are three popular ways to run an LLM locally, and I have tried all of them. This post covers one of them: installing Ollama on my Steam Deck running Garuda Linux (an Arch-based distro).

You can go to Download Ollama for the official way, or on Arch you can install it with Octopi. If you are using an AMD GPU, like the one in the Steam Deck, you should also install 'ollama-rocm' for better performance. Once installed, follow this instruction for Linux.
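
For reference, here is a rough sketch of the same install from the command line, assuming the 'ollama' and 'ollama-rocm' packages from the Arch repositories (Octopi does the same thing graphically):

# install Ollama plus the ROCm backend for AMD GPUs
$ sudo pacman -S ollama ollama-rocm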

I prefer to run Ollama in a screen session, like so:

$ screen
$ ollama serve
# detach from the screen session with Ctrl-a d (typing 'exit' would kill the server)
$ ollama -v
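
If you want to check on the server's output later, plain GNU screen commands let you find and reattach to the detached session:

# list detached screen sessions
$ screen -ls
# reattach to the session running ollama serve
$ screen -r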

Instead of screen, it is recommended to run Ollama as a daemon via a systemd service file. Create /etc/systemd/system/ollama.service (using vim or any editor) with the following contents:

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"

[Install]
WantedBy=multi-user.target
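
Note that the unit runs as a dedicated 'ollama' user and group. The Arch package may already create them for you; if not, the official Linux instructions create them with something along these lines (a sketch, adjust to your distro):

# create a system user and group for the service (skip if your package already did this)
$ sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama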

Then reload systemd and start the service as root (or with sudo) using systemctl:

$ sudo systemctl daemon-reload
$ sudo systemctl enable ollama
$ sudo systemctl start ollama
$ systemctl status ollama
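
If the status output shows the service failed, the systemd journal is the quickest way to see why; this is generic systemd usage, not Ollama-specific:

# follow the Ollama service logs
$ journalctl -u ollama -f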

You should see 'Ollama is running' when you go to this URL in a browser.

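By default Ollama listens on localhost port 11434, so you can also check it from the terminal (adjust the address if you have set OLLAMA_HOST to something else):

# the root endpoint replies with a plain-text status message
$ curl http://localhost:11434
Ollama is running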