In this tutorial, I’ll walk you through a no-drama Uptime Kuma v2 upgrade checklist for Docker and Docker Compose: back up first, upgrade safely, wait for the migration, and verify everything (monitors + notifications).
Prerequisites#
- A working Uptime Kuma v1 install running in Docker Compose
References#
Before touching anything, skim the official v1 → v2 migration guide.
Key takeaways:
- Stop Uptime Kuma before upgrading.
- Back up `/app/data` (seriously: do not skip this).
- The first start of v2 triggers a one-time migration.
- Migration time depends on how much history you have. Do not interrupt it.
Important Docker tag gotcha (`:latest` is still v1):
- `louislam/uptime-kuma:latest` stays on v1.
- To upgrade to v2, you must explicitly use `:2`.
Also:
- Do not use `:2-rootless` for the migration. Rootless images are not recommended for upgrading from v1.
Instructions#
If you are doing a fresh install (new setup)#
If you are not upgrading and you are starting from scratch:
- Start directly on v2.
- Decide whether to stay on SQLite (simple) or use MariaDB (better for larger installs).
If you use my Boilerplates templates:
boilerplates compose list
boilerplates compose show uptimekuma
boilerplates compose generate uptimekuma -o /tmp/uptimekuma
Upgrade an existing Docker Compose install (v1 → v2)#
Below is a minimal example of a v1 compose service (this is what many setups look like):
---
services:
uptimekuma:
image: docker.io/louislam/uptime-kuma:1
environment:
- TZ=Europe/Berlin
ports:
- 3001:3001
volumes:
- uptimekuma-data:/app/data
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3001"]
interval: 30s
retries: 3
start_period: 10s
timeout: 5s
restart: unless-stopped
volumes:
uptimekuma-data:
driver: local
Stop the stack:
docker compose down
Create a backup tarball of /app/data:
docker run --rm \
--volume uptimekuma_uptimekuma-data:/app/data \
--volume $(pwd):/backup \
busybox tar czf /backup/backup-v1.tar.gz -C /app/data .
Quick sanity check by extracting it:
mkdir -p backup-v1
tar xfz backup-v1.tar.gz -C backup-v1
ls -la backup-v1
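Beyond eyeballing the extracted files, you can check that the archive itself is readable end-to-end. A small sketch (the `backup-v1.tar.gz` name matches the backup command above; `check_archive` is a hypothetical helper, not part of any tool):

```shell
# A tarball that lists cleanly end-to-end is very likely intact:
# tar -tzf decompresses and walks the entire archive, so any
# truncation or corruption makes it fail.
check_archive() {
  if tar -tzf "$1" > /dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "CORRUPT: $1"
  fi
}

check_archive backup-v1.tar.gz
```

If this prints `CORRUPT`, redo the backup before going any further.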
Switch the image tag to v2
Change:
`louislam/uptime-kuma:1` → `louislam/uptime-kuma:2`
Then start it again:
docker compose up -d
Watch logs and let the migration finish:
docker compose logs -f uptimekuma
Warning: Do not stop the container while it is migrating.
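If you would rather script the wait than stare at logs, a minimal poll loop works too. A sketch assuming the 3001 port mapping from the compose file above (`wait_for_kuma` is a hypothetical helper):

```shell
# Poll the Uptime Kuma UI until it answers, i.e. until the
# migration has finished and the web server is back up.
wait_for_kuma() {
  url="${1:-http://localhost:3001}"
  tries="${2:-120}"   # 120 * 5s = up to 10 minutes
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url" 2>/dev/null; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "timed out"
  return 1
}
```

A timeout here does not necessarily mean failure; on large instances, keep waiting and keep watching the logs instead of restarting the container.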
(Optional) Move from SQLite to MariaDB#
If your instance is growing, MariaDB can be worth it. A practical approach is:
- Upgrade to v2 first (let the migration finish).
- Take another backup.
- Then switch your deployment to a MariaDB-backed setup.
Take a post-migration backup
docker compose down
docker volume ls | grep uptime
docker run --rm \
--volume uptimekuma_uptimekuma-data:/app/data \
--volume $(pwd):/backup \
busybox tar czf /backup/backup-v2.tar.gz -C /app/data .
mkdir -p backup-v2
tar xfz backup-v2.tar.gz -C backup-v2
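With both backups on disk, you can also diff their file lists to see what the migration changed under `/app/data`. A sketch (`diff_backups` is a hypothetical helper; the tarball names match the backup commands above):

```shell
# Compare the file lists of two tarballs; the v2 migration
# typically adds or renames files under /app/data, and this
# shows exactly which ones.
diff_backups() {
  a=$(mktemp); b=$(mktemp)
  tar -tzf "$1" | sort > "$a"
  tar -tzf "$2" | sort > "$b"
  diff "$a" "$b" || true   # diff exits non-zero when lists differ
  rm -f "$a" "$b"
}

diff_backups backup-v1.tar.gz backup-v2.tar.gz
```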
Generate a new compose stack with MariaDB
If you use Boilerplates:
mv compose.yaml old.compose.yaml
boilerplates compose generate uptimekuma -o /tmp/uptimekuma
Expose MariaDB temporarily if you need to import data, by adding a port mapping to the MariaDB service:
ports:
  - 3306:3306
Remove the mapping again once the import is done.
Start only the database first:
docker compose up -d uptimekuma_db
Import the SQLite database into MariaDB
One option is sqlite3-to-mysql (a Python package that provides the sqlite3mysql CLI):
Example import:
sqlite3mysql \
--sqlite-file ./backup-v2/kuma.db \
--mysql-user uptimekuma \
--mysql-password YOUR_PASSWORD \
--mysql-database uptimekuma \
--mysql-host localhost \
--mysql-port 3306
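Before (or after) the import, it is worth sanity-checking row counts against the SQLite backup. A sketch using the sqlite3 CLI (the `kuma.db` filename and the `monitor` table are assumptions based on Uptime Kuma's SQLite schema; `count_rows` is a hypothetical helper):

```shell
# Count rows in a table of the extracted SQLite backup, so the
# result can be compared against MariaDB after the import.
count_rows() {
  sqlite3 "$1" "SELECT COUNT(*) FROM $2;"
}

# Example (guarded so it is a no-op if the backup is not there):
[ -f ./backup-v2/kuma.db ] && count_rows ./backup-v2/kuma.db monitor || true
```

Run the matching SELECT COUNT(*) with the mysql client against MariaDB afterwards; the numbers should line up.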
Then start everything:
# Optional if you are rebuilding from scratch:
# docker volume rm uptimekuma_uptimekuma-data
docker compose up -d
Verification#
After the upgrade, validate the important stuff:
- You can log in and the UI loads
- Existing monitors are present
- History and uptime stats are visible
- Notifications still fire
A simple test is to:
- Add a temporary monitor (or stop a test container)
- Confirm the alert arrives in your notification provider
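The UI check can also be scripted as a quick smoke test. A sketch (the URL assumes the 3001 port mapping from the compose file; `check_ui` is a hypothetical helper):

```shell
# Post-upgrade smoke check: the UI should come back with HTTP 200
# once the migration is done (-L follows any redirects first).
check_ui() {
  code=$(curl -sL -o /dev/null -w '%{http_code}' "${1:-http://localhost:3001}")
  if [ "$code" = "200" ]; then
    echo "UI OK ($code)"
  else
    echo "UI FAILED ($code)"
  fi
}

check_ui
```

A `000` status means the connection itself failed, i.e. the container is not listening yet (or the port mapping changed).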
Troubleshooting / common gotchas#
- Pulled `:latest` and nothing changed: that is expected. Use `:2` for v2.
- Migration seems stuck: large instances can take a long time. Keep logs open and be patient.
- Interrupted migration: restore the `/app/data` backup and start over.
- Rootless image issues: avoid `:2-rootless` during migration.

