In this video, I’ll show you how to set up comprehensive monitoring for your OPNsense firewall using Prometheus, Grafana, and Grafana Alloy. We’ll collect both basic system metrics and detailed firewall statistics, allowing you to build powerful Grafana dashboards to correlate data and troubleshoot issues effectively.
## Instructions

### Deploy the Monitoring Stack

Create a `compose.yaml` file with Prometheus and Grafana to store and visualize your metrics.
```yaml
services:
  grafana:
    image: docker.io/grafana/grafana-oss:12.3.1
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana

  prometheus:
    image: docker.io/prom/prometheus:v3.9.1
    restart: unless-stopped
    command:
      - --config.file=/etc/prometheus/prometheus.yaml
      - --storage.tsdb.retention.time=15d
      - --web.enable-remote-write-receiver
    ports:
      - "9090:9090"
    volumes:
      - prometheus_data:/prometheus
      - ./config/prometheus.yaml:/etc/prometheus/prometheus.yaml:ro

volumes:
  prometheus_data:
    driver: local
  grafana_data:
    driver: local
```
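The compose file mounts `./config/prometheus.yaml`, which isn’t shown here. Since Alloy delivers all metrics through the remote-write receiver, a minimal configuration is enough to get started; the sketch below only sets global intervals and is an assumption, not the exact file from the video:

```yaml
# config/prometheus.yaml - minimal sketch; Alloy pushes metrics via the
# remote-write receiver, so no scrape_configs are strictly required here.
global:
  scrape_interval: 30s
  evaluation_interval: 30s
```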
### Scrape System Metrics (Node Exporter)

- Install the `os-node_exporter` plugin in `System → Firmware → Plugins`.
- Enable it under `Services → Node Exporter`.
- Verify the endpoint is active (replace with your OPNsense IP/hostname):

```bash
curl -s http://your-firewall-ip-address:9100/metrics | head
```
### Configure Grafana Alloy

Add Grafana Alloy to your `compose.yaml` to scrape metrics and forward them to Prometheus.
```yaml
services:
  # ... your other services

  alloy:
    image: docker.io/grafana/alloy:v1.12.2
    restart: unless-stopped
    ports:
      - "12345:12345"
    volumes:
      - alloy_data:/alloy/data
      - ./config_alloy/:/etc/alloy.d

volumes:
  # ... your other volumes

  alloy_data:
    driver: local
```
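The stock Alloy image loads `/etc/alloy/config.alloy` by default, while this setup mounts the configuration directory at `/etc/alloy.d`. If your container doesn’t pick the files up on its own, you can point it there explicitly. This is a sketch under that assumption; the `command:` override is not part of the original setup and the flags mirror the image’s defaults:

```yaml
  alloy:
    # ... image, ports, and volumes as above
    # Assumption: load every *.alloy file from the mounted directory,
    # keep the UI on port 12345, and persist state in the alloy_data volume.
    command:
      - run
      - --server.http.listen-addr=0.0.0.0:12345
      - --storage.path=/alloy/data
      - /etc/alloy.d
```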
Create a `config_alloy/` directory next to your `compose.yaml` file.
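For example, from the directory that holds `compose.yaml`:

```bash
# Create the directory that will hold the Alloy configuration files.
mkdir -p config_alloy
```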
#### config_alloy/targets.alloy

This configures the destination for your metrics.

```river
prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}
```
#### config_alloy/opnsense_node.alloy
This scrapes the Node Exporter on your OPNsense firewall. Make sure to use a consistent instance label for your dashboards.
```river
prometheus.scrape "opnsense_node" {
  targets = [{
    __address__ = "your-firewall-ip-address:9100",
    instance    = "opnsense-main",
    job         = "node_exporter",
    group       = "Firewall",
    host        = "opnsense",
  }]

  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
  scrape_timeout  = "10s"
}
```
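Once Alloy is running, a quick way to confirm the pipeline end to end is to query Prometheus for the scrape health of this target. The label values below match the `targets` block above, and `localhost:9090` assumes you are on the host running the compose stack:

```bash
# Should return a single series with value 1 when the scrape is healthy.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=up{job="node_exporter", instance="opnsense-main"}'
```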
### Scrape Firewall Metrics (OPNsense API Exporter)
- Create API User: In OPNsense, go to `System → Access → Users` and create a user (e.g., `prometheus-exporter`) with a `/sbin/nologin` shell. Generate an API Key and Secret for this user (see the verification sketch after this list).
- Assign Permissions: Assign the necessary permissions via `System → Access → Users → Edit user → Effective Privileges`.
- Enable Unbound Stats (Recommended): Go to `Services → Unbound DNS → Advanced` and check `Enable Extended Statistics`.
- Enable Gateway Monitoring: Go to `System → Gateways → Single`, edit your WAN gateways, and enable the `Enable Gateway Monitoring` option, setting a monitor IP (e.g., `8.8.8.8`).
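Before wiring up the exporter, it can help to confirm the key and secret work at all. A minimal sketch, assuming your firewall uses a self-signed certificate (hence `-k`) and that the firmware status endpoint is reachable:

```bash
# Replace the placeholders with the API key/secret generated above.
curl -sk -u 'YOUR_API_KEY:YOUR_API_SECRET' \
  https://your-firewall-ip-address/api/core/firmware/status
```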
#### Run the Exporter
Add the OPNsense exporter to your compose.yaml. Make sure to provide your API credentials.
```yaml
services:
  # ... your other services

  opnsense-exporter:
    image: ghcr.io/athennamind/opnsense-exporter:latest
    container_name: opnsense-exporter
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - OPNSENSE_EXPORTER_OPS_API_KEY=${OPNSENSE_API_KEY}
      - OPNSENSE_EXPORTER_OPS_API_SECRET=${OPNSENSE_API_SECRET}
    command:
      - --opnsense.protocol=https
      - --opnsense.address=your-firewall-ip-address
      - --opnsense.insecure
      - --exporter.instance-label=opnsense-main
```
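Docker Compose reads variables like `${OPNSENSE_API_KEY}` from an `.env` file placed next to `compose.yaml`, so the credentials can stay out of the compose file itself; a minimal sketch with placeholder values:

```bash
# .env - placeholder values; use the key/secret generated for the API user.
OPNSENSE_API_KEY=your-api-key
OPNSENSE_API_SECRET=your-api-secret
```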
#### Scrape the Exporter with Alloy

Create a new file `config_alloy/opnsense_api.alloy`.
```river
prometheus.scrape "opnsense_exporter" {
  targets = [{
    __address__ = "opnsense-exporter:8080",
    instance    = "opnsense-main",
    job         = "opnsense_exporter",
    group       = "Firewall",
    host        = "opnsense",
  }]

  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
  scrape_timeout  = "10s"
}
```
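You can also check the exporter’s endpoint directly from the host before pointing Alloy at it; the `opnsense_` metric prefix is an assumption based on the exporter’s default naming:

```bash
# Expect OPNsense-specific series (prefixed opnsense_) alongside Go runtime metrics.
curl -s http://localhost:8080/metrics | grep '^opnsense_' | head
```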
### Import Grafana Dashboards

Once the metrics are flowing into Prometheus, you can import community dashboards into Grafana. Make sure to adjust the dashboard variables (like `job` or `instance`) to match the labels you defined in your Alloy scrape configurations.
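Before importing, Grafana needs a Prometheus data source pointing at `http://prometheus:9090` (the service name from the compose file). You can add it in the Grafana UI, or provision it from a file; a sketch, assuming you mount a provisioning directory into the Grafana container at `/etc/grafana/provisioning` (the host path below is hypothetical):

```yaml
# e.g. ./config/grafana/provisioning/datasources/prometheus.yaml (hypothetical path),
# mounted into the container under /etc/grafana/provisioning/datasources/
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```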

