StopLiga reads the public block status from hayahora.futbol and keeps one managed route or rule called StopLiga in sync.
It uses Hayahora's canonical JSON feed and derives destinations from the active entries in that structured payload.
Supported routers:
- unifi
- omada
- opnsense
If you can run Docker, you can usually run StopLiga.
- Copy `.env.example` to `.env`
- In `.env`, choose your router and fill only that section
- Start it
```
cp .env.example .env
docker compose pull
docker compose up -d
docker compose logs -f
```

Most users only need `.env`.
config.toml is optional and only useful if you want to keep non-secret settings out of .env.
On UniFi, StopLiga:

- creates the managed policy route if it does not exist
- updates it when the IP list changes
- enables it when blocks are active
- disables it when blocks end

On Omada, StopLiga:

- creates or updates managed IP groups
- creates or updates the managed policy route
- enables or disables it from the live status

On OPNsense, StopLiga:

- updates a managed alias with the published IP list
- enables or disables an existing firewall rule with the description `StopLiga`
You need:
- a reachable UniFi gateway or controller
- a local UniFi API key
- at least one VPN Client network already created in UniFi
Minimal .env:
```
STOPLIGA_BACKEND=unifi
STOPLIGA_CONTROLLER_HOST=10.0.1.1
STOPLIGA_SITE=default
STOPLIGA_CONTROLLER_VERIFY_TLS=false
UNIFI_API_KEY=replace-me
STOPLIGA_RUN_MODE=loop
STOPLIGA_SYNC_INTERVAL_SECONDS=300
STOPLIGA_ROUTE_NAME=StopLiga
```

For UniFi Network 10.3.x upgrade notes and a post-upgrade smoke test, see docs/unifi-network-10.3-validation.md.
For most UniFi setups, the minimal .env above is enough. STOPLIGA_ROUTE_NAME is usually the only route-specific setting you need.
Advanced UniFi bootstrap overrides (most users can ignore these):
- Leave both `STOPLIGA_VPN_NAME` and `STOPLIGA_TARGETS` unset to auto-pick the first VPN Client network and target all clients.
- Set only `STOPLIGA_VPN_NAME` to pick the VPN Client network explicitly and still target all clients.
- Set `STOPLIGA_VPN_NAME` and `STOPLIGA_TARGETS` together to limit the route to specific clients. `STOPLIGA_TARGETS` accepts client hostnames, display names or MAC addresses. It does not accept network names.
You need:
- an Omada Controller with Open API enabled
- Client ID, Client Secret and Omada ID
- a WAN or VPN target already created in Omada
Minimal .env:
```
STOPLIGA_BACKEND=omada
STOPLIGA_CONTROLLER_HOST=omada-controller.example
STOPLIGA_CONTROLLER_PORT=8043
STOPLIGA_SITE=Default
STOPLIGA_CONTROLLER_VERIFY_TLS=true
OMADA_CLIENT_ID=replace-me
OMADA_CLIENT_SECRET=replace-me
OMADA_CONTROLLER_ID=replace-me
OMADA_TARGET_TYPE=vpn
OMADA_TARGET=WG Main
STOPLIGA_RUN_MODE=loop
STOPLIGA_SYNC_INTERVAL_SECONDS=300
STOPLIGA_ROUTE_NAME=StopLiga
```

You need:
- a reachable OPNsense firewall
- an API key and API secret
- one firewall rule created once in Firewall > Rules [new] or Firewall > Automation > Filter
- that rule must use the exact description `StopLiga`
Important notes:
- StopLiga uses the OPNsense filter API to find and toggle that rule
- a rule created only in the legacy Firewall > Rules view may be visible in the UI but not discoverable through the API StopLiga uses
Minimal .env:
```
STOPLIGA_BACKEND=opnsense
OPNSENSE_HOST=fw.example.local
OPNSENSE_API_KEY=replace-me
OPNSENSE_API_SECRET=replace-me
OPNSENSE_VERIFY_TLS=true
STOPLIGA_RUN_MODE=loop
STOPLIGA_SYNC_INTERVAL_SECONDS=300
STOPLIGA_ROUTE_NAME=StopLiga
```

- `.env`: easiest option and recommended for most people
- `config/config.toml`: optional starter config file if you prefer
- `./data`: where Docker stores state and health information
You can skip this section if .env is enough for you.
If you want to use a config file:
```
mkdir -p config
cp config.example.toml config/config.toml
```

Then keep secrets in `.env` and non-secret settings in `config/config.toml`.
Environment variables still override config.toml.
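That precedence can be pictured as a simple three-step lookup: environment variable first, then the config file, then a built-in default. This is an illustrative sketch, not StopLiga's actual loader, and `resolve_setting` is a made-up name:

```python
import os


def resolve_setting(name, config, default=None):
    """Illustrative precedence: environment wins, then config.toml, then default."""
    env_value = os.environ.get(name)
    if env_value is not None:
        return env_value
    return config.get(name, default)
```

So a `STOPLIGA_ROUTE_NAME` set in `.env` always beats the same key in `config/config.toml`.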
The repo already includes docker-compose.yml, so the normal workflow is:
```
docker compose up -d
docker compose logs -f
docker compose pull && docker compose up -d
docker compose down
```

What it does:
- mounts `./data` to store runtime state
- optionally mounts `./config`
- automatically uses `config/config.toml` if that file exists
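For reference, a compose service along these lines matches that behavior. This is a sketch based on the mounts described above; the repo's own docker-compose.yml is authoritative:

```yaml
services:
  stopliga:
    image: ghcr.io/jcastro/stopliga:0.1.29
    restart: unless-stopped
    env_file: .env
    volumes:
      - ./data:/data          # runtime state
      - ./config:/config:ro   # optional config.toml
```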
If you prefer docker run:
```
docker run -d \
  --name stopliga \
  --restart unless-stopped \
  --env-file .env \
  -v "$(pwd)/data:/data" \
  -v "$(pwd)/config:/config:ro" \
  ghcr.io/jcastro/stopliga:0.1.29
```

The `/config` mount is optional.
StopLiga can also notify through:
- Gotify
- ntfy
- Telegram
When notifications are configured and StopLiga runs in loop mode, it sends a startup test message once when the service begins so you can verify delivery without waiting for the next route change.
Most users can ignore notifications until the main sync is working.
Minimal ntfy .env:
```
STOPLIGA_NTFY_URL=https://ntfy.sh
STOPLIGA_NTFY_TOPIC=replace-me-topic
# STOPLIGA_NTFY_TOKEN=replace-me
STOPLIGA_NTFY_PRIORITY=3
```

StopLiga refuses to apply an unexpectedly huge feed. The default ceiling is 16384 destinations. If the public list grows again before you update the container image, set STOPLIGA_MAX_DESTINATIONS=16384 or a higher value in .env.
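The ceiling behaves like the simple guard below. This is a sketch, not StopLiga's actual code, and the function name is invented for illustration:

```python
DEFAULT_MAX_DESTINATIONS = 16384  # matches the documented default ceiling


def enforce_destination_ceiling(destinations, max_destinations=DEFAULT_MAX_DESTINATIONS):
    """Refuse to apply a feed larger than the configured ceiling."""
    if len(destinations) > max_destinations:
        raise ValueError(
            f"feed has {len(destinations)} destinations, above the ceiling of "
            f"{max_destinations}; raise STOPLIGA_MAX_DESTINATIONS if this is expected"
        )
    return destinations
```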
For Omada, StopLiga splits destinations across managed IP Groups. The default OMADA_GROUP_SIZE=32 lines up with the global feed ceiling and the conservative 512-group safety guard.
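The arithmetic works out exactly: 16384 destinations in groups of 32 is 512 groups. The chunking can be sketched like this (illustrative only, not the real implementation):

```python
def split_into_groups(destinations, group_size=32):
    """Split the destination list into consecutive groups of at most group_size."""
    return [
        destinations[i:i + group_size]
        for i in range(0, len(destinations), group_size)
    ]
```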
By default StopLiga reads active destinations from Hayahora's structured status feed, limited to the last 24 hours.
Set your ISP to keep only active entries for that provider:
```
STOPLIGA_HAYAHORA_ISP=DIGI
```

If STOPLIGA_HAYAHORA_ISP is unset, StopLiga includes active entries for all ISPs in the Hayahora payload.
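The selection can be pictured as the filter below. The field names (`isp`, `last_seen`, `destinations`) are invented for illustration; the real Hayahora payload may be shaped differently:

```python
from datetime import datetime, timedelta, timezone


def active_destinations(entries, isp=None, lookback_hours=24):
    """Keep destinations from entries seen within the lookback window,
    optionally restricted to one ISP (hypothetical feed schema)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=lookback_hours)
    selected = set()
    for entry in entries:
        if isp is not None and entry["isp"] != isp:
            continue  # a configured ISP excludes everyone else's entries
        if datetime.fromisoformat(entry["last_seen"]) < cutoff:
            continue  # entry fell outside the lookback window
        selected.update(entry["destinations"])
    return sorted(selected)
```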
The lookback window defaults to 24 hours. Most users should leave it as-is, but it can be adjusted:
```
STOPLIGA_HAYAHORA_LOOKBACK_HOURS=24
```

New setups should use:
- STOPLIGA_BACKEND
- STOPLIGA_CONTROLLER_HOST
- STOPLIGA_CONTROLLER_PORT
- STOPLIGA_SITE
- STOPLIGA_CONTROLLER_VERIFY_TLS
Older variable names still work for compatibility.
The repo includes three starter files:
- `.env.example`: simple `.env` example for common Docker setups
- `config.example.toml`: simple optional config file
- `docker-compose.yml`: compose file for the normal Docker setup
With STOPLIGA_SYNC_INTERVAL_SECONDS=300, each loop does this:
- resolves the current block status
- downloads the current IP/CIDR list
- compares the desired feed state against the selected backend
- enables or disables the managed route or rule
- updates the managed destinations if needed
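Those steps can be sketched as one reconcile function driven by a timer. The backend interface shown here is invented for illustration; the real drivers differ per router:

```python
import time


def sync_once(fetch_blocks_active, fetch_destinations, backend):
    """One loop iteration: read the feed, then reconcile the managed route/rule."""
    blocks_active = fetch_blocks_active()   # resolve the current block status
    destinations = fetch_destinations()     # download the current IP/CIDR list
    if backend.current_destinations() != destinations:
        backend.update_destinations(destinations)  # only touch the router on change
    backend.set_enabled(blocks_active)      # enable or disable the managed route


def run_loop(interval_seconds=300, **sync_kwargs):
    """Repeat forever, sleeping the sync interval between runs."""
    while True:
        sync_once(**sync_kwargs)
        time.sleep(interval_seconds)
```

In this sketch, `run_loop(interval_seconds=300, ...)` corresponds to STOPLIGA_SYNC_INTERVAL_SECONDS=300.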
Recommended local workflow:
```
python3 -m venv .venv
. .venv/bin/activate
python -m pip install -e ".[dev]"
python -m ruff check src tests
python -m ruff format --check src tests
python -m mypy src
python -m pytest
python -m pip_audit
python -m compileall run_stopliga.py src tests
```

If the environment is already prepared:
```
.venv/bin/python -m ruff check src tests
.venv/bin/python -m ruff format --check src tests
.venv/bin/python -m mypy src
.venv/bin/python -m pytest
.venv/bin/python -m pip_audit
.venv/bin/python -m compileall run_stopliga.py src tests
```

For branch-level validation of the in-progress router backends (FRITZ!Box, Keenetic, MikroTik) on real hardware, see docs/router-real-device-test-matrix.md.
- Live block status JSON: hayahora.futbol/estado/data.json
The sync loop, feed loading, state handling and notifications are shared across all backends.
Current design intent:
- common runtime settings stay in `[app]`, `[feeds]` and shared env vars
- controller-backed routers reuse `[controller]`
- backend-specific credentials and behavior stay grouped in their own sections
That keeps Docker setup simple today while making it easier to introduce more router drivers later.
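Put together, a config.toml following that layout might look like the skeleton below. The section names come from the design notes above, but every key shown is a placeholder for illustration, not a documented option name:

```toml
[app]                     # shared runtime settings (keys are illustrative)
run_mode = "loop"
sync_interval_seconds = 300

[feeds]                   # Hayahora feed settings (keys are illustrative)
lookback_hours = 24

[controller]              # reused by controller-backed routers (UniFi, Omada)
host = "10.0.1.1"
site = "default"
verify_tls = false
```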