[Bug] ACL with autogroup:self is only pushed after restart #1113

Closed
opened 2025-12-29 02:28:20 +01:00 by adam · 0 comments

Originally created by @nblock on GitHub (Oct 18, 2025).

Is this a support request?

  • [x] This is not a support request

Is there an existing issue for this?

  • [x] I have searched the existing issues

Current Behavior

A policy with autogroup:self is not immediately pushed to nodes. Several restarts of Headscale are required to make nodes aware of each other and push a working policy.

Policy:

{
  "acls": [
    {
      "action": "accept",
      "src": [
        "autogroup:member"
      ],
      "dst": [
        "autogroup:self:*"
      ]
    }
  ]
}
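
If the policy is managed in Headscale's database policy mode, it can be applied and checked with the CLI roughly as follows (a sketch; exact flags may differ between versions, and with a file-based policy the configured policy file is edited and Headscale reloaded instead):

# apply the policy shown above (database policy mode assumed)
headscale policy set --file ./policy.json

# print the currently active policy to confirm the ACL was accepted
headscale policy get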

The tailscale status after joining two nodes for the same user (n1, n2):

root@n1:~# tailscale status
100.64.0.1      n1                   alice        linux   -
100.64.0.2      n2                   alice        linux   -
root@n2:~# tailscale status
100.64.0.2      n2                   alice        linux   -

Neither node can ping the other (even though n1 has n2 listed in tailscale status):

root@n1:~# ping -W 5 -c 4 100.64.0.2
PING 100.64.0.2 (100.64.0.2) 56(84) bytes of data.

--- 100.64.0.2 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3053ms
root@n2:~# ping -W 5 -c 4 100.64.0.1
PING 100.64.0.1 (100.64.0.1) 56(84) bytes of data.

--- 100.64.0.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3049ms

Probably related trace output:

2025-10-18T10:39:20+02:00 TRC ../runner/work/headscale/headscale/hscontrol/policy/v2/filter.go:50 > resolving destination ips error="autogroup:self requires per-node resolution and cannot be resolved in this context"
2025-10-18T10:39:20+02:00 DBG ../runner/work/headscale/headscale/hscontrol/policy/v2/filter.go:54 > destination resolved to nil ips: {autogroup:self [{0 65535}]}
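
One way to confirm what filter actually reached a client is to dump its netmap, which is presumably how the attached netmap_n1.json/netmap_n2.json were produced. A minimal sketch (the .PacketFilter field name is an assumption and may differ between Tailscale versions; jq 'keys' lists the fields actually present in the dump):

root@n1:~# tailscale debug netmap > netmap_n1.json
root@n1:~# jq 'keys' netmap_n1.json
root@n1:~# jq '.PacketFilter' netmap_n1.json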

Expected Behavior

An updated policy is pushed to nodes and nodes assigned to the same user can communicate with each other.

Steps To Reproduce

  1. Set up the autogroup:self policy (see above)
  2. Create a user alice
  3. Join node n1 for user alice
  4. Join node n2 for user alice
  5. Try pings between n1 and n2

Restarting the server sometimes fixes the policy and the nodes can ping each other. However, subsequent restarts are likely to break it again. See the attached trace logs for: join, start (ok), start (broken).
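
For reference, the reproduction can be scripted roughly as follows. This is a sketch: the login server URL is a placeholder, the pre-auth key must be substituted, and flag details (e.g. whether --user expects a name or a numeric ID) may differ between Headscale versions:

# on the Headscale server: create the user and a pre-auth key for joining
headscale users create alice
headscale preauthkeys create --user alice --expiration 1h

# on n1 and n2: join using the generated key
tailscale up --login-server https://headscale.example.com --authkey <preauthkey> --hostname n1
tailscale up --login-server https://headscale.example.com --authkey <preauthkey> --hostname n2

# step 5: try pings in both directions
ping -W 5 -c 4 100.64.0.2   # from n1
ping -W 5 -c 4 100.64.0.1   # from n2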

Environment

- OS: Debian 13
- Headscale version: 0.27.0-beta.1
- Tailscale version: 1.88.4

Runtime environment

  • [ ] Headscale is behind a (reverse) proxy
  • [ ] Headscale runs in a container

Debug information

  • policy.json: https://github.com/user-attachments/files/22983194/policy.json
  • headscale-serve-trace-join.txt: https://github.com/user-attachments/files/22983304/headscale-serve-trace-join.txt
  • headscale-serve-trace-restart-ok.txt: https://github.com/user-attachments/files/22983305/headscale-serve-trace-restart-ok.txt
  • headscale-serve-trace-restart-broken.txt: https://github.com/user-attachments/files/22983307/headscale-serve-trace-restart-broken.txt
  • netmap_n1.json: https://github.com/user-attachments/files/22983197/netmap_n1.json
  • netmap_n2.json: https://github.com/user-attachments/files/22983196/netmap_n2.json
adam added the bug label 2025-12-29 02:28:20 +01:00
adam closed this issue 2025-12-29 02:28:20 +01:00
Reference: starred/headscale#1113