[Bug] SSH policy not working after update to 0.26.1 #1058

Closed
opened 2025-12-29 02:28:00 +01:00 by adam · 17 comments
Owner

Originally created by @masterwishx on GitHub (Jul 6, 2025).

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After updating to 0.26.1, nodes had issues with connection and ping; after a couple of logouts and logins everything else is fine, but SSH is not working:

# Health check:
#     - Tailscale SSH enabled, but access controls don't allow anyone to access this device. Update your tailnet's ACLs to allow access.
{
  "groups": {
    "group:admin": ["user_me@"],
    "group:family": ["user1@", "user2@", "user3@"]
  },

  "tagOwners": {
    "tag:cloud-server": ["group:admin"],
    "tag:home-pc": ["group:admin", "group:family"],
    "tag:home-pc-vm": ["group:admin"],
    "tag:home-server": ["group:admin"],
    "tag:home-server-vm": ["group:admin"],
    "tag:home-mobile": ["group:admin", "group:family"],
    "tag:home-mobile-vm": ["group:admin", "group:family"]
  },

  // Home Subnet Route - host IP addresses from 192.168.0.1 to 192.168.1.254
  // '/23' mask to avoid collision with home local lan network https://tailscale.com/kb/1023/troubleshooting#lan-traffic-prioritization-with-overlapping-subnet-routes
  "hosts": {
    "home-lan-network": "192.168.0.0/23"
  },
  
  "acls": [
    {
      // Admin have access to all servers
      "action": "accept",
      "src": ["group:admin"],
      "dst": ["*:*"]
    },

    {
      // Family have access to all home pcs,Speedtest Tracker
      "action": "accept",
      "src": ["group:family"],
      "dst": ["tag:home-pc:*", "tag:home-server:9443", "tag:home-server:8180"]
    },

    {
      "action": "accept",
      "src": ["tag:cloud-server"],
      "dst": ["home-lan-network:*"]
    }

    // We still have to allow internal users communications since nothing guarantees that each user have
    // their own users.
    //{ "action": "accept", "src": ["admin"], "dst": ["admin:*"] },
    //{ "action": "accept", "src": ["family"], "dst": ["family:*"] }
  ],

  "ssh": [
    {
      "action": "accept",
      //"src": ["tag:cloud-server", "tag:home-server", "tag:home-pc"],
      "src": ["group:admin"],
      "dst": ["tag:cloud-server", "tag:home-server"],
      "users": ["root", "ubuntu", "abc"]
    }
  ]
}

Expected Behavior

SSH should work by tags.

Steps To Reproduce

Updated, then re-logged in on all nodes.

Environment

- OS:
- Headscale version: 0.26.1
- Tailscale version: 1.84.2

Runtime environment

  • Headscale is behind a (reverse) proxy
  • Headscale runs in a container

Debug information

After debugging, found:

  "SSHPolicy": {
    "rules": null
  },
adam added the bug, no-stale-bot, policy 📝, SSH labels 2025-12-29 02:28:00 +01:00
adam closed this issue 2025-12-29 02:28:00 +01:00

@aritas1 commented on GitHub (Jul 8, 2025):

@masterwishx Do your user_me nodes have any tags assigned?
For me, ACLs only work if the node has no (forced) tags assigned. Maybe this applies to the SSH part as well?


@masterwishx commented on GitHub (Jul 9, 2025):

@masterwishx Do your user_me nodes have any tags assigned? For me, ACLs only work if the node has no (forced) tags assigned. Maybe this applies to the SSH part as well?

Yep. All nodes have tags.

Forgot to mention I'm using headscale-admin + headplane.

The ACL is in the database because of the web UI.

Also found I needed to enable "override DNS", but when I enabled it I lost connection with the nodes.

But when it was false, I also found that "tailscale dns status" did not contain the headscale DNS entries.

Somehow a search domain from the Oracle VPS was added to resolv.conf, which may have caused the DNS issue.

After a node re-login, resolv.conf was changed (the Oracle VPS search domain was removed), but after a headscale container restart it was added again.


@masterwishx commented on GitHub (Jul 9, 2025):

I also have AdGuard in a container as an exit node on the same machine as the headscale container.

This part was missing in "tailscale dns status" when the Oracle VPS search domain was added to resolv.conf:

Multiple resolvers available:

  • https://cloudflare-dns.com/dns-query
  • 100.64.0.15 // adguard container
  • 1.1.1.1
  • 1.0.0.1
  • 2606:4700:4700::1111
  • 2606:4700:4700::1001

Anyway, after a day of trying to get it working, I went back to 0.25.


@kilogram commented on GitHub (Jul 16, 2025):

I recently migrated from tailscale to headscale, and also encountered this issue. I still have 0.26.1 installed, so I am happy to help debug/be a guinea pig.

(I have for now excluded the SSH policy: acl.json)


@masterwishx commented on GitHub (Jul 16, 2025):

I still have 0.26.1 installed, so I am happy to help debug/be a guinea pig.

I hope @nblock or @kradalby will pay attention to it so you can help with testing.


@kradalby commented on GitHub (Jul 16, 2025):

It’s appreciated; at the moment I am swamped with some other big work, but I hope to come back to fixing bugs in a few weeks.


@masterwishx commented on GitHub (Jul 16, 2025):

It’s appreciated, at the moment I am swamped with some other big work, but I hope to come back to fixing bugs in a few weeks

Thanks, I'm just sorry I can't help here, because I went back to 0.25. But if needed, I will try it again.


@Renerick commented on GitHub (Sep 3, 2025):

I have encountered this issue as well. After adding a node and assigning a tag to it, tailscale status still showed that it had no appropriate SSH access rules, despite other nodes with the same exact tags working completely fine. I connected to the /debug/ endpoint and found that the new node had no SSH policy assigned to it:

  "id:5  hostname:[REDACTED] givenname:[REDACTED]": {
    "rules": null
  }

Bizarrely, the fix was trivial: restarting the headscale process on the server.

  "id:5  hostname:[REDACTED] givenname:[REDACTED]": {
    "rules": [
      {
        "principals": [
          {
            "nodeIP": "[REDACTED]"
          }
        ],
        "sshUsers": {
          "[REDACTED]": "="
        },
        "action": {
          "accept": true,
          "allowAgentForwarding": true,
          "allowLocalPortForwarding": true
        }
      }
    ]
  }

and SSH worked fine after that
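The before/after observation above can be sketched as a quick check; the docker container name `headscale` and the debug output shapes are assumptions based on this thread:

```shell
# Sketch of the restart workaround: restart headscale, then confirm the
# node's compiled SSH rules are no longer null.
# docker restart headscale    # side-effecting step, shown for illustration only
before='{ "rules": null }'
after='{ "rules": [ { "action": { "accept": true } } ] }'
echo "$before" | grep -q '"rules": null'  && echo "before restart: no rules compiled"
echo "$after"  | grep -q '"accept": true' && echo "after restart: rules present"
```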


@masterwishx commented on GitHub (Sep 4, 2025):

and SSH worked fine after that

Interesting. I also restarted the headscale Docker container but still had the issue. I'm also using AdGuard in Docker as an exit node for headscale, so I had an issue with resolv.conf in Ubuntu...

Is it better to wait for the new 0.27 version, maybe, to try it?


@kradalby commented on GitHub (Sep 10, 2025):

There are known bugs with the tags system which I will work on for 0.28.0. The most prominent issue is that updates for tags are not propagated so headscale has to be restarted.

Outside of that, I would not expect any regressions in the tags system in regards to SSH. I will try to investigate, but if it doesn't yield something, I will push this issue to 0.28.


@almereyda commented on GitHub (Sep 23, 2025):

"rules": null can also be observed under the condition of

  • https://github.com/juanfont/headscale/issues/2651

It appears

  • #2411

was applied a little too early for many who don't upgrade often, like for us coming from v0.23 and jumping straight to v0.26.

Perhaps the migration system could hold information about these kinds of multi-step migrations. GitLab and Nextcloud, for example, deny an upgrade if the database wasn't yet migrated to the lowest or other supported versions.

For us, the policy could be changed in a way that made it work again by considering:

  • that #2651 yielded duplicate users and new nodes for those, and
  • that the policy does not select either of the two when duplicate users are present, possibly related to #2641,

leading us to choose a tag-only approach for src and dst, which then worked.
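For reference, a tag-only variant of the SSH rule from this report might look like the sketch below. The tag names and user list are taken from the policy quoted earlier in this issue; this mirrors the workaround described above, not a confirmed fix:

```jsonc
"ssh": [
  {
    "action": "accept",
    // tag-only src and dst, per the workaround described above;
    // the group:admin source is what reportedly failed to compile
    "src": ["tag:cloud-server", "tag:home-server"],
    "dst": ["tag:cloud-server", "tag:home-server"],
    "users": ["root", "ubuntu", "abc"]
  }
]
```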


@masterwishx commented on GitHub (Sep 23, 2025):

leading us to choose a tag-only approach for src and dst, which then worked.

So you mean a source group is not working in SSH?

  "ssh": [
    {
      "action": "accept",
      //"src": ["tag:cloud-server", "tag:home-server", "tag:home-pc"],
      "src": ["group:admin"],
      "dst": ["tag:cloud-server", "tag:home-server"],
      "users": ["root", "ubuntu", "abc"]
    }
  ]

@almereyda commented on GitHub (Sep 23, 2025):

Yes, similarly to using a user@ username as source, but possibly related to another regression of an incomplete migration from v0.23 to v0.26. See #2785

The issue has an example of a working, tag-based configuration.


@masterwishx commented on GitHub (Sep 23, 2025):

Interesting, I updated to every beta and release one by one when they came out...


@kradalby commented on GitHub (Dec 12, 2025):

Changes to separate tags from users have been merged into main in #2885 and #2931. I encourage you to help test this if you are able to build main and run it.

I will close this to track progress, but there might still be bugs and the like related to this change. As part of hardening this feature, we are tracking all related tags bugs over time in the v0.28.0 milestone.

I think that should have resolved this; it would be great to have someone help test main.


@masterwishx commented on GitHub (Dec 12, 2025):

Changes to separate tags from users have been merged into main in #2885 and #2931. I encourage you to help test this if you are able to build main and run it.

I will close this to track progress, but there might still be bugs and the like related to this change. As part of hardening this feature, we are tracking all related tags bugs over time in the v0.28.0 milestone.

I think that should have resolved this; it would be great to have someone help test main.

I'm currently on the 0.25 Docker image.
I'd need to update through 0.26.x and 0.27.x, and then I think main doesn't have a Docker image to update to :(


@masterwishx commented on GitHub (Dec 12, 2025):

Maybe you are planning to release some package/image RC or beta soon that I can test?
