ACLs problem (confused about why my ACLs are not working) #255

Closed
opened 2025-12-29 01:25:06 +01:00 by adam · 8 comments

Originally created by @yockrain on GitHub (Apr 11, 2022).

Issue description
Dear all,
I set up a Headscale server and three DERP servers.
I created four namespaces and want to control the nodes with ACLs, e.g. a node should only be able to access other nodes in the same namespace, while admin can access all nodes.
However, I found that all nodes can access each other, even across different namespaces.
Could you give me some advice on how to track down the problem? Thanks.
And another question: which command shows the tag of every node, on the client and server command line?
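
(On the second question: recent Tailscale clients expose peer tags in their JSON status output, so a one-liner like the sketch below can list them on any node. This assumes `jq` is available and that the client version includes the `Tags` field in `tailscale status --json`.)

```
# List every peer's hostname together with any ACL tags it carries
tailscale status --json | jq '.Peer[] | {HostName, Tags}'
```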

To Reproduce

Headscale Server

root@VM-12-11-ubuntu:/etc/headscale# headscale version
v0.15.0

root@VM-12-11-ubuntu:~# headscale namespaces list
ID | Name     | Created
1  | newbeeit | 2022-04-07 09:37:18
2  | mca      | 2022-04-09 09:34:39
3  | hs       | 2022-04-11 06:55:21
4  | admin    | 2022-04-11 09:05:47

root@VM-12-11-ubuntu:~# headscale nodes list
ID | Name     | NodeKey | Namespace | IP addresses                 | Ephemeral | Last seen           | Online | Expired
3  | openwrt  | [Ik5aC] | mca       | 10.10.0.3, fd7a:115c:a1e0::3 | false     | 2022-04-11 13:36:50 | online | no
7  | mkadmin  | [XJYjk] | admin     | 10.10.0.1                    | false     | 2022-04-11 13:36:50 | online | no
8  | nboffice | [U/MWL] | admin     | 10.10.0.2                    | false     | 2022-04-11 13:36:25 | online | no
9  | hsdemo01 | [pd5tJ] | hs        | 10.10.0.4                    | false     | 2022-04-11 13:36:40 | online | no
10 | ncserver | [z05pg] | mca       | 10.10.0.5                    | false     | 2022-04-11 13:36:59 | online | no

root@VM-12-11-ubuntu:~# headscale routes list -i 3
Route         | Enabled
10.20.22.0/24 | true
root@VM-12-11-ubuntu:~# headscale routes list -i 7
Route            | Enabled
192.168.201.0/24 | true
root@VM-12-11-ubuntu:~# headscale routes list -i 8
Route            | Enabled
192.168.202.0/24 | true
root@VM-12-11-ubuntu:~# headscale routes list -i 9
Route | Enabled
root@VM-12-11-ubuntu:~# headscale routes list -i 10
Route | Enabled

root@VM-12-11-ubuntu:/etc/headscale# ll
total 36
drwxr-xr-x   2 root root  4096 Apr 11 17:00  ./
drwxr-xr-x 114 root root 12288 Apr  7 17:21  ../
-rw-r--r--   1 root root  1321 Apr 11 17:24  acls.hujson
-rw-r--r--   1 root root  7070 Apr 11 17:00  config.yaml
-rw-r--r--   1 root root   551 Apr  8 23:01  derp.yaml
-rw-r--r--   1 root root  1764 Apr  7 17:31 's -tulnp|grep headscale'

root@VM-12-11-ubuntu:/etc/headscale# cat config.yaml
******hide*******
# Path to a file containing ACL policies.
# ACLs can be defined as YAML or HUJSON.
# https://tailscale.com/kb/1018/acls/
acl_policy_path:
  - /etc/headscale/acls.hujson
******hide*******

root@VM-12-11-ubuntu:/etc/headscale# cat acls.hujson
{
        //group config, as namespaces list
        "groups": {
                "group:admin": ["admin"],
                "group:newbeeit": ["newbeeit"]"],
                "group:mca": ["mca"],
                "group:hs": ["hs"]
        },

        //tag config
        "tagOwners": {
                //admin tag
                "tag:tadmin": ["group:admin"],
                "tag:tnewbeeit": ["group:admin"],
                "tag:tmca": ["group:admin"],
                "tag:ths": ["group:admin"],
                //newbeeit tag
                "tag:tnewbeeit": ["group:newbeeit"],
                //mca tag
                "tag:tmca": ["group:mca"],
                //hs tag
                "tag:ths": ["group:hs"]
        },
"acls": [
        //admin acls
        {
                "action": "accept",
                "users": ["group:admin"],
                "ports": [
                        "tag:tadmin:*",
                        "tag:tnewbeeit:*",
                        "tag:tmca:*",
                        "tag:ths:*"
                ]
        },

        //newbeeit acls
        {
                "action": "accept",
                "users": ["group:newbeeit"],
                "ports": [
                        "tag:tnewbeeit:*"
                ]
        },

        //mca acls
        {
                "action": "accept",
                "users": ["group:mca"],
                "ports": [
                        "tag:tmca:*"
                ]
        },

        //hs acls
        {
                "action": "accept",
                "users": ["group:hs"],
                "ports": [
                        "tag:ths:*"
                ]
        },

        // all self acls
                { "action": "accept",  "users":["admin"], "ports": [tadmin:*] },
                { "action": "accept",  "users":["newbeeit"], "ports": [tnewbeeit:*] },
                { "action": "accept",  "users":["mca"], "ports": [tmca:*] },
                { "action": "accept",  "users":["hs"], "ports": [ths:*] }
        ]
}
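
(Note: the `// all self acls` entries above are not valid HuJSON, because the `ports` selectors are unquoted. As restanrm points out later in this thread, each selector must be a quoted string, e.g.:)

```
// all self acls, with every selector quoted as a HuJSON string
{ "action": "accept", "users": ["admin"],    "ports": ["admin:*"] },
{ "action": "accept", "users": ["newbeeit"], "ports": ["newbeeit:*"] },
{ "action": "accept", "users": ["mca"],      "ports": ["mca:*"] },
{ "action": "accept", "users": ["hs"],       "ports": ["hs:*"] }
```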

node 3

root@OpenWrt:/etc/init.d# tailscale version
1.22.2
  tailscale commit: 60b6******hide*********b4e
  other commit: ecc******hide**********f994
  go version: go1.17.8-tsdce70b6d32

root@OpenWrt:/etc/init.d# tailscale status
10.10.0.3       openwrt              mca          linux   -
                hsdemo01             hs           windows active; direct 192.168.0.101:41641; offline, tx 43076 rx 46292
                mkadmin              admin        linux   active; relay "webc"; offline, tx 49132 rx 39108
                nboffice             admin        linux   active; relay "webc"; offline, tx 43180 rx 26904
                ncserver             mca          linux   active; relay "webc"; offline, tx 23044 rx 14892
root@OpenWrt:/etc/init.d# ip route show table 52
10.10.0.1 dev tailscale0
10.10.0.2 dev tailscale0
10.10.0.4 dev tailscale0
10.10.0.5 dev tailscale0
100.100.100.100 dev tailscale0
192.168.201.0/24 dev tailscale0
192.168.202.0/24 dev tailscale0

root@OpenWrt:/etc/init.d# tailscale ping 10.10.0.1
pong from mkadmin (10.10.0.1) via DERP(webc) in 65ms
pong from mkadmin (10.10.0.1) via DERP(webc) in 71ms
pong from mkadmin (10.10.0.1) via DERP(webc) in 64ms
pong from mkadmin (10.10.0.1) via DERP(webc) in 59ms

root@OpenWrt:/etc/init.d# tailscale ping 10.10.0.2
pong from nboffice (10.10.0.2) via DERP(webc) in 58ms
pong from nboffice (10.10.0.2) via DERP(webc) in 54ms
pong from nboffice (10.10.0.2) via DERP(webc) in 60ms
pong from nboffice (10.10.0.2) via DERP(webc) in 77ms

root@OpenWrt:/etc/init.d# tailscale ping 10.10.0.4
pong from hsdemo01 (10.10.0.4) via 192.168.0.101:41641 in 0s

root@OpenWrt:/etc/init.d# tailscale ping 10.10.0.5
pong from ncserver (10.10.0.5) via DERP(webc) in 60ms
pong from ncserver (10.10.0.5) via DERP(webc) in 70ms
pong from ncserver (10.10.0.5) via DERP(webc) in 69ms
pong from ncserver (10.10.0.5) via DERP(webc) in 88ms

root@OpenWrt:/etc/init.d# tailscale ping 192.168.201.250
pong from mkadmin (10.10.0.1) via DERP(webc) in 72ms
pong from mkadmin (10.10.0.1) via DERP(webc) in 62ms
pong from mkadmin (10.10.0.1) via DERP(webc) in 66ms

root@OpenWrt:/etc/init.d# tailscale ping 192.168.202.252
pong from nboffice (10.10.0.2) via DERP(webc) in 63ms
pong from nboffice (10.10.0.2) via DERP(webc) in 59ms
pong from nboffice (10.10.0.2) via DERP(webc) in 54ms

node 7

root@sdwan:~# tailscale version
1.22.2
  tailscale commit: 60b67********hide**********4b4e
  other commit: ecc5d9*****hide*********c7ef994
  go version: go1.17.8-tsdce70b6d32

root@sdwan:~# tailscale status
10.10.0.1       mkadmin              admin        linux   -
                hsdemo01             hs           windows active; relay "mic"; offline, tx 62788 rx 65496
                nboffice             admin        linux   active; relay "webc"; offline, tx 1050196 rx 1674640
                ncserver             mca          linux   active; direct 192.168.201.150:41641; offline, tx 58396 rx 1120516
fd7a:115c:a1e0::3 openwrt              mca          linux   active; relay "mic"; offline, tx 54356 rx 49640

root@sdwan:~# ip route show table 52
10.10.0.2 dev tailscale0
10.10.0.3 dev tailscale0
10.10.0.4 dev tailscale0
10.10.0.5 dev tailscale0
10.20.22.0/24 dev tailscale0
100.100.100.100 dev tailscale0
192.168.202.0/24 dev tailscale0

root@sdwan:~# tailscale ping 10.10.0.2
pong from nboffice (10.10.0.2) via DERP(webc) in 33ms
pong from nboffice (10.10.0.2) via DERP(webc) in 21ms
pong from nboffice (10.10.0.2) via DERP(webc) in 21ms

root@sdwan:~# tailscale ping 10.10.0.3
pong from openwrt (10.10.0.3) via DERP(webc) in 75ms
pong from openwrt (10.10.0.3) via DERP(webc) in 62ms
pong from openwrt (10.10.0.3) via DERP(webc) in 62ms

root@sdwan:~# tailscale ping 10.10.0.4
pong from hsdemo01 (10.10.0.4) via DERP(webc) in 62ms
pong from hsdemo01 (10.10.0.4) via DERP(webc) in 66ms
pong from hsdemo01 (10.10.0.4) via DERP(webc) in 54ms

root@sdwan:~# tailscale ping 10.10.0.5
pong from ncserver (10.10.0.5) via 192.168.201.150:41641 in 2ms

root@sdwan:~# tailscale ping 192.168.202.252
pong from nboffice (10.10.0.2) via DERP(webc) in 22ms
pong from nboffice (10.10.0.2) via DERP(webc) in 93ms
pong from nboffice (10.10.0.2) via DERP(webc) in 20ms

root@sdwan:~# tailscale ping 10.20.22.253
pong from openwrt (10.10.0.3) via DERP(mic) in 70ms
pong from openwrt (10.10.0.3) via DERP(webc) in 71ms

And the other nodes behave the same as nodes 3 and 7.

adam added the bug label 2025-12-29 01:25:06 +01:00
adam closed this issue 2025-12-29 01:25:06 +01:00

@awsong commented on GitHub (Apr 12, 2022):

I'm seeing the same problem.
According to the [Tailscale tag documentation](https://tailscale.com/kb/1068/acl-tags/#defining-a-tag), "Once a device has been tagged, it loses the access permissions of the human user who tagged it". So if tagging is successful, `headscale node ls` shouldn't list the node under the human user's namespace anymore.
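
(For context: with the stock Tailscale client, a node requests its tags at login time via `--advertise-tags`; whether headscale honours this is exactly what is in question here. A sketch, where the login-server URL is a placeholder:)

```
# Request a tag for this node at login; the URL is a placeholder
tailscale up --login-server=https://headscale.example.com --advertise-tags=tag:ths
```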


@awsong commented on GitHub (Apr 12, 2022):

I suspect that tag-related functions are not implemented yet, because I searched the code base and didn't find any tag-applying code. @juanfont, could you please give some hints? I'd like to help.


@yockrain commented on GitHub (Apr 12, 2022):

@awsong Thank you for your help. I look forward to your good news, and I hope there will be a solution soon.


@restanrm commented on GitHub (Apr 15, 2022):

> I'm seeing the same problem. According to the [Tailscale tag documentation](https://tailscale.com/kb/1068/acl-tags/#defining-a-tag), "Once a device has been tagged, it loses the access permissions of the human user who tagged it". So if tagging is successful, `headscale node ls` shouldn't list the node under the human user's namespace anymore.

That's right, nothing is implemented on the presentation side for this; it's a work in progress. But behind the curtains, the behaviour should be similar to Tailscale: a tagged node should not match its user in the ACLs. There are some unit tests to validate this behaviour.


@restanrm commented on GitHub (Apr 15, 2022):

At a quick look, it seems your `acls.hujson` file is invalid. The behaviour of the latest release version is to let all traffic through in that case. On the main branch of Headscale this behaviour has been changed: if the file has issues, Headscale stops.

What I can see that is wrong is the `ports` in the `self acls` section.

Have a look at [this document](https://github.com/juanfont/headscale/blob/main/docs/acls.md), and keep in mind that Namespaces are Users (we planned to rename namespace to user in the next release, but the work hasn't been started yet). With OIDC enabled, the Namespace is the user's email address.
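
(One quick way to check for a rejected policy file, assuming a systemd-based install with the usual `headscale` unit name, is to restart the service and watch its logs for an ACL parse error:)

```
# Restart headscale and follow its logs to spot ACL parsing errors
sudo systemctl restart headscale
sudo journalctl -u headscale -f
```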


@yockrain commented on GitHub (Apr 16, 2022):

@restanrm Thank you for the explanation pointing out the mistake in my ACLs file. Based on your advice, here is the corrected part of acls.hujson. Is it correct or not?

        // all self acls
                { "action": "accept",  "users":["admin"], "ports": [admin:*] },
                { "action": "accept",  "users":["newbeeit"], "ports": [newbeeit:*] },
                { "action": "accept",  "users":["mca"], "ports": [mca:*] },
                { "action": "accept",  "users":["hs"], "ports": [hs:*] }

@restanrm commented on GitHub (Apr 26, 2022):

> @restanrm Thank you for the explanation pointing out the mistake in my ACLs file. Based on your advice, here is the corrected part of acls.hujson. Is it correct or not?
>
>         // all self acls
>                 { "action": "accept",  "users":["admin"], "ports": [admin:*] },
>                 { "action": "accept",  "users":["newbeeit"], "ports": [newbeeit:*] },
>                 { "action": "accept",  "users":["mca"], "ports": [mca:*] },
>                 { "action": "accept",  "users":["hs"], "ports": [hs:*] }

Sorry, I forgot to reply. Use the following:

{ "action": "accept",  "users":["admin"], "ports": ["admin:*"] },
{ "action": "accept",  "users":["newbeeit"], "ports": ["newbeeit:*"] },
{ "action": "accept",  "users":["mca"], "ports": ["mca:*"] },
{ "action": "accept",  "users":["hs"], "ports": ["hs:*"] },

@kradalby commented on GitHub (Jun 12, 2022):

I am going to shelve this for now; if this is still an issue, please reopen.

Reference: starred/headscale#255