[Bug] autoApprove works only after advertising and un-advertising and advertising again #977

Closed
opened 2025-12-29 02:26:59 +01:00 by adam · 1 comment
Owner

Originally created by @eblfo on GitHub (Mar 17, 2025).

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Hello, sorry to bother you with this.

I run tailscale/tailscale:v1.80.3 on a Linux instance, with headscale/headscale:v0.25.1 as the controller, also on Linux.

My current workaround to get autoApprove working:

  • the Tailscale client is started advertising its routes (--advertise-routes=172.32.0.0/16,172.31.0.0/16,172.23.0.0/16),
  • then stopped,
  • then started with the routes un-advertised (--advertise-routes=""),
  • then stopped,
  • then started again with the original advertised routes.

On the first run, the routes list looks like this:

ID | Node | Prefix        | Advertised | Enabled | Primary
7  | ec2  | 172.23.0.0/16 | true       | false   | false  
8  | ec2  | 172.31.0.0/16 | true       | false   | false  
9  | ec2  | 172.32.0.0/16 | true       | false   | false  

After un-advertising:

ID | Node | Prefix        | Advertised | Enabled | Primary
7  | ec2  | 172.23.0.0/16 | false      | false   | false  
8  | ec2  | 172.31.0.0/16 | false      | false   | false  
9  | ec2  | 172.32.0.0/16 | false      | false   | false  

After re-advertising:

ID | Node | Prefix        | Advertised | Enabled | Primary
7  | ec2  | 172.23.0.0/16 | true       | true    | true   
8  | ec2  | 172.31.0.0/16 | true       | true    | true   
9  | ec2  | 172.32.0.0/16 | true       | true    | true   

Expected Behavior

The routes should be auto-enabled from the start:

ID | Node | Prefix        | Advertised | Enabled | Primary
7  | ec2  | 172.23.0.0/16 | true       | true    | true   
8  | ec2  | 172.31.0.0/16 | true       | true    | true   
9  | ec2  | 172.32.0.0/16 | true       | true    | true   
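The expectation can be sketched as a tiny model (illustrative only; this is not headscale's implementation): with a matching auto-approver in place, a route's Enabled flag should simply follow its Advertised flag on every run, including the very first one.

```python
# Illustrative model (not headscale's code): when an auto-approver
# covers a route and the node carries the matching tag, Enabled should
# track Advertised at every step -- including the very first run.
ROUTES = ("172.23.0.0/16", "172.31.0.0/16", "172.32.0.0/16")

def expected_table(advertised: bool):
    """Expected (prefix, advertised, enabled) rows for one run."""
    return [(prefix, advertised, advertised) for prefix in ROUTES]

runs = {
    "first run (advertised)": expected_table(True),
    "after un-advertising": expected_table(False),
    "after re-advertising": expected_table(True),
}

for name, table in runs.items():
    for prefix, adv, enabled in table:
        print(f"{name}: {prefix} advertised={adv} enabled={enabled}")
```

The observed behavior diverges only on the first run, where Enabled stays false despite the routes being advertised and covered by the policy.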

Steps To Reproduce

See the workaround described under "Current Behavior" above.

Environment

- OS:
- Headscale version: v0.25.1 (docker)
  - `docker run -it --rm --network hs --name hs -p 8081:80 -p 8080:8080 -p 9090:9090 -v ~/.tmp_/hs/config:/etc/headscale -v ~/.tmp_/hs/data:/var/lib/headscale --entrypoint headscale headscale/headscale:v0.25.1 serve`
- Tailscale version: v1.80.3 (docker)
  - `docker run -it --rm  --name hs  -v ~/.tmp_/hs/config:/etc/headscale -v ~/.tmp_/hs/data:/var/lib/headscale -e "TS_AUTHKEY=..." -e "TS_EXTRA_ARGS=--login-server https://hsc.name.hidden:8080  --advertise-tags=tag:router --advertise-routes=172.32.0.0/16,172.31.0.0/16,172.23.0.0/16" -e TS_HOSTNAME=ec2 --privileged -e TS_STATE_DIR=/var/lib/tailscale -v ~/.tmp_/hs/state:/var/lib/tailscale -e TS_USERSPACE=false --device /dev/net/tun:/dev/net/tun  tailscale/tailscale:v1.80.3`

headscale is hosted on a vm behind a nat router (home isp router with port forwarding to access headscale from the internet)
tailscale is hosted on a vm inside a vpc with internet gw to access the internet (no incoming sg rule)

config:

server_url: https://hsc.name.hidden:8080

listen_addr: 0.0.0.0:8080              
metrics_listen_addr: 127.0.0.1:9090

tls_letsencrypt_hostname: "hsc.name.hidden" 
tls_letsencrypt_cache_dir: ".cache"
tls_letsencrypt_challenge_type: HTTP-01
tls_letsencrypt_listen: ":80" 

prefixes: 
  v4: 100.64.0.0/10

dns:
  base_domain: hsc.name.hidden  # Replace with your domain
  magic_dns: true
  nameservers: 
    global: 
      - "1.1.1.1"   

noise:
  private_key_path: /var/lib/headscale/noise_private.key

database:
  type: sqlite
  sqlite:
    path: /var/lib/headscale/db.sqlite

derp:
  server:
    enabled: true
    region_id: 999
    region_code: "headscale"
    region_name: "Headscale Embedded DERP"
    stun_listen_addr: "0.0.0.0:3478"
    private_key_path: /var/lib/headscale/derp_server_private.key
    automatically_add_embedded_derp_region: true

oidc:
  issuer: "https://name.hidden"  # URL of your identity provider
  client_id: "0oapon8jbpllqdWSa697"          # Client ID from your identity provider
  client_secret: "...."  # Client Secret from your identity provider
  scope: ["openid", "profile", "email"] # Scopes for user authentication

log:
  format: text       # Options: "text" or "json"
  level: debug       # Options: "info", "warn", "error", "debug", or "trace"

policy:
  mode: file
  path: "/etc/headscale/acl.hujson"

policy (acl.hujson):

{
  "groups": {
    "group:routers": ["ec2"]
  },
  "tagOwners": {
    "tag:router": ["group:routers"]
  },
  "autoApprovers": {
    "routes": {
      "172.0.0.0/8": [ "tag:router" ],
    }
  },
  "acls": [
    {
      "action": "accept",
      "src": ["*"],
      "dst": ["*:*"]
    }
  ]
}
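The policy itself looks correct: each advertised /16 falls inside the 172.0.0.0/8 auto-approver prefix. This can be sanity-checked with Python's ipaddress module (a standalone check that mirrors, but is not, headscale's matching logic):

```python
from ipaddress import ip_network

# Sanity check: every advertised route must be a subnet of the
# autoApprovers prefix for auto-approval to apply at all.
approver = ip_network("172.0.0.0/8")
advertised = ["172.32.0.0/16", "172.31.0.0/16", "172.23.0.0/16"]

for prefix in advertised:
    covered = ip_network(prefix).subnet_of(approver)
    print(f"{prefix} covered by {approver}: {covered}")  # all True
```

Since all three prefixes are covered and the node advertises tag:router, the routes should qualify for auto-approval on the first run.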

On the VPC node I add a firewall rule so clients can reach the services behind the advertised routes:
iptables -t nat -A POSTROUTING -o ens5 -j MASQUERADE

Runtime environment

  • Headscale is behind a (reverse) proxy
  • Headscale runs in a container

Debug information

No useful debug output: route management is not logged, even at trace level.

The only difference I see is on the 3rd run (re-advertising); it is not repeated after restarting Tailscale a 4th time:

2025-03-17T15:12:13Z DBG Expanding alias=tag:router
2025-03-17T15:12:13Z DBG Expanding alias=tag:router
2025-03-17T15:12:13Z DBG Expanding alias=tag:router
adam added the bug and well described ❤️ labels 2025-12-29 02:26:59 +01:00
adam closed this issue 2025-12-29 02:26:59 +01:00
Author
Owner

@kradalby commented on GitHub (Mar 27, 2025):

Thank you very much for such a well-described issue; it made this a lot easier to track down. I've fixed this bug and it will be included in the upcoming 0.26.


Reference: starred/headscale#977