[Bug] Unsolicited node logout #1120

Closed
opened 2025-12-29 02:28:22 +01:00 by adam · 21 comments

Originally created by @Haarolean on GitHub (Oct 28, 2025).

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After an upgrade from 0.27.0-beta.2 to 0.27.0, my node got logged out. It was set up in March and had worked fine until now.
As you can see, the node is not expired according to the headscale CLI, yet it cannot connect.

Node info:

178156d4f23618:/# headscale nodes list
ID | Hostname          | Name               | MachineKey | NodeKey | User      | IP addresses                    | Ephemeral | Last seen           | Expiration          | Connected | Expired
70 | vultr-fra         | vultr-fra          | [UAb0V]    | [rcAao] | haarolean | 100.64.0.4, fd7a:115c:a1e0::4   | false     | 2025-10-28 07:51:21 | 0001-01-01 00:00:00 | offline   | no

Key info (unrelated I guess):

178156d4f23618:/# headscale preauthkeys list --user 1
ID | Key                                              | Reusable | Ephemeral | Used  | Expiration          | Created             | Tags
7  | redactedredactedredactedredactedredactedredacted | false    | false     | true  | 2025-03-03 18:20:59 | 2025-03-03 17:20:59 |

Node logs:

2025/10/28 07:49:51 magicsock: derp-28 does not know about peer [FzyBc], removing route
2025/10/28 07:49:57 magicsock: derp-28 does not know about peer [FzyBc], removing route
2025/10/28 07:49:59 [RATELIMIT] format("netstack: could not bind local port %v: %v, trying again with random port") (8 dropped)
2025/10/28 07:49:59 netstack: could not bind local port 51413: listen udp 0.0.0.0:51413: bind: address already in use, trying again with random port
2025/10/28 07:50:02 magicsock: derp-28 does not know about peer [FzyBc], removing route
2025/10/28 07:50:11 magicsock: derp-28 does not know about peer [FzyBc], removing route
2025/10/28 07:50:16 magicsock: derp-20 does not know about peer [NKles], removing route
2025/10/28 07:51:02 magicsock: closing connection to derp-2 (idle), age 1m24s
boot: 2025/10/28 07:51:11 tailscaled exited
boot: 2025/10/28 07:51:11 Starting tailscaled
boot: 2025/10/28 07:51:11 Waiting for tailscaled socket
2025/10/28 07:51:11 logtail started
2025/10/28 07:51:11 Program starting: v1.80.0-t649a71f8a, Go 1.23.5: []string{"tailscaled", "--socket=/tmp/tailscaled.sock", "--statedir=/state", "--tun=userspace-networking", "--socks5-server=:1055"}
2025/10/28 07:51:11 LogID: ae55adf1830f573c438eaf75d2d93c0dd40e53c67c15a0e35ce56af7fd767552
2025/10/28 07:51:11 logpolicy: using system state directory "/var/lib/tailscale"
2025/10/28 07:51:11 dns: [rc=unknown ret=direct]
2025/10/28 07:51:11 dns: using "direct" mode
2025/10/28 07:51:11 dns: using *dns.directManager
2025/10/28 07:51:11 dns: inotify addwatch: context canceled
2025/10/28 07:51:11 wgengine.NewUserspaceEngine(tun "userspace-networking") ...
2025/10/28 07:51:11 dns: using dns.noopManager
2025/10/28 07:51:11 link state: interfaces.State{defaultRoute=eth0 ifs={eth0:[172.18.0.2/16]} v4=true v6=false}
2025/10/28 07:51:11 onPortUpdate(port=55290, network=udp6)
2025/10/28 07:51:11 onPortUpdate(port=35051, network=udp4)
2025/10/28 07:51:11 magicsock: disco key = d:a78dab3803783aae
2025/10/28 07:51:11 Creating WireGuard device...
2025/10/28 07:51:11 Bringing WireGuard device up...
2025/10/28 07:51:11 Bringing router up...
2025/10/28 07:51:11 Clearing router settings...
2025/10/28 07:51:11 Starting network monitor...
2025/10/28 07:51:11 Engine created.
2025/10/28 07:51:11 pm: using backend prefs for "profile-24aa": Prefs{ra=false dns=false want=true routes=[0.0.0.0/0 ::/0] snat=true statefulFiltering=false tags=tag:exit-node nf=on url="https://REDACTED/"
host="vultr-fra" update=check Persist{o=, n=[rcAao] u="redacted-user"}}
2025/10/28 07:51:11 logpolicy: using system state directory "/var/lib/tailscale"
2025/10/28 07:51:11 got LocalBackend in 12ms
2025/10/28 07:51:11 Start
2025/10/28 07:51:11 Backend: logs: be:ae55adf1830f573c438eaf75d2d93c0dd40e53c67c15a0e35ce56af7fd767552 fe:
2025/10/28 07:51:11 control: client.Login(0)
2025/10/28 07:51:11 control: doLogin(regen=false, hasUrl=false)
2025/10/28 07:51:11 health(warnable=warming-up): error: Tailscale is starting. Please wait.
boot: 2025/10/28 07:51:12 [warning] failed to symlink socket: file exists
        To interact with the Tailscale CLI please use `tailscale --socket="/tmp/tailscaled.sock"`
boot: 2025/10/28 07:51:12 Running 'tailscale up'
2025/10/28 07:51:12 Start
2025/10/28 07:51:12 control: control server key from https://REDACTED: ts2021=[zlqQn], legacy=
2025/10/28 07:51:12 control: RegisterReq: onode= node=[rcAao] fup=false nks=false
2025/10/28 07:51:12 Backend: logs: be:ae55adf1830f573c438eaf75d2d93c0dd40e53c67c15a0e35ce56af7fd767552 fe:
2025/10/28 07:51:12 control: client.Login(0)
2025/10/28 07:51:12 control: client.Shutdown ...
2025/10/28 07:51:12 control: mapRoutine: exiting
2025/10/28 07:51:12 control: doLogin(regen=false, hasUrl=false)
2025/10/28 07:51:12 control: authRoutine: exiting
2025/10/28 07:51:12 control: updateRoutine: exiting
2025/10/28 07:51:12 control: Client.Shutdown done.
2025/10/28 07:51:12 health(warnable=login-state): error: You are logged out.
2025/10/28 07:51:12 control: control server key from https://REDACTED: ts2021=[zlqQn], legacy=
2025/10/28 07:51:12 control: RegisterReq: onode= node=[rcAao] fup=false nks=false
2025/10/28 07:51:12 control: controlhttp: forcing port 443 dial due to recent noise dial
2025/10/28 07:51:12 control: RegisterReq: got response; nodeKeyExpired=false, machineAuthorized=false; authURL=false
2025/10/28 07:51:12 Received error: authkey expired
backend error: authkey expired
2025/10/28 07:51:12 health(warnable=login-state): error: You are logged out. The last login error was: authkey expired
boot: 2025/10/28 07:51:12 Sending SIGTERM to tailscaled
2025/10/28 07:51:12 tailscaled got signal terminated; shutting down

Headscale logs don't show anything suspicious. I did get a few "Cleaning up node that has been offline for too long" messages for the other nodes, which is also odd, since those nodes have been online today.

Expected Behavior

Expected the node to stay logged in and connected after the upgrade.

Steps To Reproduce

No idea.

Environment

- OS: alpine
- Headscale version: 0.27.0
- Tailscale version: tailscale/tailscale:latest, image id `1fdb8c5f78a3`

Runtime environment

  • Headscale is behind a (reverse) proxy
  • Headscale runs in a container

Debug information

provided in "current behavior"

adam added the bug label 2025-12-29 02:28:22 +01:00
adam closed this issue 2025-12-29 02:28:23 +01:00

@Haarolean commented on GitHub (Oct 28, 2025):

Tried downgrading, re-authenticating the node, and upgrading again: couldn't reproduce this, the node stays online.


@Nathanael-Mtd commented on GitHub (Oct 28, 2025):

Hello, I think I hit the same issue on 0.27.0, with Tailscale in a container (Docker/Podman); when I roll back to 0.26.x it works fine.

If a node was online during the upgrade from 0.26.0 to 0.27.0, it stays connected, but after a restart its auth fails.
If I generate a new pre-auth key (not reusable), login works, but after a restart I get the same issue again.

@Haarolean Can you check whether you can reproduce the issue the same way?
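Roughly, the restart repro condensed into commands. This is only a sketch, assuming a docker compose setup with services named headscale and tailscale, as in the compose file quoted later in this thread:

# upgrade the server from 0.26.x to 0.27.0 while the node is connected
docker compose pull headscale && docker compose up -d headscale
# restart the client; it now fails to re-authenticate
docker restart tailscale
docker logs tailscale 2>&1 | grep -Ei "authkey|logged out"
# on the server, the node shows offline but not expired
docker exec headscale headscale nodes list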


@Haarolean commented on GitHub (Oct 28, 2025):

Can confirm: restarting the node results in it being logged out again.


@aalmenar commented on GitHub (Nov 5, 2025):

Exactly what @Nathanael-Mtd reported in https://github.com/juanfont/headscale/issues/2830#issuecomment-3456979078 happened to me as well.

Maybe it's related to the in-memory change that also affects OIDC session expiration, as reported in https://github.com/juanfont/headscale/issues/2862.


@kradalby commented on GitHub (Nov 5, 2025):

Can you guys try https://github.com/juanfont/headscale/pull/2859?


@jerkovicl commented on GitHub (Nov 12, 2025):

I still see this error in the logs, even with the latest version:

DEBUG: Received error: handling register with auth key: AuthKey not found
2025-11-12T14:03:21.162Z [agent] DEBUG: Failed to connect to Tailnet: tsnet.Up: backend: handling register with auth key: AuthKey not found
2025-11-12T14:03:21.165Z [agent] WARN: Headplane agent exited with code 1 and signal undefined
2025-11-12T14:03:21.165Z [agent] WARN: Headplane agent will restart in 305.972 seconds (attempt 40)

EDIT:
I am using a Docker setup on Ubuntu, with Keycloak as the OIDC provider for Headscale and Headplane, and a Tailscale container as an exit node.
Everything works except the agent integration, which throws the error above on startup.
@kradalby do you need my headplane config?

Docker compose:

headscale:
    image: "headscale/headscale:latest"
    container_name: "headscale"
    pull_policy: always
    restart: unless-stopped
    command: serve
    healthcheck:
        test: ["CMD", "headscale", "health"]
    dns:
      - 127.0.0.1
      # Sets a backup server of your choosing in case DNSMasq has problems starting
      - 1.1.1.1
    volumes:
      - ${USERDIR}/docker/headscale/config/config.yaml:/etc/headscale/config.yaml
      - ${USERDIR}/docker/headscale/config/extra_records.json:/var/lib/headscale/extra_records.json
      - ${USERDIR}/docker/headscale:/var/lib/headscale
      - /etc/localtime:/etc/localtime:ro
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - HEADSCALE_OIDC_CLIENT_ID=${AUTH_CLIENT_ID}
      - HEADSCALE_OIDC_CLIENT_SECRET=${AUTH_CLIENT_SECRET}
      - HEADSCALE_OIDC_ISSUER=https://xxxxxx.${DOMAINNAME}/auth/realms/master
    networks:
      - traefik_proxy
    labels:
      # This label is absolutely necessary to help Headplane find Headscale.
      - "me.tale.headplane.target=headscale"
      - "traefik.enable=true"
      - "traefik.backend=headscale"
      - "traefik.frontend.rule=Host:xxxx.${DOMAINNAME}"
      - "traefik.port=8080"
      - "traefik.docker.network=traefik_proxy"


  headplane:
    image: "ghcr.io/tale/headplane:latest"
    container_name: "headplane"
    pull_policy: always
    restart: unless-stopped
    volumes:
      - ${USERDIR}/docker/headplane/config/config.yaml:/etc/headplane/config.yaml
      - ${USERDIR}/docker/headplane:/var/lib/headplane
      - ${USERDIR}/docker/headscale/config/config.yaml:/etc/headscale/config.yaml
      - ${USERDIR}/docker/headscale/config/extra_records.json:/var/lib/headscale/extra_records.json
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - HEADPLANE_LOAD_ENV_OVERRIDES=true
      - HEADPLANE_DEBUG_LOG=true
      - HEADPLANE_SERVER__COOKIE_SECRET=${HEADPLANE_COOKIE_SECRET}
      - HEADPLANE_HEADSCALE__PUBLIC_URL=https://xxxx.${DOMAINNAME}
      - HEADPLANE_INTEGRATION__AGENT_PRE_AUTHKEY=${HEADSCALE_PRE_AUTH_KEY}
      - HEADPLANE_OIDC__HEADSCALE_API_KEY=${HEADSCALE_API_KEY}
      - HEADPLANE_OIDC__ISSUER=https://xxxx.${DOMAINNAME}/auth/realms/master
      - HEADPLANE_OIDC__CLIENT_ID=${AUTH_CLIENT_ID}
      - HEADPLANE_OIDC__CLIENT_SECRET=${AUTH_CLIENT_SECRET}
      - HEADPLANE_OIDC__REDIRECT_URI=https://xxxx.${DOMAINNAME}/admin/oidc/callback
    networks:
      - traefik_proxy
    labels:
      - "traefik.enable=true"
      - "traefik.backend=headplane"
      - "traefik.frontend.rule=Host:xxxx.${DOMAINNAME}"
      - "traefik.port=3000"
      - "traefik.docker.network=traefik_proxy"

# Tailscale Client - Operating as Tailnet Exit-Node
  tailscale:
    image: tailscale/tailscale:latest
    hostname: tailscale
    container_name: tailscale
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://127.0.0.1:9002/healthz"]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 10s
    networks:
      - traefik_proxy
    cap_add:
      - net_admin
    sysctls: 
      - net.ipv4.ip_forward=1 
      - net.ipv6.conf.all.forwarding=1
    devices:
      - /dev/net/tun:/dev/net/tun
    volumes:
      - ${USERDIR}/docker/tailscale:/var/lib/tailscale
    environment:
      - TS_ENABLE_HEALTH_CHECK=true
      - TS_ACCEPT_DNS=true
      - TS_DEBUG_FIREWALL_MODE=auto
      - TS_USERSPACE=false
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_AUTHKEY=${TAILSCALE_AUTH_KEY:?err}
      - TS_EXTRA_ARGS=--hostname=exit-node --advertise-exit-node --advertise-routes=${LOCAL_SUBNET:?err},${DOCKER_SUBNET:?err} --login-server=https://xxxxx.${DOMAINNAME:?err}

Headscale config:

---
# headscale will look for a configuration file named `config.yaml` (or `config.json`) in the following order:
#
# - `/etc/headscale`
# - `~/.headscale`
# - current working directory

# The url clients will connect to.
# Typically this will be a domain like:
#
# https://myheadscale.example.com:443
#
server_url: https://xxx.example.com

# Address to listen to / bind to on the server
#
# For production:
#listen_addr: 127.0.0.1:8080
listen_addr: 0.0.0.0:8080

# Address to listen to /metrics and /debug, you may want
# to keep this endpoint private to your internal network
metrics_listen_addr: 127.0.0.1:9090

# Address to listen for gRPC.
# gRPC is used for controlling a headscale server
# remotely with the CLI
# Note: Remote access _only_ works if you have
# valid certificates.
#
# For production:
#grpc_listen_addr: 127.0.0.1:50443
grpc_listen_addr: 0.0.0.0:50443

# Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will
# be unencrypted. Only enable if you know what you
# are doing.
grpc_allow_insecure: false

# The Noise section includes specific configuration for the
# TS2021 Noise protocol
noise:
  # The Noise private key is used to encrypt the traffic between headscale and
  # Tailscale clients when using the new Noise-based protocol. A missing key
  # will be automatically generated.
  private_key_path: /var/lib/headscale/noise_private.key

# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
# It must be within IP ranges supported by the Tailscale
# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
# See below:
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# Any other range is NOT supported, and it will cause unexpected issues.
prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48

  # Strategy used for allocation of IPs to nodes, available options:
  # - sequential (default): assigns the next free IP from the previous given
  #   IP. A best-effort approach is used and Headscale might leave holes in the
  #   IP range or fill up existing holes in the IP range.
  # - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
  allocation: sequential

# DERP is a relay system that Tailscale uses when a direct
# connection cannot be established.
# https://tailscale.com/blog/how-tailscale-works/#encrypted-tcp-relays-derp
#
# headscale needs a list of DERP servers that can be presented
# to the clients.
derp:
  server:
    # If enabled, runs the embedded DERP server and merges it into the rest of the DERP config
    # The Headscale server_url defined above MUST be using https, DERP requires TLS to be in place
    enabled: false

    # Region ID to use for the embedded DERP server.
    # The local DERP prevails if the region ID collides with other region ID coming from
    # the regular DERP config.
    region_id: 999

    # Region code and name are displayed in the Tailscale UI to identify a DERP region
    region_code: "headscale"
    region_name: "Headscale Embedded DERP"

    # Only allow clients associated with this server access
    verify_clients: true

    # Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
    # When the embedded DERP server is enabled stun_listen_addr MUST be defined.
    #
    # For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
    stun_listen_addr: "0.0.0.0:3478"

    # Private key used to encrypt the traffic between headscale DERP and
    # Tailscale clients. A missing key will be automatically generated.
    private_key_path: /var/lib/headscale/derp_server_private.key

    # This flag can be used, so the DERP map entry for the embedded DERP server is not written automatically,
    # it enables the creation of your very own DERP map entry using a locally available file with the parameter DERP.paths
    # If you enable the DERP server and set this to false, it is required to add the DERP server to the DERP map using DERP.paths
    automatically_add_embedded_derp_region: true

    # For better connection stability (especially when using an Exit-Node and DNS is not working),
    # it is possible to optionally add the public IPv4 and IPv6 address to the Derp-Map using:
    ipv4: 198.51.100.1
    ipv6: 2001:db8::1

  # List of externally available DERP maps encoded in JSON
  urls:
    - https://controlplane.tailscale.com/derpmap/default

  # Locally available DERP map files encoded in YAML
  #
  # This option is mostly interesting for people hosting
  # their own DERP servers:
  # https://tailscale.com/kb/1118/custom-derp-servers/
  #
  # paths:
  #   - /etc/headscale/derp-example.yaml
  paths: []

  # If enabled, a worker will be set up to periodically
  # refresh the given sources and update the derpmap
  # will be set up.
  auto_update_enabled: true

  # How often should we check for DERP updates?
  update_frequency: 24h

# Disables the automatic check for headscale updates on startup
disable_check_updates: false

# Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m

database:
  # Database type. Available options: sqlite, postgres
  # Please note that using Postgres is highly discouraged as it is only supported for legacy reasons.
  # All new development, testing and optimisations are done with SQLite in mind.
  type: sqlite

  # Enable debug mode. This setting requires the log.level to be set to "debug" or "trace".
  debug: false

  # GORM configuration settings.
  gorm:
    # Enable prepared statements.
    prepare_stmt: true

    # Enable parameterized queries.
    parameterized_queries: true

    # Skip logging "record not found" errors.
    skip_err_record_not_found: true

    # Threshold for slow queries in milliseconds.
    slow_threshold: 1000

  # SQLite config
  sqlite:
    path: /var/lib/headscale/db.sqlite

    # Enable WAL mode for SQLite. This is recommended for production environments.
    # https://www.sqlite.org/wal.html
    write_ahead_log: true

    # Maximum number of WAL file frames before the WAL file is automatically checkpointed.
    # https://www.sqlite.org/c3ref/wal_autocheckpoint.html
    # Set to 0 to disable automatic checkpointing.
    wal_autocheckpoint: 1000

  # # Postgres config
  # Please note that using Postgres is highly discouraged as it is only supported for legacy reasons.
  # See database.type for more information.
  # postgres:
  #   # If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
  #   host: localhost
  #   port: 5432
  #   name: headscale
  #   user: foo
  #   pass: bar
  #   max_open_conns: 10
  #   max_idle_conns: 10
  #   conn_max_idle_time_secs: 3600

  #   # If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
  #   # in the 'ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
  #   ssl: false

### TLS configuration
#
## Let's encrypt / ACME
#
# headscale supports automatically requesting and setting up
# TLS for a domain with Let's Encrypt.
#
# URL to ACME directory
acme_url: https://acme-v02.api.letsencrypt.org/directory

# Email to register with ACME provider
acme_email: ""

# Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: ""

# Path to store certificates and metadata needed by
# letsencrypt
# For production:
tls_letsencrypt_cache_dir: /var/lib/headscale/cache

# Type of ACME challenge to use, currently supported types:
# HTTP-01 or TLS-ALPN-01
# See: docs/ref/tls.md for more information
tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a
# verification endpoint, and it will be listening on:
# :http = port 80
tls_letsencrypt_listen: ":http"

## Use already defined certificates:
tls_cert_path: ""
tls_key_path: ""

log:
  # Valid log levels: panic, fatal, error, warn, info, debug, trace
  level: info

  # Output formatting for logs: text or json
  format: text

## Policy
# headscale supports Tailscale's ACL policies.
# Please have a look to their KB to better
# understand the concepts: https://tailscale.com/kb/1018/acls/
policy:
  # The mode can be "file" or "database" that defines
  # where the ACL policies are stored and read from.
  mode: database
  # If the mode is set to "file", the path to a
  # HuJSON file containing ACL policies.
  path: ""

## DNS
#
# headscale supports Tailscale's DNS configuration and MagicDNS.
# Please have a look to their KB to better understand the concepts:
#
# - https://tailscale.com/kb/1054/dns/
# - https://tailscale.com/kb/1081/magicdns/
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
#
# Please note that for the DNS configuration to have any effect,
# clients must have the `--accept-dns=true` option enabled. This is the
# default for the Tailscale client. This option is enabled by default
# in the Tailscale client.
#
# Setting _any_ of the configuration and `--accept-dns=true` on the
# clients will integrate with the DNS manager on the client or
# overwrite /etc/resolv.conf.
# https://tailscale.com/kb/1235/resolv-conf
#
# If you want stop Headscale from managing the DNS configuration
# all the fields under `dns` should be set to empty values.
dns:
  # Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
  magic_dns: true

  # Defines the base domain to create the hostnames for MagicDNS.
  # This domain _must_ be different from the server_url domain.
  # `base_domain` must be a FQDN, without the trailing dot.
  # The FQDN of the hosts will be
  # `hostname.base_domain` (e.g., _myhost.example.com_).
  base_domain: tailnet

  # Whether to use the local DNS settings of a node or override the local DNS
  # settings (default) and force the use of Headscale's DNS configuration.
  override_local_dns: true

  # List of DNS servers to expose to clients.
  nameservers:
    global:
      - 1.1.1.1
      - 1.0.0.1
      - 2606:4700:4700::1111
      - 2606:4700:4700::1001

      # NextDNS (see https://tailscale.com/kb/1218/nextdns/).
      # "abc123" is example NextDNS ID, replace with yours.
      # - https://dns.nextdns.io/abc123

    # Split DNS (see https://tailscale.com/kb/1054/dns/),
    # a map of domains and which DNS server to use for each.
    split: {}
      # foo.bar.com:
      #   - 1.1.1.1
      # darp.headscale.net:
      #   - 1.1.1.1
      #   - 8.8.8.8

  # Set custom DNS search domains. With MagicDNS enabled,
  # your tailnet base_domain is always the first search domain.
  search_domains: []

  # Extra DNS records
  # so far only A and AAAA records are supported (on the tailscale side)
  # See: docs/ref/dns.md
  #extra_records: []
  #   - name: "grafana.myvpn.example.com"
  #     type: "A"
  #     value: "100.64.0.3"
  #
  #   # you can also put it in one line
  #   - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
  #
  # Alternatively, extra DNS records can be loaded from a JSON file.
  # Headscale processes this file on each change.
  extra_records_path: /var/lib/headscale/extra_records.json

# Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like:
unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: "0770"

# OpenID Connect
oidc:
#   # Block startup until the identity provider is available and healthy.
#   only_start_if_oidc_is_available: true
#
#   # OpenID Connect Issuer URL from the identity provider
   issuer: "https://your-oidc.issuer.com/path"
#
#   # Client ID from the identity provider
   client_id: "your-oidc-client-id"
#
#   # Client secret generated by the identity provider
#   # Note: client_secret and client_secret_path are mutually exclusive.
   client_secret: "your-oidc-client-secret"
#   # Alternatively, set `client_secret_path` to read the secret from the file.
#   # It resolves environment variables, making integration to systemd's
#   # `LoadCredential` straightforward:
#   client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
#
#   # The amount of time a node is authenticated with OpenID until it expires
#   # and needs to reauthenticate.
#   # Setting the value to "0" will mean no expiry.
#   expiry: 180d
#
#   # Use the expiry from the token received from OpenID when the user logged
#   # in. This will typically lead to frequent need to reauthenticate and should
#   # only be enabled if you know what you are doing.
#   # Note: enabling this will cause `oidc.expiry` to be ignored.
#   use_expiry_from_token: false
#
#   # The OIDC scopes to use, defaults to "openid", "profile" and "email".
#   # Custom scopes can be configured as needed, be sure to always include the
#   # required "openid" scope.
#   scope: ["openid", "profile", "email"]
#
#   # Provide custom key/value pairs which get sent to the identity provider's
#   # authorization endpoint.
#   extra_params:
#     domain_hint: example.com
#
#   # Only accept users whose email domain is part of the allowed_domains list.
#   allowed_domains:
#     - example.com
#
#   # Only accept users whose email address is part of the allowed_users list.
#   allowed_users:
#     - alice@example.com
#
#   # Only accept users which are members of at least one group in the
#   # allowed_groups list.
#   allowed_groups:
#     - /headscale
#
#   # Optional: PKCE (Proof Key for Code Exchange) configuration
#   # PKCE adds an additional layer of security to the OAuth 2.0 authorization code flow
#   # by preventing authorization code interception attacks
#   # See https://datatracker.ietf.org/doc/html/rfc7636
#   pkce:
#     # Enable or disable PKCE support (default: false)
#     enabled: false
#
#     # PKCE method to use:
#     # - plain: Use plain code verifier
#     # - S256: Use SHA256 hashed code verifier (default, recommended)
#     method: S256

# Logtail configuration
# Logtail is Tailscales logging and auditing infrastructure, it allows the
# control panel to instruct tailscale nodes to log their activity to a remote
# server. To disable logging on the client side, please refer to:
# https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging
logtail:
  # Enable logtail for tailscale nodes of this Headscale instance.
  # As there is currently no support for overriding the log server in Headscale, this is
  # disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
  enabled: false

# Enabling this option makes devices prefer a random port for WireGuard traffic over the
# default static port 41641. This option is intended as a workaround for some buggy
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
randomize_client_port: false

@kradalby commented on GitHub (Nov 12, 2025):

We need a bit more info: a description of your setup, config, etc. See https://headscale.net/stable/ref/debug/


@Haarolean commented on GitHub (Nov 22, 2025):

@kradalby I can confirm this is NOT fixed. What kind of details do you need? I don't use an external database, my ACLs are trivial, and my config is fairly standard.

client:

tailscale  | 2025/11/22 21:00:39 control: RegisterReq: onode= node=[B2XgJ] fup=false nks=false
...
tailscale  | 2025/11/22 21:00:39 control: RegisterReq: got response; nodeKeyExpired=false, machineAuthorized=false; authURL=false
tailscale  | 2025/11/22 21:00:39 health(warnable=login-state): error: You are logged out. The last login error was: authkey expired
tailscale  | 2025/11/22 21:00:39 Received error: authkey expired
tailscale  | backend error: authkey expired
tailscale  | boot: 2025/11/22 21:00:39 Sending SIGTERM to tailscaled
tailscale  | boot: 2025/11/22 21:00:39 failed to auth tailscale: failed to auth tailscale: tailscale up failed: exit status 1
tailscale exited with code 1

server (running 0.27.1):

178156d4f23618:/# headscale nodes list | grep server
68 | server           | xxxxxx             | [J39X0]    | [B2XgJ] | infra     | 100.64.0.12, fd7a:115c:a1e0::c  | false     | 2025-11-22 20:55:19 | 0001-01-01 00:00:00 | offline   | no

Also, please note that the other user experiencing this has replied with their config details above.

Could we reopen this?


@kradalby commented on GitHub (Nov 24, 2025):

@Haarolean, can you give me the list of pre-auth keys you have? Does this happen on hosts where the auth key still exists? Is still valid? Exists but is no longer valid?
The same info would be interesting from other people seeing this issue.

It is not helpful to withhold information and refer to what other people have posted. If we didn't manage to resolve it the first time, we likely need more.


@Haarolean commented on GitHub (Nov 24, 2025):

@kradalby while preparing a list of keys for you, I remembered that I've previously moved a node to a different user. Moving it back now made the node re-auth successfully.
So the question is: how is that supposed to work? Should preauth keys be retained rather than deleted after they've been used? Should we also be able to move keys between users, like nodes?

This is the key used by that problematic node:

```
178156d4f23618:/# headscale preauthkeys list -u 1
ID | Key                                              | Reusable | Ephemeral | Used  | Expiration          | Created             | Tags
6  | REDACTED                                         | true     | false     | true  | 2024-11-23 17:25:25 | 2024-10-24 17:25:25 |
```

> It is not helpful to not give more information

I believe I'm providing everything related without withholding :)
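
For reference, a minimal sketch of the sequence that triggers it for me (the server URL, node ID, user IDs, and key are placeholders; exact flags may vary between versions):

```
# 1. The node was originally registered under user 1 with one of user 1's preauth keys:
tailscale up --login-server https://headscale.example.com --authkey <key-of-user-1>

# 2. Move the node to another user:
headscale nodes move --identifier 68 --user 2

# 3. Any later re-login on the node now fails with "authkey expired".
#    Moving the node back to user 1 makes re-auth succeed again:
headscale nodes move --identifier 68 --user 1
```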


@kradalby commented on GitHub (Nov 24, 2025):

> Should preauth keys be retained rather than deleted after they've been used? Should we also be able to move keys between users, like nodes?

I think this is the bug. Some variant of a check I wrote seems to wrongly verify that:

- the auth key existed
- it was not expired
- it belonged to the same user

None of which is relevant after the first auth.

But I wanted to confirm it, as I am finding it hard to reproduce outside of this theory.
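
To make the theory concrete, here is a rough Go sketch of the kind of check described above. This is purely illustrative and not headscale's actual code; all types and names are made up:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical, simplified types; not headscale's real models.
type PreAuthKey struct {
	UserID     int
	Expiration time.Time
}

type Node struct {
	UserID  int
	AuthKey *PreAuthKey // key the node originally registered with, if retained
}

// canReauth sketches the suspected bug: the node's original preauth key is
// re-validated on *every* registration request, even though that only
// matters on first registration. If the key was deleted, has since expired,
// or the node was moved to another user, re-auth fails with "authkey expired".
func canReauth(node Node, now time.Time) error {
	if node.AuthKey == nil {
		return errors.New("authkey expired") // key deleted after use
	}
	if node.AuthKey.Expiration.Before(now) {
		return errors.New("authkey expired") // key naturally expired
	}
	if node.AuthKey.UserID != node.UserID {
		return errors.New("authkey expired") // node moved to another user
	}
	return nil
}

func main() {
	key := &PreAuthKey{UserID: 1, Expiration: time.Now().Add(24 * time.Hour)}
	moved := Node{UserID: 2, AuthKey: key}    // node moved away from the key's owner
	fmt.Println(canReauth(moved, time.Now())) // prints "authkey expired"
}
```

The fix, following the reasoning above, would be to run these checks only when the key is actually presented at first registration, not on later re-auths of an already-registered node.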


@Haarolean commented on GitHub (Nov 24, 2025):

Yeah, it seems that's it. Moving the node back and forth reproduces the issue. Is there anything else I could provide?


@kradalby commented on GitHub (Nov 25, 2025):

@Haarolean can you try: https://github.com/juanfont/headscale/pull/2917


@Haarolean commented on GitHub (Nov 25, 2025):

@kradalby sorry, I wasn't able to build your branch. I spent an hour trying to fix, build, and publish my fork and got stuck on the token missing some permissions: https://github.com/Haarolean/headscale/actions/runs/19682568607/job/56379983390


@kradalby commented on GitHub (Nov 30, 2025):

I've made an [rc.1 release for 0.27.2](https://github.com/juanfont/headscale/releases/tag/v0.27.2-rc.1) with fixes. It would be great if you could test it and then close this issue (or give feedback so I can).


@Haarolean commented on GitHub (Dec 1, 2025):

@kradalby it didn't help, it seems. Moving the node back to the original user (the owner of the preauth key) fixes the "expired" key, but that's it. I see some ACL changes in the changelog. Do you want me to upload my ACLs so you can take a look? I don't know if that's related.


@kradalby commented on GitHub (Dec 2, 2025):

Hmm, interesting. Is this preauth key in use? Can you delete it and see what happens?


@kradalby commented on GitHub (Dec 2, 2025):

In general, I would advise against using the `move` feature; it is broken, and it will be removed in the next release.

edit: for people seeing this, the correct way to change a node's user is to re-authenticate with the correct user or auth key.
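
Something along these lines (user and key names are placeholders; check the exact flags against your headscale and tailscale versions):

```
# Create a preauth key for the user the node should belong to:
headscale preauthkeys create --user <new-user> --expiration 1h

# On the node, re-authenticate against that user's key:
tailscale up --login-server https://headscale.example.com \
  --authkey <new-key> --force-reauth
```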


@Haarolean commented on GitHub (Dec 13, 2025):

@kradalby is there a bug _at all_ then? Given `move` is deprecated, it shouldn't be possible to move a node between users and expect it to keep working with the previous user's key. I think we should close the issue if that's the case.


@kradalby commented on GitHub (Dec 14, 2025):

Yes, I think that is reasonable. It will be "resolved" in 0.28, as `move` is removed.


@Haarolean commented on GitHub (Dec 14, 2025):

@kradalby thank you very much for your help and maintaining headscale ❤️
