[Bug] Reauthentication via OIDC Breaks Connectivity Until Manual Disconnect and Reconnect #959

Closed
opened 2025-12-29 02:26:43 +01:00 by adam · 3 comments

Originally created by @FlorinPeter on GitHub (Mar 2, 2025).

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After reauthenticating a Tailscale client via OIDC due to an expiring nodekey (within less than 24 hours), the client loses connectivity to other Tailscale clients and subnet routes. Interestingly, direct connections to Tailscale clients on the same local network remain functional. The nodekeys are updated correctly across all clients, as confirmed with tailscale status --json. The issue can be resolved by manually disconnecting and reconnecting the reauthenticated Tailscale client.
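
For reference, a minimal sketch of how key propagation can be spot-checked on each client (assuming `jq` is installed and the usual `Self`/`Peer` fields in the `tailscale status --json` output):

```
# Print this node's key and each peer's key as seen by this client
# (field names assume the usual `tailscale status --json` shape).
tailscale status --json \
  | jq '{self: .Self.PublicKey, peers: [.Peer[] | {host: .HostName, key: .PublicKey}]}'
```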

Expected Behavior

After reauthentication via OIDC, the Tailscale client should automatically refresh its connections and maintain seamless access to other Tailscale clients and subnet routes without requiring manual intervention.

Actual Behavior

Post-reauthentication, the client loses connectivity to other Tailscale clients and subnet routes (except for direct local network connections) until it is manually disconnected and reconnected.

Steps To Reproduce

  1. Set up a Tailscale network using headscale as the control server.
  2. Configure OIDC for authentication.
  3. Wait for a node’s nodekey to approach expiration (less than 24 hours remaining); a sketch for forcing expiry from the headscale side follows this list.
  4. On the Tailscale client, click the “reauthenticate” button, which redirects to the OIDC server for authentication.
  5. Complete the authentication process on the OIDC server.
  6. Attempt to connect to other Tailscale clients or access subnet routes.
  7. Observe that these connections fail, except for direct connections to clients on the same local network.
  8. Manually disconnect and reconnect the Tailscale client.
  9. Confirm that connectivity to all clients and subnet routes is restored.
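
Rather than waiting for natural expiry, the window in step 3 can likely be reproduced faster by expiring the node's key from the headscale side. A rough sketch, assuming the `headscale nodes expire` subcommand is available and using a placeholder node ID:

```
# Look up the node ID, then force-expire its key so the client prompts
# for reauthentication (run inside the headscale container if applicable;
# the node ID 1 below is only an example).
headscale nodes list
headscale nodes expire --identifier 1
```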

Environment

- OS: macOS 15.3.1
- Headscale version: 0.25.1
- Tailscale version: 1.80.2

Runtime environment

  • Headscale runs in a container

Anything else?

Additional Context

  • This issue occurs consistently following OIDC reauthentication.
  • The nodekeys are verified to update correctly after the process, suggesting the authentication itself succeeds.
  • Direct connections working while relayed connections fail might indicate an issue with the client’s state refresh or coordination with the control server after reauthentication.
  • I am uncertain whether this is specific to headscale or a broader Tailscale client issue.
  • I am not using headscale DERP

adam added the bug, OIDC, well described ❤️ labels 2025-12-29 02:26:43 +01:00
adam closed this issue 2025-12-29 02:26:43 +01:00

@kradalby commented on GitHub (Sep 10, 2025):

I'm having a hard time replicating this, which frustrates me. I'm sure it's a headscale issue; I think Tailscale would have had customers complain louder about it otherwise.

The thing from your description that puzzles me is that you confirm that the new keys are being propagated.

What would be helpful is the output of tailscale debug netmap on a node (the one being kicked off) and a peer of that node:

  • when it works
  • just after you have authenticated (and connections fail)
  • just after you have restarted the client

The logs from Headscale and those tailscale clients would also help.

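A rough way to collect the requested snapshots, run on both the affected node and one of its peers (the labels, file names, and the `headscale` container name below are assumptions, not part of the request):

```
# Capture the netmap and status at a labelled point in time.
capture() {
  label="$1"
  tailscale debug netmap  > "netmap-${label}-$(hostname).json"
  tailscale status --json > "status-${label}-$(hostname).json"
}

capture working          # 1. while connectivity still works
# ... click "reauthenticate" and complete the OIDC flow ...
capture after-reauth     # 2. just after authenticating, while connections fail
# ... disconnect and reconnect the Tailscale client ...
capture after-restart    # 3. just after restarting the client

# Headscale side (container name "headscale" is an assumption):
docker logs headscale --since 2h > headscale.log
```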

@JeanneD4RK commented on GitHub (Sep 24, 2025):

I was not able to replicate this issue with Azure OIDC (with a 12h TTL for the session); however, something strange happens to the latency from the moment I click to reauthenticate until the authentication is done:

```
Pinging XXXX [100.64.0.3] with 32 bytes of data:
Reply from 100.64.0.3: bytes=32 time=1ms TTL=64
Reply from 100.64.0.3: bytes=32 time=1ms TTL=64
---- Clicked tailscale icon to reauth
Reply from 100.64.0.3: bytes=32 time=1481ms TTL=57
Reply from 100.64.0.3: bytes=32 time=685ms TTL=57
Reply from 100.64.0.3: bytes=32 time=87ms TTL=57
Reply from 100.64.0.3: bytes=32 time=244ms TTL=57
Reply from 100.64.0.3: bytes=32 time=677ms TTL=57
Reply from 100.64.0.3: bytes=32 time=1268ms TTL=57
Reply from 100.64.0.3: bytes=32 time=1148ms TTL=57
Reply from 100.64.0.3: bytes=32 time=919ms TTL=57
Reply from 100.64.0.3: bytes=32 time=832ms TTL=57
Reply from 100.64.0.3: bytes=32 time=443ms TTL=57
Reply from 100.64.0.3: bytes=32 time=16ms TTL=57
Reply from 100.64.0.3: bytes=32 time=121ms TTL=57
Reply from 100.64.0.3: bytes=32 time=204ms TTL=64
---- Authentication done
Reply from 100.64.0.3: bytes=32 time=1ms TTL=64
Reply from 100.64.0.3: bytes=32 time=1ms TTL=64
Reply from 100.64.0.3: bytes=32 time=1ms TTL=64
```

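The TTL dropping from 64 to 57 during the reauthentication window looks like traffic temporarily taking a longer, relayed path. One way to confirm the path at each stage is `tailscale ping`, which reports whether replies come back directly or via a DERP relay (the address below is the peer from the trace above):

```
# Reports the path used, e.g. "via DERP(...)" versus a direct endpoint.
tailscale ping 100.64.0.3
```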

@kradalby commented on GitHub (Dec 20, 2025):

Can you give the new beta a go? I think it is fixed, and I'll close this for now.

Reference: starred/headscale#959