Compare commits


18 Commits

Author SHA1 Message Date
Kristoffer Dalby
c6d399a66c changelog: prep for 0.27.2 rc
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-30 19:10:56 +01:00
Kristoffer Dalby
4fe5cbe703 hscontrol/oidc: fix ACL policy not applied to new OIDC nodes (#2890)
Fixes #2888
Fixes #2896
2025-11-30 19:02:15 +01:00
Vitalij Dovhanyc
7e8cee6b10 chore: fix filterHash to work with autogroup:self in the acls (#2882) 2025-11-30 15:54:16 +01:00
Kristoffer Dalby
7f1631c4f1 auth: ensure machines are allowed in when pak changes (#2917) 2025-11-30 15:51:01 +01:00
Kristoffer Dalby
f658a8eacd mkdocs: 0.27.1
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-11 13:17:02 -06:00
Kristoffer Dalby
785168a7b8 changelog: prepare for 0.27.1
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-11 13:17:02 -06:00
Kristoffer Dalby
3bd4ecd9cd fix: preserve node expiry when tailscaled restarts
When tailscaled restarts, it sends RegisterRequest with Auth=nil and
Expiry=zero. Previously this was treated as a logout because
time.Time{}.Before(time.Now()) returns true.

Add early return in handleRegister() to detect this case and preserve
the existing node state without modification.

Fixes #2862
2025-11-11 12:47:48 -06:00
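A minimal standalone sketch of the failure mode this commit describes (illustrative, not headscale's actual code): the zero `time.Time` always compares as before now, so a restart that omits the expiry looks like a logout unless it is guarded first.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var expiry time.Time // zero value, as sent by a restarting tailscaled

	// The zero time is year 1, so Before(now) is always true, which the
	// old logout check misread as "this node's key has expired".
	fmt.Println(expiry.Before(time.Now())) // true

	// Guarding on the zero value first, as the fix does, lets a restart
	// preserve the existing node state instead of logging it out.
	if expiry.IsZero() {
		fmt.Println("restart: keep existing node state")
		return
	}
	fmt.Println("expiry in the past: treat as logout")
}
```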
Kristoffer Dalby
3455d1cb59 hscontrol/db: fix RenameUser to use Updates()
RenameUser only modifies the Name field, so it should use Updates() rather than Save().
2025-11-11 12:47:48 -06:00
Kristoffer Dalby
ddd31ba774 hscontrol: use Updates() instead of Save() for partial updates
Changed UpdateUser and re-registration flows to use Updates() which only
writes modified fields, preventing unintended overwrites of unchanged fields.

Also updated UsePreAuthKey to use Model().Update() for single field updates
and removed unused NodeSave wrapper.
2025-11-11 12:47:48 -06:00
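A hedged sketch of the `Save()`/`Updates()` distinction these commits rely on, using illustrative models rather than headscale's actual ones:

```go
package main

import "gorm.io/gorm"

// User and PreAuthKey are illustrative stand-ins, not headscale's models.
type User struct {
	ID    uint
	Name  string
	Email string
}

type PreAuthKey struct {
	ID   uint64
	Used bool
}

// renameUser writes only the Name column. Save() would write ALL fields,
// so a zero-valued Email on the struct would silently blank the stored
// one; Updates() with a struct writes only the non-zero fields.
func renameUser(db *gorm.DB, u *User, newName string) error {
	return db.Model(u).Updates(User{Name: newName}).Error
}

// markUsed flips a single known column; Model().Update() makes that
// intent explicit, matching the UsePreAuthKey change described above.
func markUsed(db *gorm.DB, id uint64) error {
	return db.Model(&PreAuthKey{ID: id}).Update("used", true).Error
}
```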
Kristoffer Dalby
4a8dc2d445 hscontrol/state,db: preserve node expiry on MapRequest updates
Fixes a regression introduced in v0.27.0 where node expiry times were
being reset to zero when tailscaled restarts and sends a MapRequest.

The issue was caused by using GORM's Save() method in persistNodeToDB(),
which overwrites ALL fields including zero values. When a MapRequest
updates a node (without including expiry information), Save() would
overwrite the database expiry field with a zero value.

Changed to use Updates() which only updates non-zero values, preserving
existing database values when struct pointer fields are nil.

In BackfillNodeIPs, we need to explicitly update IPv4/IPv6 fields even
when nil (to remove IPs), so we use Select() to specify those fields.

Added regression test that validates expiry is preserved after MapRequest.

Fixes #2862
2025-11-11 12:47:48 -06:00
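A sketch of the `Updates()` versus `Select()` behaviour the commit above depends on, again with an illustrative model:

```go
package main

import (
	"time"

	"gorm.io/gorm"
)

// Node is illustrative; headscale's real model differs.
type Node struct {
	ID     uint64
	IPv4   *string
	IPv6   *string
	Expiry *time.Time
}

// persistNode mirrors the persistNodeToDB change: Updates() skips
// zero/nil fields, so a MapRequest-driven update that carries no expiry
// leaves the stored expiry untouched instead of zeroing it as Save() did.
func persistNode(db *gorm.DB, n *Node) error {
	return db.Model(n).Updates(n).Error
}

// backfillIPs mirrors the BackfillNodeIPs change: Select() forces the
// named columns to be written even when the struct fields are nil,
// which is how an assigned IP is actually removed.
func backfillIPs(db *gorm.DB, n *Node) error {
	return db.Model(n).Select("ipv4", "ipv6").Updates(n).Error
}
```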
Kristoffer Dalby
773a46a968 integration: add test to replicate #2862
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-11 12:47:48 -06:00
Kristoffer Dalby
4728a2ba9e hscontrol/state: allow expired auth keys for node re-registration
Skip auth key validation for existing nodes re-registering with the same
NodeKey. Pre-auth keys are only required for initial authentication.

NodeKey rotation still requires a valid auth key as it is a security-sensitive
operation that changes the node's cryptographic identity.

Fixes #2830
2025-11-11 05:12:59 -06:00
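The rule this commit implements, reduced to a schematic gate (the names here are illustrative, not headscale's API):

```go
package sketch

// mayRegister summarises the auth-key rule described above: an existing
// node re-registering with its current NodeKey is re-authenticating the
// same device, so no key check is needed; initial registration and
// NodeKey rotation both change identity and still require a valid key.
func mayRegister(existingNode, sameNodeKey, validAuthKey bool) bool {
	if existingNode && sameNodeKey {
		return true
	}
	return validAuthKey
}
```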
Florian Preinstorfer
abed534628 Document how to restrict access to exit nodes per user/group
Updates: #2855
Ref: #2784
2025-11-11 11:51:35 +01:00
Kristoffer Dalby
21e3f2598d policy: fix issue where non-existent user results in empty SSH policy
When we encountered a source we could not resolve, we skipped the whole rule,
even if some of the other sources could be resolved, for instance when one
referenced user exists and one does not.

In the regular policy, we log this and still let a rule be created from what
does exist, while in the SSH policy we did not.

This commit fixes the SSH policy so the behaviour is the same.

Fixes #2863

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-10 20:34:12 +01:00
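A sketch of the "log and continue" semantics this commit brings to the SSH policy (the resolver signature is hypothetical):

```go
package sketch

// resolveSources builds a rule from every source that resolves, skipping
// the ones that do not (for example a non-existent user). Previously the
// SSH path gave up on the first failure, emptying the whole policy.
func resolveSources(srcs []string, resolve func(string) ([]string, error)) []string {
	var resolved []string
	for _, src := range srcs {
		ips, err := resolve(src)
		if err != nil {
			// Log and keep going, as the regular policy already did.
			continue
		}
		resolved = append(resolved, ips...)
	}
	return resolved
}
```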
Kristoffer Dalby
a28d9bed6d policy: reproduce 2863 in test
Reproduce that the SSH policy ends up empty if a referenced user does not exist.

Updates #2863

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-10 20:34:12 +01:00
Kristoffer Dalby
28faf8cd71 db: add defensive removal of old indices
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-10 20:07:29 +01:00
Kristoffer Dalby
5a2ee0c391 db: add comment about removing migrations
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2025-11-10 17:32:39 +01:00
Andrey Bobelev
5cd15c3656 fix: make state cookies valid when client uses multiple login URLs
On Windows, if the user clicks the Tailscale icon in the system tray,
it opens a login URL in the browser.

When the login URL is opened, `state/nonce` cookies are set for that particular URL.

If the user clicks the icon again, a new login URL is opened in the browser,
and new cookies are set.

If the user proceeds with auth in the first tab,
the redirect results in a "state did not match" error.

This patch ensures that each opened login URL sets an individual cookie
that remains valid on the `/oidc/callback` page.

`TestOIDCMultipleOpenedLoginUrls` illustrates and tests this behavior.
2025-11-10 16:27:46 +01:00
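A hypothetical sketch of the idea behind the fix, assuming the cookie name is derived from the per-login `state` value (headscale's exact scheme may differ):

```go
package sketch

import "net/http"

// setStateCookie keys each cookie by the login attempt's state value
// instead of a single shared "state" name, so opening a second login
// URL cannot clobber the cookie set by the first tab.
func setStateCookie(w http.ResponseWriter, state string) {
	http.SetCookie(w, &http.Cookie{
		Name:     "state_" + state, // unique per opened login URL
		Value:    state,
		Path:     "/oidc/callback", // remains valid on the callback page
		HttpOnly: true,
		Secure:   true,
		SameSite: http.SameSiteLaxMode,
	})
}
```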
40 changed files with 2696 additions and 295 deletions

View File

@@ -5,8 +5,6 @@ on:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}

View File

@@ -32,13 +32,19 @@ jobs:
- TestAuthKeyLogoutAndReloginSameUser
- TestAuthKeyLogoutAndReloginNewUser
- TestAuthKeyLogoutAndReloginSameUserExpiredKey
- TestAuthKeyDeleteKey
- TestAuthKeyLogoutAndReloginRoutesPreserved
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestOIDC024UserCreation
- TestOIDCAuthenticationWithPKCE
- TestOIDCReloginSameNodeNewUser
- TestOIDCFollowUpUrl
- TestOIDCMultipleOpenedLoginUrls
- TestOIDCReloginSameNodeSameUser
- TestOIDCExpiryAfterRestart
- TestOIDCACLPolicyOnJoin
- TestOIDCReloginSameUserRoutesPreserved
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndReloginSameUser
- TestAuthWebFlowLogoutAndReloginNewUser

View File

@@ -4,8 +4,48 @@
### Changes
## 0.27.2 (2025-xx-xx)
### Changes
- Fix ACL policy not applied to new OIDC nodes until client restart
[#2890](https://github.com/juanfont/headscale/pull/2890)
- Fix autogroup:self preventing visibility of nodes matched by other ACL rules
[#2882](https://github.com/juanfont/headscale/pull/2882)
- Fix nodes being rejected after pre-authentication key expiration
[#2917](https://github.com/juanfont/headscale/pull/2917)
## 0.27.1 (2025-11-11)
**Minimum supported Tailscale client version: v1.64.0**
### Changes
- Expire nodes with a custom timestamp
[#2828](https://github.com/juanfont/headscale/pull/2828)
- Fix issue where node expiry was reset when tailscaled restarts
[#2875](https://github.com/juanfont/headscale/pull/2875)
- Fix OIDC authentication when multiple login URLs are opened
[#2861](https://github.com/juanfont/headscale/pull/2861)
- Fix node re-registration failing with expired auth keys
[#2859](https://github.com/juanfont/headscale/pull/2859)
- Remove old unused database tables and indices
[#2844](https://github.com/juanfont/headscale/pull/2844)
[#2872](https://github.com/juanfont/headscale/pull/2872)
- Ignore litestream tables during database validation
[#2843](https://github.com/juanfont/headscale/pull/2843)
- Fix exit node visibility to respect ACL rules
[#2855](https://github.com/juanfont/headscale/pull/2855)
- Fix SSH policy becoming empty when unknown user is referenced
[#2874](https://github.com/juanfont/headscale/pull/2874)
- Fix policy validation when using bypass-grpc mode
[#2854](https://github.com/juanfont/headscale/pull/2854)
- Fix autogroup:self interaction with other ACL rules
[#2842](https://github.com/juanfont/headscale/pull/2842)
- Fix flaky DERP map shuffle test
[#2848](https://github.com/juanfont/headscale/pull/2848)
- Use current stable base images for Debian and Alpine containers
[#2827](https://github.com/juanfont/headscale/pull/2827)
## 0.27.0 (2025-10-27)
@@ -89,7 +129,8 @@ the code base over time and make it more correct and efficient.
[#2692](https://github.com/juanfont/headscale/pull/2692)
- Policy: Zero or empty destination port is no longer allowed
[#2606](https://github.com/juanfont/headscale/pull/2606)
- Stricter hostname validation [#2383](https://github.com/juanfont/headscale/pull/2383)
- Stricter hostname validation
[#2383](https://github.com/juanfont/headscale/pull/2383)
- Hostnames must be valid DNS labels (2-63 characters, alphanumeric and
hyphens only, cannot start/end with hyphen)
- **Client Registration (New Nodes)**: Invalid hostnames are automatically
@@ -144,7 +185,8 @@ the code base over time and make it more correct and efficient.
[#2776](https://github.com/juanfont/headscale/pull/2776)
- EXPERIMENTAL: Add support for `autogroup:self`
[#2789](https://github.com/juanfont/headscale/pull/2789)
- Add healthcheck command [#2659](https://github.com/juanfont/headscale/pull/2659)
- Add healthcheck command
[#2659](https://github.com/juanfont/headscale/pull/2659)
## 0.26.1 (2025-06-06)

View File

@@ -34,6 +34,7 @@ func init() {
preauthkeysCmd.AddCommand(listPreAuthKeys)
preauthkeysCmd.AddCommand(createPreAuthKeyCmd)
preauthkeysCmd.AddCommand(expirePreAuthKeyCmd)
preauthkeysCmd.AddCommand(deletePreAuthKeyCmd)
createPreAuthKeyCmd.PersistentFlags().
Bool("reusable", false, "Make the preauthkey reusable")
createPreAuthKeyCmd.PersistentFlags().
@@ -232,3 +233,43 @@ var expirePreAuthKeyCmd = &cobra.Command{
SuccessOutput(response, "Key expired", output)
},
}
var deletePreAuthKeyCmd = &cobra.Command{
Use: "delete KEY",
Short: "Delete a preauthkey",
Aliases: []string{"del", "rm", "d"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetUint64("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.DeletePreAuthKeyRequest{
User: user,
Key: args[0],
}
response, err := client.DeletePreAuthKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Pre Auth Key: %s\n", err),
output,
)
}
SuccessOutput(response, "Key deleted", output)
},
}

View File

@@ -202,6 +202,18 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC
fmt.Sprintf("HEADSCALE_INTEGRATION_POSTGRES=%d", boolToInt(config.UsePostgres)),
"HEADSCALE_INTEGRATION_RUN_ID=" + runID,
}
// Pass through all HEADSCALE_INTEGRATION_* environment variables
for _, e := range os.Environ() {
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_") {
// Skip the ones we already set explicitly
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_POSTGRES=") ||
strings.HasPrefix(e, "HEADSCALE_INTEGRATION_RUN_ID=") {
continue
}
env = append(env, e)
}
}
containerConfig := &container.Config{
Image: "golang:" + config.GoVersion,
Cmd: goTestCmd,

View File

@@ -216,6 +216,39 @@ nodes.
}
```
### Restrict access to exit nodes per user or group
A user can use _any_ of the available exit nodes with `autogroup:internet`. Alternatively, the ACL snippet below assigns
each user a specific exit node while hiding all other exit nodes. The user `alice` can only use exit node `exit1` while
user `bob` can only use exit node `exit2`.
```json title="Assign each user a dedicated exit node"
{
"hosts": {
"exit1": "100.64.0.1/32",
"exit2": "100.64.0.2/32"
},
"acls": [
{
"action": "accept",
"src": ["alice@"],
"dst": ["exit1:*"]
},
{
"action": "accept",
"src": ["bob@"],
"dst": ["exit2:*"]
}
]
}
```
!!! warning
- The above implementation is Headscale-specific and will likely be removed once [support for
`via`](https://github.com/juanfont/headscale/issues/2409) is available.
- Beware that a user can also connect to any port of the exit node itself.
### Automatically approve an exit node with auto approvers
The initial setup of an exit node usually requires manual approval on the control server before it can be used by a node

View File

@@ -109,7 +109,7 @@ const file_headscale_v1_headscale_proto_rawDesc = "" +
"\x1cheadscale/v1/headscale.proto\x12\fheadscale.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17headscale/v1/user.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/node.proto\x1a\x19headscale/v1/apikey.proto\x1a\x19headscale/v1/policy.proto\"\x0f\n" +
"\rHealthRequest\"E\n" +
"\x0eHealthResponse\x123\n" +
"\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\x80\x17\n" +
"\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\xff\x17\n" +
"\x10HeadscaleService\x12h\n" +
"\n" +
"CreateUser\x12\x1f.headscale.v1.CreateUserRequest\x1a .headscale.v1.CreateUserResponse\"\x17\x82\xd3\xe4\x93\x02\x11:\x01*\"\f/api/v1/user\x12\x80\x01\n" +
@@ -119,7 +119,8 @@ const file_headscale_v1_headscale_proto_rawDesc = "" +
"DeleteUser\x12\x1f.headscale.v1.DeleteUserRequest\x1a .headscale.v1.DeleteUserResponse\"\x19\x82\xd3\xe4\x93\x02\x13*\x11/api/v1/user/{id}\x12b\n" +
"\tListUsers\x12\x1e.headscale.v1.ListUsersRequest\x1a\x1f.headscale.v1.ListUsersResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/user\x12\x80\x01\n" +
"\x10CreatePreAuthKey\x12%.headscale.v1.CreatePreAuthKeyRequest\x1a&.headscale.v1.CreatePreAuthKeyResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/preauthkey\x12\x87\x01\n" +
"\x10ExpirePreAuthKey\x12%.headscale.v1.ExpirePreAuthKeyRequest\x1a&.headscale.v1.ExpirePreAuthKeyResponse\"$\x82\xd3\xe4\x93\x02\x1e:\x01*\"\x19/api/v1/preauthkey/expire\x12z\n" +
"\x10ExpirePreAuthKey\x12%.headscale.v1.ExpirePreAuthKeyRequest\x1a&.headscale.v1.ExpirePreAuthKeyResponse\"$\x82\xd3\xe4\x93\x02\x1e:\x01*\"\x19/api/v1/preauthkey/expire\x12}\n" +
"\x10DeletePreAuthKey\x12%.headscale.v1.DeletePreAuthKeyRequest\x1a&.headscale.v1.DeletePreAuthKeyResponse\"\x1a\x82\xd3\xe4\x93\x02\x14*\x12/api/v1/preauthkey\x12z\n" +
"\x0fListPreAuthKeys\x12$.headscale.v1.ListPreAuthKeysRequest\x1a%.headscale.v1.ListPreAuthKeysResponse\"\x1a\x82\xd3\xe4\x93\x02\x14\x12\x12/api/v1/preauthkey\x12}\n" +
"\x0fDebugCreateNode\x12$.headscale.v1.DebugCreateNodeRequest\x1a%.headscale.v1.DebugCreateNodeResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/debug/node\x12f\n" +
"\aGetNode\x12\x1c.headscale.v1.GetNodeRequest\x1a\x1d.headscale.v1.GetNodeResponse\"\x1e\x82\xd3\xe4\x93\x02\x18\x12\x16/api/v1/node/{node_id}\x12n\n" +
@@ -165,48 +166,50 @@ var file_headscale_v1_headscale_proto_goTypes = []any{
(*ListUsersRequest)(nil), // 5: headscale.v1.ListUsersRequest
(*CreatePreAuthKeyRequest)(nil), // 6: headscale.v1.CreatePreAuthKeyRequest
(*ExpirePreAuthKeyRequest)(nil), // 7: headscale.v1.ExpirePreAuthKeyRequest
(*ListPreAuthKeysRequest)(nil), // 8: headscale.v1.ListPreAuthKeysRequest
(*DebugCreateNodeRequest)(nil), // 9: headscale.v1.DebugCreateNodeRequest
(*GetNodeRequest)(nil), // 10: headscale.v1.GetNodeRequest
(*SetTagsRequest)(nil), // 11: headscale.v1.SetTagsRequest
(*SetApprovedRoutesRequest)(nil), // 12: headscale.v1.SetApprovedRoutesRequest
(*RegisterNodeRequest)(nil), // 13: headscale.v1.RegisterNodeRequest
(*DeleteNodeRequest)(nil), // 14: headscale.v1.DeleteNodeRequest
(*ExpireNodeRequest)(nil), // 15: headscale.v1.ExpireNodeRequest
(*RenameNodeRequest)(nil), // 16: headscale.v1.RenameNodeRequest
(*ListNodesRequest)(nil), // 17: headscale.v1.ListNodesRequest
(*MoveNodeRequest)(nil), // 18: headscale.v1.MoveNodeRequest
(*BackfillNodeIPsRequest)(nil), // 19: headscale.v1.BackfillNodeIPsRequest
(*CreateApiKeyRequest)(nil), // 20: headscale.v1.CreateApiKeyRequest
(*ExpireApiKeyRequest)(nil), // 21: headscale.v1.ExpireApiKeyRequest
(*ListApiKeysRequest)(nil), // 22: headscale.v1.ListApiKeysRequest
(*DeleteApiKeyRequest)(nil), // 23: headscale.v1.DeleteApiKeyRequest
(*GetPolicyRequest)(nil), // 24: headscale.v1.GetPolicyRequest
(*SetPolicyRequest)(nil), // 25: headscale.v1.SetPolicyRequest
(*CreateUserResponse)(nil), // 26: headscale.v1.CreateUserResponse
(*RenameUserResponse)(nil), // 27: headscale.v1.RenameUserResponse
(*DeleteUserResponse)(nil), // 28: headscale.v1.DeleteUserResponse
(*ListUsersResponse)(nil), // 29: headscale.v1.ListUsersResponse
(*CreatePreAuthKeyResponse)(nil), // 30: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyResponse)(nil), // 31: headscale.v1.ExpirePreAuthKeyResponse
(*ListPreAuthKeysResponse)(nil), // 32: headscale.v1.ListPreAuthKeysResponse
(*DebugCreateNodeResponse)(nil), // 33: headscale.v1.DebugCreateNodeResponse
(*GetNodeResponse)(nil), // 34: headscale.v1.GetNodeResponse
(*SetTagsResponse)(nil), // 35: headscale.v1.SetTagsResponse
(*SetApprovedRoutesResponse)(nil), // 36: headscale.v1.SetApprovedRoutesResponse
(*RegisterNodeResponse)(nil), // 37: headscale.v1.RegisterNodeResponse
(*DeleteNodeResponse)(nil), // 38: headscale.v1.DeleteNodeResponse
(*ExpireNodeResponse)(nil), // 39: headscale.v1.ExpireNodeResponse
(*RenameNodeResponse)(nil), // 40: headscale.v1.RenameNodeResponse
(*ListNodesResponse)(nil), // 41: headscale.v1.ListNodesResponse
(*MoveNodeResponse)(nil), // 42: headscale.v1.MoveNodeResponse
(*BackfillNodeIPsResponse)(nil), // 43: headscale.v1.BackfillNodeIPsResponse
(*CreateApiKeyResponse)(nil), // 44: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyResponse)(nil), // 45: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysResponse)(nil), // 46: headscale.v1.ListApiKeysResponse
(*DeleteApiKeyResponse)(nil), // 47: headscale.v1.DeleteApiKeyResponse
(*GetPolicyResponse)(nil), // 48: headscale.v1.GetPolicyResponse
(*SetPolicyResponse)(nil), // 49: headscale.v1.SetPolicyResponse
(*DeletePreAuthKeyRequest)(nil), // 8: headscale.v1.DeletePreAuthKeyRequest
(*ListPreAuthKeysRequest)(nil), // 9: headscale.v1.ListPreAuthKeysRequest
(*DebugCreateNodeRequest)(nil), // 10: headscale.v1.DebugCreateNodeRequest
(*GetNodeRequest)(nil), // 11: headscale.v1.GetNodeRequest
(*SetTagsRequest)(nil), // 12: headscale.v1.SetTagsRequest
(*SetApprovedRoutesRequest)(nil), // 13: headscale.v1.SetApprovedRoutesRequest
(*RegisterNodeRequest)(nil), // 14: headscale.v1.RegisterNodeRequest
(*DeleteNodeRequest)(nil), // 15: headscale.v1.DeleteNodeRequest
(*ExpireNodeRequest)(nil), // 16: headscale.v1.ExpireNodeRequest
(*RenameNodeRequest)(nil), // 17: headscale.v1.RenameNodeRequest
(*ListNodesRequest)(nil), // 18: headscale.v1.ListNodesRequest
(*MoveNodeRequest)(nil), // 19: headscale.v1.MoveNodeRequest
(*BackfillNodeIPsRequest)(nil), // 20: headscale.v1.BackfillNodeIPsRequest
(*CreateApiKeyRequest)(nil), // 21: headscale.v1.CreateApiKeyRequest
(*ExpireApiKeyRequest)(nil), // 22: headscale.v1.ExpireApiKeyRequest
(*ListApiKeysRequest)(nil), // 23: headscale.v1.ListApiKeysRequest
(*DeleteApiKeyRequest)(nil), // 24: headscale.v1.DeleteApiKeyRequest
(*GetPolicyRequest)(nil), // 25: headscale.v1.GetPolicyRequest
(*SetPolicyRequest)(nil), // 26: headscale.v1.SetPolicyRequest
(*CreateUserResponse)(nil), // 27: headscale.v1.CreateUserResponse
(*RenameUserResponse)(nil), // 28: headscale.v1.RenameUserResponse
(*DeleteUserResponse)(nil), // 29: headscale.v1.DeleteUserResponse
(*ListUsersResponse)(nil), // 30: headscale.v1.ListUsersResponse
(*CreatePreAuthKeyResponse)(nil), // 31: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyResponse)(nil), // 32: headscale.v1.ExpirePreAuthKeyResponse
(*DeletePreAuthKeyResponse)(nil), // 33: headscale.v1.DeletePreAuthKeyResponse
(*ListPreAuthKeysResponse)(nil), // 34: headscale.v1.ListPreAuthKeysResponse
(*DebugCreateNodeResponse)(nil), // 35: headscale.v1.DebugCreateNodeResponse
(*GetNodeResponse)(nil), // 36: headscale.v1.GetNodeResponse
(*SetTagsResponse)(nil), // 37: headscale.v1.SetTagsResponse
(*SetApprovedRoutesResponse)(nil), // 38: headscale.v1.SetApprovedRoutesResponse
(*RegisterNodeResponse)(nil), // 39: headscale.v1.RegisterNodeResponse
(*DeleteNodeResponse)(nil), // 40: headscale.v1.DeleteNodeResponse
(*ExpireNodeResponse)(nil), // 41: headscale.v1.ExpireNodeResponse
(*RenameNodeResponse)(nil), // 42: headscale.v1.RenameNodeResponse
(*ListNodesResponse)(nil), // 43: headscale.v1.ListNodesResponse
(*MoveNodeResponse)(nil), // 44: headscale.v1.MoveNodeResponse
(*BackfillNodeIPsResponse)(nil), // 45: headscale.v1.BackfillNodeIPsResponse
(*CreateApiKeyResponse)(nil), // 46: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyResponse)(nil), // 47: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysResponse)(nil), // 48: headscale.v1.ListApiKeysResponse
(*DeleteApiKeyResponse)(nil), // 49: headscale.v1.DeleteApiKeyResponse
(*GetPolicyResponse)(nil), // 50: headscale.v1.GetPolicyResponse
(*SetPolicyResponse)(nil), // 51: headscale.v1.SetPolicyResponse
}
var file_headscale_v1_headscale_proto_depIdxs = []int32{
2, // 0: headscale.v1.HeadscaleService.CreateUser:input_type -> headscale.v1.CreateUserRequest
@@ -215,52 +218,54 @@ var file_headscale_v1_headscale_proto_depIdxs = []int32{
5, // 3: headscale.v1.HeadscaleService.ListUsers:input_type -> headscale.v1.ListUsersRequest
6, // 4: headscale.v1.HeadscaleService.CreatePreAuthKey:input_type -> headscale.v1.CreatePreAuthKeyRequest
7, // 5: headscale.v1.HeadscaleService.ExpirePreAuthKey:input_type -> headscale.v1.ExpirePreAuthKeyRequest
8, // 6: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest
9, // 7: headscale.v1.HeadscaleService.DebugCreateNode:input_type -> headscale.v1.DebugCreateNodeRequest
10, // 8: headscale.v1.HeadscaleService.GetNode:input_type -> headscale.v1.GetNodeRequest
11, // 9: headscale.v1.HeadscaleService.SetTags:input_type -> headscale.v1.SetTagsRequest
12, // 10: headscale.v1.HeadscaleService.SetApprovedRoutes:input_type -> headscale.v1.SetApprovedRoutesRequest
13, // 11: headscale.v1.HeadscaleService.RegisterNode:input_type -> headscale.v1.RegisterNodeRequest
14, // 12: headscale.v1.HeadscaleService.DeleteNode:input_type -> headscale.v1.DeleteNodeRequest
15, // 13: headscale.v1.HeadscaleService.ExpireNode:input_type -> headscale.v1.ExpireNodeRequest
16, // 14: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest
17, // 15: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest
18, // 16: headscale.v1.HeadscaleService.MoveNode:input_type -> headscale.v1.MoveNodeRequest
19, // 17: headscale.v1.HeadscaleService.BackfillNodeIPs:input_type -> headscale.v1.BackfillNodeIPsRequest
20, // 18: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
21, // 19: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
22, // 20: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
23, // 21: headscale.v1.HeadscaleService.DeleteApiKey:input_type -> headscale.v1.DeleteApiKeyRequest
24, // 22: headscale.v1.HeadscaleService.GetPolicy:input_type -> headscale.v1.GetPolicyRequest
25, // 23: headscale.v1.HeadscaleService.SetPolicy:input_type -> headscale.v1.SetPolicyRequest
0, // 24: headscale.v1.HeadscaleService.Health:input_type -> headscale.v1.HealthRequest
26, // 25: headscale.v1.HeadscaleService.CreateUser:output_type -> headscale.v1.CreateUserResponse
27, // 26: headscale.v1.HeadscaleService.RenameUser:output_type -> headscale.v1.RenameUserResponse
28, // 27: headscale.v1.HeadscaleService.DeleteUser:output_type -> headscale.v1.DeleteUserResponse
29, // 28: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse
30, // 29: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
31, // 30: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
32, // 31: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
33, // 32: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse
34, // 33: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse
35, // 34: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse
36, // 35: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse
37, // 36: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse
38, // 37: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse
39, // 38: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse
40, // 39: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse
41, // 40: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse
42, // 41: headscale.v1.HeadscaleService.MoveNode:output_type -> headscale.v1.MoveNodeResponse
43, // 42: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse
44, // 43: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
45, // 44: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
46, // 45: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
47, // 46: headscale.v1.HeadscaleService.DeleteApiKey:output_type -> headscale.v1.DeleteApiKeyResponse
48, // 47: headscale.v1.HeadscaleService.GetPolicy:output_type -> headscale.v1.GetPolicyResponse
49, // 48: headscale.v1.HeadscaleService.SetPolicy:output_type -> headscale.v1.SetPolicyResponse
1, // 49: headscale.v1.HeadscaleService.Health:output_type -> headscale.v1.HealthResponse
25, // [25:50] is the sub-list for method output_type
0, // [0:25] is the sub-list for method input_type
8, // 6: headscale.v1.HeadscaleService.DeletePreAuthKey:input_type -> headscale.v1.DeletePreAuthKeyRequest
9, // 7: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest
10, // 8: headscale.v1.HeadscaleService.DebugCreateNode:input_type -> headscale.v1.DebugCreateNodeRequest
11, // 9: headscale.v1.HeadscaleService.GetNode:input_type -> headscale.v1.GetNodeRequest
12, // 10: headscale.v1.HeadscaleService.SetTags:input_type -> headscale.v1.SetTagsRequest
13, // 11: headscale.v1.HeadscaleService.SetApprovedRoutes:input_type -> headscale.v1.SetApprovedRoutesRequest
14, // 12: headscale.v1.HeadscaleService.RegisterNode:input_type -> headscale.v1.RegisterNodeRequest
15, // 13: headscale.v1.HeadscaleService.DeleteNode:input_type -> headscale.v1.DeleteNodeRequest
16, // 14: headscale.v1.HeadscaleService.ExpireNode:input_type -> headscale.v1.ExpireNodeRequest
17, // 15: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest
18, // 16: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest
19, // 17: headscale.v1.HeadscaleService.MoveNode:input_type -> headscale.v1.MoveNodeRequest
20, // 18: headscale.v1.HeadscaleService.BackfillNodeIPs:input_type -> headscale.v1.BackfillNodeIPsRequest
21, // 19: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
22, // 20: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
23, // 21: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
24, // 22: headscale.v1.HeadscaleService.DeleteApiKey:input_type -> headscale.v1.DeleteApiKeyRequest
25, // 23: headscale.v1.HeadscaleService.GetPolicy:input_type -> headscale.v1.GetPolicyRequest
26, // 24: headscale.v1.HeadscaleService.SetPolicy:input_type -> headscale.v1.SetPolicyRequest
0, // 25: headscale.v1.HeadscaleService.Health:input_type -> headscale.v1.HealthRequest
27, // 26: headscale.v1.HeadscaleService.CreateUser:output_type -> headscale.v1.CreateUserResponse
28, // 27: headscale.v1.HeadscaleService.RenameUser:output_type -> headscale.v1.RenameUserResponse
29, // 28: headscale.v1.HeadscaleService.DeleteUser:output_type -> headscale.v1.DeleteUserResponse
30, // 29: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse
31, // 30: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
32, // 31: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
33, // 32: headscale.v1.HeadscaleService.DeletePreAuthKey:output_type -> headscale.v1.DeletePreAuthKeyResponse
34, // 33: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
35, // 34: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse
36, // 35: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse
37, // 36: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse
38, // 37: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse
39, // 38: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse
40, // 39: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse
41, // 40: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse
42, // 41: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse
43, // 42: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse
44, // 43: headscale.v1.HeadscaleService.MoveNode:output_type -> headscale.v1.MoveNodeResponse
45, // 44: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse
46, // 45: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
47, // 46: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
48, // 47: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
49, // 48: headscale.v1.HeadscaleService.DeleteApiKey:output_type -> headscale.v1.DeleteApiKeyResponse
50, // 49: headscale.v1.HeadscaleService.GetPolicy:output_type -> headscale.v1.GetPolicyResponse
51, // 50: headscale.v1.HeadscaleService.SetPolicy:output_type -> headscale.v1.SetPolicyResponse
1, // 51: headscale.v1.HeadscaleService.Health:output_type -> headscale.v1.HealthResponse
26, // [26:52] is the sub-list for method output_type
0, // [0:26] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name

View File

@@ -227,6 +227,38 @@ func local_request_HeadscaleService_ExpirePreAuthKey_0(ctx context.Context, mars
return msg, metadata, err
}
var filter_HeadscaleService_DeletePreAuthKey_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.DeletePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.DeletePreAuthKey(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_ListPreAuthKeys_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_ListPreAuthKeys_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
@@ -967,6 +999,26 @@ func RegisterHeadscaleServiceHandlerServer(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -1489,6 +1541,23 @@ func RegisterHeadscaleServiceHandlerClient(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -1822,6 +1891,7 @@ var (
pattern_HeadscaleService_ListUsers_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "user"}, ""))
pattern_HeadscaleService_CreatePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_ExpirePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "preauthkey", "expire"}, ""))
pattern_HeadscaleService_DeletePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_ListPreAuthKeys_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_DebugCreateNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "debug", "node"}, ""))
pattern_HeadscaleService_GetNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "node", "node_id"}, ""))
@@ -1850,6 +1920,7 @@ var (
forward_HeadscaleService_ListUsers_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_CreatePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ExpirePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DeletePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListPreAuthKeys_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DebugCreateNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_GetNode_0 = runtime.ForwardResponseMessage

View File

@@ -25,6 +25,7 @@ const (
HeadscaleService_ListUsers_FullMethodName = "/headscale.v1.HeadscaleService/ListUsers"
HeadscaleService_CreatePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/CreatePreAuthKey"
HeadscaleService_ExpirePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/ExpirePreAuthKey"
HeadscaleService_DeletePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/DeletePreAuthKey"
HeadscaleService_ListPreAuthKeys_FullMethodName = "/headscale.v1.HeadscaleService/ListPreAuthKeys"
HeadscaleService_DebugCreateNode_FullMethodName = "/headscale.v1.HeadscaleService/DebugCreateNode"
HeadscaleService_GetNode_FullMethodName = "/headscale.v1.HeadscaleService/GetNode"
@@ -58,6 +59,7 @@ type HeadscaleServiceClient interface {
// --- PreAuthKeys start ---
CreatePreAuthKey(ctx context.Context, in *CreatePreAuthKeyRequest, opts ...grpc.CallOption) (*CreatePreAuthKeyResponse, error)
ExpirePreAuthKey(ctx context.Context, in *ExpirePreAuthKeyRequest, opts ...grpc.CallOption) (*ExpirePreAuthKeyResponse, error)
DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error)
ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error)
// --- Node start ---
DebugCreateNode(ctx context.Context, in *DebugCreateNodeRequest, opts ...grpc.CallOption) (*DebugCreateNodeResponse, error)
@@ -151,6 +153,16 @@ func (c *headscaleServiceClient) ExpirePreAuthKey(ctx context.Context, in *Expir
return out, nil
}
func (c *headscaleServiceClient) DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeletePreAuthKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_DeletePreAuthKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListPreAuthKeysResponse)
@@ -353,6 +365,7 @@ type HeadscaleServiceServer interface {
// --- PreAuthKeys start ---
CreatePreAuthKey(context.Context, *CreatePreAuthKeyRequest) (*CreatePreAuthKeyResponse, error)
ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error)
DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error)
ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error)
// --- Node start ---
DebugCreateNode(context.Context, *DebugCreateNodeRequest) (*DebugCreateNodeResponse, error)
@@ -404,6 +417,9 @@ func (UnimplementedHeadscaleServiceServer) CreatePreAuthKey(context.Context, *Cr
func (UnimplementedHeadscaleServiceServer) ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ExpirePreAuthKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeletePreAuthKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListPreAuthKeys not implemented")
}
@@ -590,6 +606,24 @@ func _HeadscaleService_ExpirePreAuthKey_Handler(srv interface{}, ctx context.Con
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_DeletePreAuthKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeletePreAuthKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_DeletePreAuthKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, req.(*DeletePreAuthKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ListPreAuthKeys_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListPreAuthKeysRequest)
if err := dec(in); err != nil {
@@ -963,6 +997,10 @@ var HeadscaleService_ServiceDesc = grpc.ServiceDesc{
MethodName: "ExpirePreAuthKey",
Handler: _HeadscaleService_ExpirePreAuthKey_Handler,
},
{
MethodName: "DeletePreAuthKey",
Handler: _HeadscaleService_DeletePreAuthKey_Handler,
},
{
MethodName: "ListPreAuthKeys",
Handler: _HeadscaleService_ListPreAuthKeys_Handler,

View File

@@ -338,6 +338,94 @@ func (*ExpirePreAuthKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{4}
}
type DeletePreAuthKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User uint64 `protobuf:"varint,1,opt,name=user,proto3" json:"user,omitempty"`
Key string `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeletePreAuthKeyRequest) Reset() {
*x = DeletePreAuthKeyRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeletePreAuthKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeletePreAuthKeyRequest) ProtoMessage() {}
func (x *DeletePreAuthKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeletePreAuthKeyRequest.ProtoReflect.Descriptor instead.
func (*DeletePreAuthKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{5}
}
func (x *DeletePreAuthKeyRequest) GetUser() uint64 {
if x != nil {
return x.User
}
return 0
}
func (x *DeletePreAuthKeyRequest) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
type DeletePreAuthKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeletePreAuthKeyResponse) Reset() {
*x = DeletePreAuthKeyResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeletePreAuthKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeletePreAuthKeyResponse) ProtoMessage() {}
func (x *DeletePreAuthKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeletePreAuthKeyResponse.ProtoReflect.Descriptor instead.
func (*DeletePreAuthKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{6}
}
type ListPreAuthKeysRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User uint64 `protobuf:"varint,1,opt,name=user,proto3" json:"user,omitempty"`
@@ -347,7 +435,7 @@ type ListPreAuthKeysRequest struct {
func (x *ListPreAuthKeysRequest) Reset() {
*x = ListPreAuthKeysRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -359,7 +447,7 @@ func (x *ListPreAuthKeysRequest) String() string {
func (*ListPreAuthKeysRequest) ProtoMessage() {}
func (x *ListPreAuthKeysRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -372,7 +460,7 @@ func (x *ListPreAuthKeysRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use ListPreAuthKeysRequest.ProtoReflect.Descriptor instead.
func (*ListPreAuthKeysRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{5}
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{7}
}
func (x *ListPreAuthKeysRequest) GetUser() uint64 {
@@ -391,7 +479,7 @@ type ListPreAuthKeysResponse struct {
func (x *ListPreAuthKeysResponse) Reset() {
*x = ListPreAuthKeysResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -403,7 +491,7 @@ func (x *ListPreAuthKeysResponse) String() string {
func (*ListPreAuthKeysResponse) ProtoMessage() {}
func (x *ListPreAuthKeysResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -416,7 +504,7 @@ func (x *ListPreAuthKeysResponse) ProtoReflect() protoreflect.Message {
// Deprecated: Use ListPreAuthKeysResponse.ProtoReflect.Descriptor instead.
func (*ListPreAuthKeysResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{6}
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{8}
}
func (x *ListPreAuthKeysResponse) GetPreAuthKeys() []*PreAuthKey {
@@ -459,7 +547,11 @@ const file_headscale_v1_preauthkey_proto_rawDesc = "" +
"\x17ExpirePreAuthKeyRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\x04R\x04user\x12\x10\n" +
"\x03key\x18\x02 \x01(\tR\x03key\"\x1a\n" +
"\x18ExpirePreAuthKeyResponse\",\n" +
"\x18ExpirePreAuthKeyResponse\"?\n" +
"\x17DeletePreAuthKeyRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\x04R\x04user\x12\x10\n" +
"\x03key\x18\x02 \x01(\tR\x03key\"\x1a\n" +
"\x18DeletePreAuthKeyResponse\",\n" +
"\x16ListPreAuthKeysRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\x04R\x04user\"W\n" +
"\x17ListPreAuthKeysResponse\x12<\n" +
@@ -477,30 +569,32 @@ func file_headscale_v1_preauthkey_proto_rawDescGZIP() []byte {
return file_headscale_v1_preauthkey_proto_rawDescData
}
var file_headscale_v1_preauthkey_proto_msgTypes = make([]protoimpl.MessageInfo, 7)
var file_headscale_v1_preauthkey_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_headscale_v1_preauthkey_proto_goTypes = []any{
(*PreAuthKey)(nil), // 0: headscale.v1.PreAuthKey
(*CreatePreAuthKeyRequest)(nil), // 1: headscale.v1.CreatePreAuthKeyRequest
(*CreatePreAuthKeyResponse)(nil), // 2: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyRequest)(nil), // 3: headscale.v1.ExpirePreAuthKeyRequest
(*ExpirePreAuthKeyResponse)(nil), // 4: headscale.v1.ExpirePreAuthKeyResponse
(*ListPreAuthKeysRequest)(nil), // 5: headscale.v1.ListPreAuthKeysRequest
(*ListPreAuthKeysResponse)(nil), // 6: headscale.v1.ListPreAuthKeysResponse
(*User)(nil), // 7: headscale.v1.User
(*timestamppb.Timestamp)(nil), // 8: google.protobuf.Timestamp
(*DeletePreAuthKeyRequest)(nil), // 5: headscale.v1.DeletePreAuthKeyRequest
(*DeletePreAuthKeyResponse)(nil), // 6: headscale.v1.DeletePreAuthKeyResponse
(*ListPreAuthKeysRequest)(nil), // 7: headscale.v1.ListPreAuthKeysRequest
(*ListPreAuthKeysResponse)(nil), // 8: headscale.v1.ListPreAuthKeysResponse
(*User)(nil), // 9: headscale.v1.User
(*timestamppb.Timestamp)(nil), // 10: google.protobuf.Timestamp
}
var file_headscale_v1_preauthkey_proto_depIdxs = []int32{
7, // 0: headscale.v1.PreAuthKey.user:type_name -> headscale.v1.User
8, // 1: headscale.v1.PreAuthKey.expiration:type_name -> google.protobuf.Timestamp
8, // 2: headscale.v1.PreAuthKey.created_at:type_name -> google.protobuf.Timestamp
8, // 3: headscale.v1.CreatePreAuthKeyRequest.expiration:type_name -> google.protobuf.Timestamp
0, // 4: headscale.v1.CreatePreAuthKeyResponse.pre_auth_key:type_name -> headscale.v1.PreAuthKey
0, // 5: headscale.v1.ListPreAuthKeysResponse.pre_auth_keys:type_name -> headscale.v1.PreAuthKey
6, // [6:6] is the sub-list for method output_type
6, // [6:6] is the sub-list for method input_type
6, // [6:6] is the sub-list for extension type_name
6, // [6:6] is the sub-list for extension extendee
0, // [0:6] is the sub-list for field type_name
9, // 0: headscale.v1.PreAuthKey.user:type_name -> headscale.v1.User
10, // 1: headscale.v1.PreAuthKey.expiration:type_name -> google.protobuf.Timestamp
10, // 2: headscale.v1.PreAuthKey.created_at:type_name -> google.protobuf.Timestamp
10, // 3: headscale.v1.CreatePreAuthKeyRequest.expiration:type_name -> google.protobuf.Timestamp
0, // 4: headscale.v1.CreatePreAuthKeyResponse.pre_auth_key:type_name -> headscale.v1.PreAuthKey
0, // 5: headscale.v1.ListPreAuthKeysResponse.pre_auth_keys:type_name -> headscale.v1.PreAuthKey
6, // [6:6] is the sub-list for method output_type
6, // [6:6] is the sub-list for method input_type
6, // [6:6] is the sub-list for extension type_name
6, // [6:6] is the sub-list for extension extendee
0, // [0:6] is the sub-list for field type_name
}
func init() { file_headscale_v1_preauthkey_proto_init() }
@@ -515,7 +609,7 @@ func file_headscale_v1_preauthkey_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_preauthkey_proto_rawDesc), len(file_headscale_v1_preauthkey_proto_rawDesc)),
NumEnums: 0,
NumMessages: 7,
NumMessages: 9,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -618,6 +618,41 @@
"HeadscaleService"
]
},
"delete": {
"operationId": "HeadscaleService_DeletePreAuthKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1DeletePreAuthKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "user",
"in": "query",
"required": false,
"type": "string",
"format": "uint64"
},
{
"name": "key",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"HeadscaleService"
]
},
"post": {
"summary": "--- PreAuthKeys start ---",
"operationId": "HeadscaleService_CreatePreAuthKey",
@@ -1029,6 +1064,9 @@
"v1DeleteNodeResponse": {
"type": "object"
},
"v1DeletePreAuthKeyResponse": {
"type": "object"
},
"v1DeleteUserResponse": {
"type": "object"
},

go.mod (2 changes)
View File

@@ -182,7 +182,7 @@ require (
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/opencontainers/runc v1.3.3 // indirect
github.com/opencontainers/runc v1.3.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490 // indirect
github.com/pkg/errors v0.9.1 // indirect

go.sum (10 changes)
View File

@@ -124,6 +124,8 @@ github.com/creachadair/command v0.2.0 h1:qTA9cMMhZePAxFoNdnk6F6nn94s1qPndIg9hJbq
github.com/creachadair/command v0.2.0/go.mod h1:j+Ar+uYnFsHpkMeV9kGj6lJ45y9u2xqtg8FYy6cm+0o=
github.com/creachadair/flax v0.0.5 h1:zt+CRuXQASxwQ68e9GHAOnEgAU29nF0zYMHOCrL5wzE=
github.com/creachadair/flax v0.0.5/go.mod h1:F1PML0JZLXSNDMNiRGK2yjm5f+L9QCHchyHBldFymj8=
github.com/creachadair/mds v0.25.2 h1:xc0S0AfDq5GX9KUR5sLvi5XjA61/P6S5e0xFs1vA18Q=
github.com/creachadair/mds v0.25.2/go.mod h1:+s4CFteFRj4eq2KcGHW8Wei3u9NyzSPzNV32EvjyK/Q=
github.com/creachadair/mds v0.25.10 h1:9k9JB35D1xhOCFl0liBhagBBp8fWWkKZrA7UXsfoHtA=
github.com/creachadair/mds v0.25.10/go.mod h1:4hatI3hRM+qhzuAmqPRFvaBM8mONkS7nsLxkcuTYUIs=
github.com/creachadair/taskgroup v0.13.2 h1:3KyqakBuFsm3KkXi/9XIb0QcA8tEzLHLgaoidf0MdVc=
@@ -276,6 +278,8 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfC
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jsimonetti/rtnetlink v1.4.1 h1:JfD4jthWBqZMEffc5RjgmlzpYttAVw1sdnmiNaPO3hE=
github.com/jsimonetti/rtnetlink v1.4.1/go.mod h1:xJjT7t59UIZ62GLZbv6PLLo8VFrostJMPBAheR6OM8w=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/compress v1.18.1 h1:bcSGx7UbpBqMChDtsF28Lw6v/G94LPrrbMbdC3JH2co=
github.com/klauspost/compress v1.18.1/go.mod h1:ZQFFVG+MdnR0P+l6wpXgIL4NTtwiKIdBnrBd8Nrxr+0=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
@@ -350,8 +354,8 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/opencontainers/runc v1.3.3 h1:qlmBbbhu+yY0QM7jqfuat7M1H3/iXjju3VkP9lkFQr4=
github.com/opencontainers/runc v1.3.3/go.mod h1:D7rL72gfWxVs9cJ2/AayxB0Hlvn9g0gaF1R7uunumSI=
github.com/opencontainers/runc v1.3.2 h1:GUwgo0Fx9M/pl2utaSYlJfdBcXAB/CZXDxe322lvJ3Y=
github.com/opencontainers/runc v1.3.2/go.mod h1:F7UQQEsxcjUNnFpT1qPLHZBKYP7yWwk6hq8suLy9cl0=
github.com/orisano/pixelmatch v0.0.0-20220722002657-fb0b55479cde/go.mod h1:nZgzbfBr3hhjoZnS66nKrHmduYNpc34ny7RK4z5/HM0=
github.com/ory/dockertest/v3 v3.12.0 h1:3oV9d0sDzlSQfHtIaB5k6ghUCVMVLpAY8hwrqoCyRCw=
github.com/ory/dockertest/v3 v3.12.0/go.mod h1:aKNDTva3cp8dwOWwb9cWuX84aH5akkxXRvO7KCwWVjE=
@@ -459,6 +463,8 @@ github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc h1:24heQPtnFR+y
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc/go.mod h1:f93CXfllFsO9ZQVq+Zocb1Gp4G5Fz0b0rXHLOzt/Djc=
github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d h1:mnqtPWYyvNiPU9l9tzO2YbHXU/xV664XthZYA26lOiE=
github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d/go.mod h1:9BzmlFc3OLqLzLTF/5AY+BMs+clxMqyhSGzgXIm8mNI=
github.com/tailscale/squibble v0.0.0-20250108170732-a4ca58afa694 h1:95eIP97c88cqAFU/8nURjgI9xxPbD+Ci6mY/a79BI/w=
github.com/tailscale/squibble v0.0.0-20250108170732-a4ca58afa694/go.mod h1:veguaG8tVg1H/JG5RfpoUW41I+O8ClPElo/fTYr8mMk=
github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993 h1:FyiiAvDAxpB0DrW2GW3KOVfi3YFOtsQUEeFWbf55JJU=
github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4=
github.com/tailscale/tailsql v0.0.0-20250421235516-02f85f087b97 h1:JJkDnrAhHvOCttk8z9xeZzcDlzzkRA7+Duxj9cwOyxk=

View File

@@ -11,7 +11,6 @@ import (
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
@@ -71,6 +70,13 @@ func (h *Headscale) handleRegister(
// We do not look up nodes by [key.MachinePublic] as it might belong to multiple
// nodes, separated by users, and this path handles the expiry/logout flow.
if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
// When tailscaled restarts, it sends RegisterRequest with Auth=nil and Expiry=zero.
// Return the current node state without modification.
// See: https://github.com/juanfont/headscale/issues/2862
if req.Expiry.IsZero() && node.Expiry().Valid() && !node.IsExpired() {
return nodeToRegisterResponse(node), nil
}
resp, err := h.handleLogout(node, req, machineKey)
if err != nil {
return nil, fmt.Errorf("handling existing node: %w", err)
@@ -173,6 +179,7 @@ func (h *Headscale) handleLogout(
}
// If the request expiry is in the past, we consider it a logout.
// Zero expiry is handled in handleRegister() before calling this function.
if req.Expiry.Before(time.Now()) {
log.Debug().
Uint64("node.id", node.ID().Uint64()).
@@ -356,16 +363,13 @@ func (h *Headscale) handleRegisterWithAuthKey(
// eventbus.
// TODO(kradalby): This needs to be run as part of the batcher maybe?
// now since we don't update the node/policy here anymore
routeChange := h.state.AutoApproveRoutes(node)
if _, _, err := h.state.SaveNode(node); err != nil {
return nil, fmt.Errorf("saving auto approved routes to node: %w", err)
routesChange, err := h.state.AutoApproveRoutes(node)
if err != nil {
return nil, fmt.Errorf("auto approving routes: %w", err)
}
if routeChange && changed.Empty() {
changed = change.NodeAdded(node.ID())
}
h.Change(changed)
// Send both changes. Empty changes are ignored by Change().
h.Change(changed, routesChange)
// TODO(kradalby): I think this is covered above, but we need to validate that.
// // If policy changed due to node registration, send a separate policy change

View File
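The early return added above hinges on one observable difference between a tailscaled restart and a logout: a restart sends Auth=nil with a zero Expiry, while a logout sends an Expiry in the past. A minimal sketch of that dispatch rule, with illustrative names rather than the actual headscale helpers:

package sketch

import "time"

// isRestartRegister reports whether an existing node's RegisterRequest
// looks like a tailscaled restart (zero Expiry, node still valid) rather
// than a logout (Expiry in the past). It mirrors the early return in
// handleRegister above; time.Time{}.Before(time.Now()) is true, which is
// why a zero Expiry used to be misread as a logout.
func isRestartRegister(reqExpiry time.Time, nodeExpiryValid, nodeExpired bool) bool {
	return reqExpiry.IsZero() && nodeExpiryValid && !nodeExpired
}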

@@ -3004,3 +3004,385 @@ func createTestApp(t *testing.T) *Headscale {
return app
}
// TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey tests the scenario reported in
// https://github.com/juanfont/headscale/issues/2830
//
// Scenario:
// 1. Node registers successfully with a single-use pre-auth key
// 2. Node is running fine
// 3. Node restarts (e.g., after headscale upgrade or tailscale container restart)
// 4. Node sends RegisterRequest with the same pre-auth key
// 5. BUG: Headscale rejects the request with "authkey expired" or "authkey already used"
//
// Expected behavior:
// When an existing node (identified by matching NodeKey + MachineKey) re-registers
// with a pre-auth key that it previously used, the registration should succeed.
// The node is not creating a new registration - it's re-authenticating the same device.
func TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create user and single-use pre-auth key
user := app.state.CreateUserForTest("test-user")
pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) // reusable=false
require.NoError(t, err)
require.False(t, pak.Reusable, "key should be single-use for this test")
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// STEP 1: Initial registration with pre-auth key (simulates fresh node joining)
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
t.Log("Step 1: Initial registration with pre-auth key")
initialResp, err := app.handleRegister(context.Background(), initialReq, machineKey.Public())
require.NoError(t, err, "initial registration should succeed")
require.NotNil(t, initialResp)
assert.True(t, initialResp.MachineAuthorized, "node should be authorized")
assert.False(t, initialResp.NodeKeyExpired, "node key should not be expired")
// Verify node was created in database
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found, "node should exist after initial registration")
assert.Equal(t, "test-node", node.Hostname())
assert.Equal(t, nodeKey.Public(), node.NodeKey())
assert.Equal(t, machineKey.Public(), node.MachineKey())
// Verify pre-auth key is now marked as used
usedPak, err := app.state.GetPreAuthKey(pak.Key)
require.NoError(t, err)
assert.True(t, usedPak.Used, "pre-auth key should be marked as used after initial registration")
// STEP 2: Simulate node restart - node sends RegisterRequest again with same pre-auth key
// This happens when:
// - Tailscale container restarts
// - Tailscaled service restarts
// - System reboots
// The Tailscale client persists the pre-auth key in its state and sends it on every registration
t.Log("Step 2: Node restart - re-registration with same (now used) pre-auth key")
restartReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key, // Same key, now marked as Used=true
},
NodeKey: nodeKey.Public(), // Same node key
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
// BUG: This fails with "authkey already used" or "authkey expired"
// EXPECTED: Should succeed because it's the same node re-registering
restartResp, err := app.handleRegister(context.Background(), restartReq, machineKey.Public())
// This is the assertion that currently FAILS in v0.27.0
assert.NoError(t, err, "BUG: existing node re-registration with its own used pre-auth key should succeed")
if err != nil {
t.Logf("Error received (this is the bug): %v", err)
t.Logf("Expected behavior: Node should be able to re-register with the same pre-auth key it used initially")
return // Stop here to show the bug clearly
}
require.NotNil(t, restartResp)
assert.True(t, restartResp.MachineAuthorized, "node should remain authorized after restart")
assert.False(t, restartResp.NodeKeyExpired, "node key should not be expired after restart")
// Verify it's the same node (not a duplicate)
nodeAfterRestart, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found, "node should still exist after restart")
assert.Equal(t, node.ID(), nodeAfterRestart.ID(), "should be the same node, not a new one")
assert.Equal(t, "test-node", nodeAfterRestart.Hostname())
}
// TestNodeReregistrationWithReusablePreAuthKey tests that reusable keys work correctly
// for node re-registration.
func TestNodeReregistrationWithReusablePreAuthKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
user := app.state.CreateUserForTest("test-user")
pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) // reusable=true
require.NoError(t, err)
require.True(t, pak.Reusable)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Initial registration
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reusable-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
initialResp, err := app.handleRegister(context.Background(), initialReq, machineKey.Public())
require.NoError(t, err)
require.NotNil(t, initialResp)
assert.True(t, initialResp.MachineAuthorized)
// Node restart - re-registration with reusable key
restartReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key, // Reusable key
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reusable-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
restartResp, err := app.handleRegister(context.Background(), restartReq, machineKey.Public())
require.NoError(t, err, "reusable key should allow re-registration")
require.NotNil(t, restartResp)
assert.True(t, restartResp.MachineAuthorized)
assert.False(t, restartResp.NodeKeyExpired)
}
// TestNodeReregistrationWithExpiredPreAuthKey tests that truly expired keys
// are still rejected even for existing nodes.
func TestNodeReregistrationWithExpiredPreAuthKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
user := app.state.CreateUserForTest("test-user")
expiry := time.Now().Add(-1 * time.Hour) // Already expired
pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, &expiry, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Try to register with expired key
req := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "expired-key-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegister(context.Background(), req, machineKey.Public())
assert.Error(t, err, "expired pre-auth key should be rejected")
assert.Contains(t, err.Error(), "authkey expired", "error should mention key expiration")
}
// TestIssue2830_ExistingNodeReregistersWithExpiredKey tests the fix for issue #2830.
// When a node is already registered and the pre-auth key expires, the node should
// still be able to re-register (e.g., after a container restart) using the same
// expired key. The key was only needed for initial authentication.
func TestIssue2830_ExistingNodeReregistersWithExpiredKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
user := app.state.CreateUserForTest("test-user")
// Create a valid key (will expire it later)
expiry := time.Now().Add(1 * time.Hour)
pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, false, &expiry, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Register the node initially (key is still valid)
req := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue2830-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegister(context.Background(), req, machineKey.Public())
require.NoError(t, err, "initial registration should succeed")
require.NotNil(t, resp)
require.True(t, resp.MachineAuthorized, "node should be authorized after initial registration")
// Verify node was created
allNodes := app.state.ListNodes()
require.Equal(t, 1, allNodes.Len())
initialNodeID := allNodes.At(0).ID()
// Now expire the key by updating it in the database to have an expiry in the past.
// This simulates the real-world scenario where a key expires after initial registration.
pastExpiry := time.Now().Add(-1 * time.Hour)
err = app.state.DB().DB.Model(&types.PreAuthKey{}).
Where("id = ?", pak.ID).
Update("expiration", pastExpiry).Error
require.NoError(t, err, "should be able to update key expiration")
// Reload the key to verify it's now expired
expiredPak, err := app.state.GetPreAuthKey(pak.Key)
require.NoError(t, err)
require.NotNil(t, expiredPak.Expiration)
require.True(t, expiredPak.Expiration.Before(time.Now()), "key should be expired")
// Verify the expired key would fail validation
err = expiredPak.Validate()
require.Error(t, err, "key should fail validation when expired")
require.Contains(t, err.Error(), "authkey expired")
// Attempt to re-register with the SAME key (now expired).
// This should SUCCEED because:
// - The node already exists with the same MachineKey and User
// - The fix allows existing nodes to re-register even with expired keys
// - The key was only needed for initial authentication
req2 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key, // Same key as initial registration (now expired)
},
NodeKey: nodeKey.Public(), // Same NodeKey as initial registration
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue2830-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp2, err := app.handleRegister(context.Background(), req2, machineKey.Public())
assert.NoError(t, err, "re-registration should succeed even with expired key for existing node")
assert.NotNil(t, resp2)
assert.True(t, resp2.MachineAuthorized, "node should remain authorized after re-registration")
// Verify we still have only one node (re-registered, not created new)
allNodes = app.state.ListNodes()
require.Equal(t, 1, allNodes.Len(), "should have exactly one node (re-registered)")
assert.Equal(t, initialNodeID, allNodes.At(0).ID(), "node ID should not change on re-registration")
}
// TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey tests that an existing node
// can re-register using a pre-auth key that's already marked as Used=true, as long as:
// 1. The node is re-registering with the same MachineKey it originally used
// 2. The node is using the same pre-auth key it was originally registered with (AuthKeyID matches)
//
// This is the fix for GitHub issue #2830: https://github.com/juanfont/headscale/issues/2830
//
// Background: When Docker/Kubernetes containers restart, they keep their persistent state
// (including the MachineKey), but container entrypoints unconditionally run:
//
// tailscale up --authkey=$TS_AUTHKEY
//
// This caused nodes to be rejected after restart because the pre-auth key was already
// marked as Used=true from the initial registration. The fix allows re-registration of
// existing nodes with their own used keys.
func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing.T) {
app := createTestApp(t)
// Create a user
user := app.state.CreateUserForTest("testuser")
// Create a SINGLE-USE pre-auth key (reusable=false)
// This is the type of key that triggers the bug in issue #2830
preAuthKey, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
require.NoError(t, err)
require.False(t, preAuthKey.Reusable, "Pre-auth key must be single-use to test issue #2830")
require.False(t, preAuthKey.Used, "Pre-auth key should not be used yet")
// Generate node keys for the client
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Step 1: Initial registration with the pre-auth key
// This simulates the first time the container starts and runs 'tailscale up --authkey=...'
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: preAuthKey.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue-2830-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
initialResp, err := app.handleRegisterWithAuthKey(initialReq, machineKey.Public())
require.NoError(t, err, "Initial registration should succeed")
require.True(t, initialResp.MachineAuthorized, "Node should be authorized after initial registration")
require.NotNil(t, initialResp.User, "User should be set in response")
require.Equal(t, "testuser", initialResp.User.DisplayName, "User should match the pre-auth key's user")
// Verify the pre-auth key is now marked as Used
updatedKey, err := app.state.GetPreAuthKey(preAuthKey.Key)
require.NoError(t, err)
require.True(t, updatedKey.Used, "Pre-auth key should be marked as Used after initial registration")
// Step 2: Container restart scenario
// The container keeps its MachineKey (persistent state), but the entrypoint script
// unconditionally runs 'tailscale up --authkey=$TS_AUTHKEY' again
//
// WITHOUT THE FIX: This would fail with "authkey already used" error
// WITH THE FIX: This succeeds because it's the same node re-registering with its own key
// Simulate sending the same RegisterRequest again (same MachineKey, same AuthKey)
// This is exactly what happens when a container restarts
reregisterReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: preAuthKey.Key, // Same key, now marked as Used=true
},
NodeKey: nodeKey.Public(), // Same NodeKey
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue-2830-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
reregisterResp, err := app.handleRegisterWithAuthKey(reregisterReq, machineKey.Public()) // Same MachineKey
require.NoError(t, err, "Re-registration with same MachineKey and used pre-auth key should succeed (fixes #2830)")
require.True(t, reregisterResp.MachineAuthorized, "Node should remain authorized after re-registration")
require.NotNil(t, reregisterResp.User, "User should be set in re-registration response")
require.Equal(t, "testuser", reregisterResp.User.DisplayName, "User should remain the same")
// Verify that only ONE node was created (not a duplicate)
nodes := app.state.ListNodesByUser(types.UserID(user.ID))
require.Equal(t, 1, nodes.Len(), "Should have exactly one node (no duplicates created)")
require.Equal(t, "issue-2830-test-node", nodes.At(0).Hostname(), "Node hostname should match")
// Step 3: Verify that a DIFFERENT machine cannot use the same used key
// This ensures we didn't break the security model - only the original node can re-register
differentMachineKey := key.NewMachine()
differentNodeKey := key.NewNode()
attackReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: preAuthKey.Key, // Try to use the same key
},
NodeKey: differentNodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "attacker-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(attackReq, differentMachineKey.Public())
require.Error(t, err, "Different machine should NOT be able to use the same used pre-auth key")
require.Contains(t, err.Error(), "already used", "Error should indicate key is already used")
// Verify still only one node (the original one)
nodesAfterAttack := app.state.ListNodesByUser(types.UserID(user.ID))
require.Equal(t, 1, nodesAfterAttack.Len(), "Should still have exactly one node (attack prevented)")
}

View File

@@ -952,6 +952,41 @@ AND auth_key_id NOT IN (
return nil
},
},
{
// Drop all indices that are no longer in use but may still exist.
// They are potentially still present from broken migrations in the past.
// They should all have been cleaned up by the db engine, but we are a bit
// conservative to ensure all our previous mess is cleaned up.
ID: "202511101554-drop-old-idx",
Migrate: func(tx *gorm.DB) error {
for _, oldIdx := range []struct{ name, table string }{
{"idx_namespaces_deleted_at", "namespaces"},
{"idx_routes_deleted_at", "routes"},
{"idx_shared_machines_deleted_at", "shared_machines"},
} {
err := tx.Migrator().DropIndex(oldIdx.table, oldIdx.name)
if err != nil {
log.Trace().
Str("index", oldIdx.name).
Str("table", oldIdx.table).
Err(err).
Msg("Error dropping old index, continuing...")
}
}
return nil
},
Rollback: func(tx *gorm.DB) error {
return nil
},
},
// Migrations **above** this point will be REMOVED in version **0.29.0**
// This is to clean up a lot of old migrations that are seldom used
// and carry a lot of technical debt.
// Any new migrations should be added after the comment below and follow
// the rules it sets out.
// From this point, the following rules must be followed:
// - NEVER use gorm.AutoMigrate, write the exact migration steps needed
// - AutoMigrate depends on the struct staying exactly the same, which it won't over time.

View File

@@ -325,7 +325,11 @@ func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
}
if changed {
err := tx.Save(node).Error
// Use Updates() with Select() to only update IP fields, avoiding overwriting
// other fields like Expiry. We need Select() because Updates() alone skips
// zero values, but we DO want to update IPv4/IPv6 to nil when removing them.
// See issue #2862.
err := tx.Model(node).Select("ipv4", "ipv6").Updates(node).Error
if err != nil {
return fmt.Errorf("saving node(%d) after adding IPs: %w", node.ID, err)
}

View File
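Several of the hunks in this series turn on the same GORM distinction, so here is a hedged sketch of the four write styles used across these fixes (the model is illustrative, not the actual headscale types):

package sketch

import (
	"time"

	"gorm.io/gorm"
)

type node struct {
	ID     uint64
	IPv4   *string
	IPv6   *string
	Expiry *time.Time
	Used   bool
}

func writeStyles(tx *gorm.DB, n *node) {
	// Save writes every column, zero values included: a nil Expiry in the
	// struct overwrites a stored expiry with NULL (the #2862 regression).
	tx.Save(n)

	// Updates skips zero-value fields, so a nil Expiry leaves the stored
	// value alone...
	tx.Model(n).Updates(n)

	// ...which also means Updates alone can never set a column to NULL.
	// Select forces the listed columns to be written even when zero, which
	// BackfillNodeIPs needs in order to remove IPs.
	tx.Model(n).Select("ipv4", "ipv6").Updates(n)

	// For a single column, Update writes the value unconditionally, as
	// UsePreAuthKey now does for "used".
	tx.Model(n).Update("used", true)
}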

@@ -452,13 +452,6 @@ func NodeSetMachineKey(
}).Error
}
// NodeSave saves a node object to the database, prefer to use a specific save method rather
// than this. It is intended to be used when we are changing or.
// TODO(kradalby): Remove this func, just use Save.
func NodeSave(tx *gorm.DB, node *types.Node) error {
return tx.Save(node).Error
}
func generateGivenName(suppliedName string, randomSuffix bool) (string, error) {
// Strip invalid DNS characters for givenName
suppliedName = strings.ToLower(suppliedName)

View File

@@ -126,9 +126,18 @@ func GetPreAuthKey(tx *gorm.DB, key string) (*types.PreAuthKey, error) {
}
// DestroyPreAuthKey destroys a preauthkey. Returns error if the PreAuthKey
// does not exist.
// does not exist. This also clears the auth_key_id on any nodes that reference
// this key.
func DestroyPreAuthKey(tx *gorm.DB, pak types.PreAuthKey) error {
return tx.Transaction(func(db *gorm.DB) error {
// First, clear the foreign key reference on any nodes using this key
if err := db.Model(&types.Node{}).
Where("auth_key_id = ?", pak.ID).
Update("auth_key_id", nil).Error; err != nil {
return fmt.Errorf("failed to clear auth_key_id on nodes: %w", err)
}
// Then delete the pre-auth key
if result := db.Unscoped().Delete(pak); result.Error != nil {
return result.Error
}
@@ -143,13 +152,20 @@ func (hsdb *HSDatabase) ExpirePreAuthKey(k *types.PreAuthKey) error {
})
}
func (hsdb *HSDatabase) DeletePreAuthKey(k *types.PreAuthKey) error {
return hsdb.Write(func(tx *gorm.DB) error {
return DestroyPreAuthKey(tx, *k)
})
}
// UsePreAuthKey marks a PreAuthKey as used.
func UsePreAuthKey(tx *gorm.DB, k *types.PreAuthKey) error {
k.Used = true
if err := tx.Save(k).Error; err != nil {
err := tx.Model(k).Update("used", true).Error
if err != nil {
return fmt.Errorf("failed to update key used status in the database: %w", err)
}
k.Used = true
return nil
}

View File

@@ -31,10 +31,15 @@ CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identif
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
-- Create all the old tables we have had and ensure they are cleaned up.
CREATE TABLE `namespaces` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `namespaces` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE TABLE `machines` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `kvs` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `shared_machines` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `shared_machines` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE TABLE `pre_auth_key_acl_tags` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `routes` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `routes` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`);
CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`);
CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`);
COMMIT;

View File

@@ -0,0 +1,134 @@
package db
import (
"database/sql"
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
)
// TestUserUpdatePreservesUnchangedFields verifies that updating a user
// preserves fields that aren't modified. This test validates the fix
// for using Updates() instead of Save() in UpdateUser-like operations.
func TestUserUpdatePreservesUnchangedFields(t *testing.T) {
database := dbForTest(t)
// Create a user with all fields set
initialUser := types.User{
Name: "testuser",
DisplayName: "Test User Display",
Email: "test@example.com",
ProviderIdentifier: sql.NullString{
String: "provider-123",
Valid: true,
},
}
createdUser, err := database.CreateUser(initialUser)
require.NoError(t, err)
require.NotNil(t, createdUser)
// Verify initial state
assert.Equal(t, "testuser", createdUser.Name)
assert.Equal(t, "Test User Display", createdUser.DisplayName)
assert.Equal(t, "test@example.com", createdUser.Email)
assert.True(t, createdUser.ProviderIdentifier.Valid)
assert.Equal(t, "provider-123", createdUser.ProviderIdentifier.String)
// Simulate what UpdateUser does: load user, modify one field, save
_, err = Write(database.DB, func(tx *gorm.DB) (*types.User, error) {
user, err := GetUserByID(tx, types.UserID(createdUser.ID))
if err != nil {
return nil, err
}
// Modify ONLY DisplayName
user.DisplayName = "Updated Display Name"
// This is the line being tested: it currently uses Save(), which writes
// ALL fields and can overwrite unchanged ones.
err = tx.Save(user).Error
if err != nil {
return nil, err
}
return user, nil
})
require.NoError(t, err)
// Read user back from database
updatedUser, err := Read(database.DB, func(rx *gorm.DB) (*types.User, error) {
return GetUserByID(rx, types.UserID(createdUser.ID))
})
require.NoError(t, err)
// Verify that DisplayName was updated
assert.Equal(t, "Updated Display Name", updatedUser.DisplayName)
// CRITICAL: Verify that other fields were NOT overwritten
// With Save(), these assertions should pass because the user object
// was loaded from DB and has all fields populated.
// But if Updates() is used, these will also pass (and it's safer).
assert.Equal(t, "testuser", updatedUser.Name, "Name should be preserved")
assert.Equal(t, "test@example.com", updatedUser.Email, "Email should be preserved")
assert.True(t, updatedUser.ProviderIdentifier.Valid, "ProviderIdentifier should be preserved")
assert.Equal(t, "provider-123", updatedUser.ProviderIdentifier.String, "ProviderIdentifier value should be preserved")
}
// TestUserUpdateWithUpdatesMethod tests that using Updates() instead of Save()
// works correctly and only updates modified fields.
func TestUserUpdateWithUpdatesMethod(t *testing.T) {
database := dbForTest(t)
// Create a user
initialUser := types.User{
Name: "testuser",
DisplayName: "Original Display",
Email: "original@example.com",
ProviderIdentifier: sql.NullString{
String: "provider-abc",
Valid: true,
},
}
createdUser, err := database.CreateUser(initialUser)
require.NoError(t, err)
// Update using Updates() method
_, err = Write(database.DB, func(tx *gorm.DB) (*types.User, error) {
user, err := GetUserByID(tx, types.UserID(createdUser.ID))
if err != nil {
return nil, err
}
// Modify multiple fields
user.DisplayName = "New Display"
user.Email = "new@example.com"
// Use Updates() instead of Save()
err = tx.Updates(user).Error
if err != nil {
return nil, err
}
return user, nil
})
require.NoError(t, err)
// Verify changes
updatedUser, err := Read(database.DB, func(rx *gorm.DB) (*types.User, error) {
return GetUserByID(rx, types.UserID(createdUser.ID))
})
require.NoError(t, err)
// Verify updated fields
assert.Equal(t, "New Display", updatedUser.DisplayName)
assert.Equal(t, "new@example.com", updatedUser.Email)
// Verify preserved fields
assert.Equal(t, "testuser", updatedUser.Name)
assert.True(t, updatedUser.ProviderIdentifier.Valid)
assert.Equal(t, "provider-abc", updatedUser.ProviderIdentifier.String)
}

View File

@@ -102,7 +102,8 @@ func RenameUser(tx *gorm.DB, uid types.UserID, newName string) error {
oldUser.Name = newName
if err := tx.Save(&oldUser).Error; err != nil {
err = tx.Updates(&oldUser).Error
if err != nil {
return err
}

View File

@@ -206,6 +206,27 @@ func (api headscaleV1APIServer) ExpirePreAuthKey(
return &v1.ExpirePreAuthKeyResponse{}, nil
}
func (api headscaleV1APIServer) DeletePreAuthKey(
ctx context.Context,
request *v1.DeletePreAuthKeyRequest,
) (*v1.DeletePreAuthKeyResponse, error) {
preAuthKey, err := api.h.state.GetPreAuthKey(request.Key)
if err != nil {
return nil, err
}
if uint64(preAuthKey.User.ID) != request.GetUser() {
return nil, fmt.Errorf("preauth key does not belong to user")
}
err = api.h.state.DeletePreAuthKey(preAuthKey)
if err != nil {
return nil, err
}
return &v1.DeletePreAuthKeyResponse{}, nil
}
func (api headscaleV1APIServer) ListPreAuthKeys(
ctx context.Context,
request *v1.ListPreAuthKeysRequest,
@@ -273,13 +294,13 @@ func (api headscaleV1APIServer) RegisterNode(
// ensure we send an update.
// This works, but might be another good candidate for doing some sort of
// eventbus.
_ = api.h.state.AutoApproveRoutes(node)
_, _, err = api.h.state.SaveNode(node)
routeChange, err := api.h.state.AutoApproveRoutes(node)
if err != nil {
return nil, fmt.Errorf("saving auto approved routes to node: %w", err)
return nil, fmt.Errorf("auto approving routes: %w", err)
}
api.h.Change(nodeChange)
// Send both changes. Empty changes are ignored by Change().
api.h.Change(nodeChange, routeChange)
return &v1.RegisterNodeResponse{Node: node.Proto()}, nil
}

View File
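For completeness, a hedged sketch of calling the new DeletePreAuthKey RPC from a generated client. The request fields follow the handler above (request.GetUser(), request.Key); the client interface name and surrounding setup are assumptions:

package sketch

import (
	"context"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
)

// deleteKey removes a pre-auth key; the server rejects the call unless
// the key belongs to the given user.
func deleteKey(ctx context.Context, client v1.HeadscaleServiceClient, userID uint64, key string) error {
	_, err := client.DeletePreAuthKey(ctx, &v1.DeletePreAuthKeyRequest{
		User: userID,
		Key:  key,
	})
	return err
}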

@@ -42,10 +42,6 @@ var (
errOIDCAllowedUsers = errors.New(
"authenticated principal does not match any allowed user",
)
errOIDCInvalidNodeState = errors.New(
"requested node state key expired before authorisation completed",
)
errOIDCNodeKeyMissing = errors.New("could not get node key from cache")
)
// RegistrationInfo contains both machine key and verifier information for OIDC validation.
@@ -108,16 +104,8 @@ func (a *AuthProviderOIDC) AuthURL(registrationID types.RegistrationID) string {
registrationID.String())
}
func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time {
if a.cfg.UseExpiryFromToken {
return idTokenExpiration
}
return time.Now().Add(a.cfg.Expiry)
}
// RegisterOIDC redirects to the OIDC provider for authentication
// Puts NodeKey in cache so the callback can retrieve it using the oidc state param
// RegisterHandler registers the OIDC callback handler with the given router.
// It puts NodeKey in cache so the callback can retrieve it using the oidc state param.
// Listens on /register/:registration_id.
func (a *AuthProviderOIDC) RegisterHandler(
writer http.ResponseWriter,
@@ -213,7 +201,8 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
return
}
cookieState, err := req.Cookie("state")
stateCookieName := getCookieName("state", state)
cookieState, err := req.Cookie(stateCookieName)
if err != nil {
httpError(writer, NewHTTPError(http.StatusBadRequest, "state not found", err))
return
@@ -235,8 +224,13 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
httpError(writer, err)
return
}
if idToken.Nonce == "" {
httpError(writer, NewHTTPError(http.StatusBadRequest, "nonce not found in IDToken", err))
return
}
nonce, err := req.Cookie("nonce")
nonceCookieName := getCookieName("nonce", idToken.Nonce)
nonce, err := req.Cookie(nonceCookieName)
if err != nil {
httpError(writer, NewHTTPError(http.StatusBadRequest, "nonce not found", err))
return
@@ -298,7 +292,7 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
return
}
user, c, err := a.createOrUpdateUserFromClaim(&claims)
user, _, err := a.createOrUpdateUserFromClaim(&claims)
if err != nil {
log.Error().
Err(err).
@@ -317,9 +311,6 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
return
}
// Send policy update notifications if needed
a.h.Change(c)
// TODO(kradalby): Is this comment right?
// If the node exists, then the node should be reauthenticated,
// if the node does not exist, and the machine key exists, then
@@ -366,6 +357,14 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", nil))
}
func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time {
if a.cfg.UseExpiryFromToken {
return idTokenExpiration
}
return time.Now().Add(a.cfg.Expiry)
}
func extractCodeAndStateParamFromRequest(
req *http.Request,
) (string, string, error) {
@@ -498,8 +497,8 @@ func (a *AuthProviderOIDC) createOrUpdateUserFromClaim(
}
// if the user is still not found, create a new empty user.
// TODO(kradalby): This might cause us to not have an ID below which
// is a problem.
// TODO(kradalby): This context is not inherited from the request, which is probably not ideal.
// However, we need a context to use the OIDC provider.
if user == nil {
newUser = true
user = &types.User{}
@@ -551,18 +550,13 @@ func (a *AuthProviderOIDC) handleRegistration(
// ensure we send an update.
// This works, but might be another good candidate for doing some sort of
// eventbus.
_ = a.h.state.AutoApproveRoutes(node)
_, policyChange, err := a.h.state.SaveNode(node)
routesChange, err := a.h.state.AutoApproveRoutes(node)
if err != nil {
return false, fmt.Errorf("saving auto approved routes to node: %w", err)
return false, fmt.Errorf("auto approving routes: %w", err)
}
// Policy updates are full and take precedence over node changes.
if !policyChange.Empty() {
a.h.Change(policyChange)
} else {
a.h.Change(nodeChange)
}
// Send both changes. Empty changes are ignored by Change().
a.h.Change(nodeChange, routesChange)
return !nodeChange.Empty(), nil
}
@@ -584,6 +578,11 @@ func renderOIDCCallbackTemplate(
return &content, nil
}
// getCookieName generates a unique cookie name based on a cookie value.
func getCookieName(baseName, value string) string {
return fmt.Sprintf("%s_%s", baseName, value[:6])
}
func setCSRFCookie(w http.ResponseWriter, r *http.Request, name string) (string, error) {
val, err := util.GenerateRandomStringURLSafe(64)
if err != nil {
@@ -592,7 +591,7 @@ func setCSRFCookie(w http.ResponseWriter, r *http.Request, name string) (string,
c := &http.Cookie{
Path: "/oidc/callback",
Name: name,
Name: getCookieName(name, val),
Value: val,
MaxAge: int(time.Hour.Seconds()),
Secure: r.TLS != nil,

View File
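The cookie rename above is what allows several OIDC logins to be in flight in one browser without clobbering each other's state and nonce cookies. A small illustration of the naming scheme; the [:6] slice assumes values of at least six characters, which holds for the 64-character random values from setCSRFCookie:

package main

import "fmt"

func main() {
	// Each login flow gets its own cookie, named after the value it stores.
	state := "k3j9abQ7ZmVyYXNk" // illustrative random state value
	fmt.Printf("%s_%s\n", "state", state[:6]) // prints: state_k3j9ab
}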

@@ -1353,6 +1353,55 @@ func TestSSHPolicyRules(t *testing.T) {
},
}},
},
{
name: "2863-allow-predefined-missing-users",
targetNode: taggedClient,
peers: types.Nodes{&nodeUser2},
policy: `{
"groups": {
"group:example-infra": [
"user2@",
"not-created-yet@",
],
},
"tagOwners": {
"tag:client": [
"user2@"
],
},
"ssh": [
// Allow infra to ssh to tag:example-infra server as debian
{
"action": "accept",
"src": [
"group:example-infra"
],
"dst": [
"tag:client",
],
"users": [
"debian",
],
},
],
}`,
wantSSH: &tailcfg.SSHPolicy{Rules: []*tailcfg.SSHRule{
{
Principals: []*tailcfg.SSHPrincipal{
{NodeIP: "100.64.0.2"},
},
SSHUsers: map[string]string{
"debian": "debian",
},
Action: &tailcfg.SSHAction{
Accept: true,
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
AllowRemotePortForwarding: true,
},
},
}},
},
}
for _, tt := range tests {

View File

@@ -316,7 +316,6 @@ func (pol *Policy) compileSSHPolicy(
srcIPs, err := rule.Sources.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Caller().Err(err).Msgf("SSH policy compilation failed resolving source ips for rule %+v", rule)
continue // Skip this rule if we can't resolve sources
}
if srcIPs == nil || len(srcIPs.Prefixes()) == 0 {

View File
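The one-line removal above changes the failure mode from "drop the whole SSH rule" to "keep the rule for whatever resolved". A standalone sketch of that behavior under an assumed resolver:

package sketch

// resolveSources keeps every source it can resolve instead of abandoning
// the rule on the first unknown one (e.g. a user that does not exist yet,
// as in the "2863-allow-predefined-missing-users" test case).
func resolveSources(srcs []string, known map[string]string) []string {
	var ips []string
	for _, src := range srcs {
		ip, ok := known[src]
		if !ok {
			continue // skip just this source, not the whole rule
		}
		ips = append(ips, ip)
	}
	return ips
}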

@@ -47,6 +47,14 @@ type PolicyManager struct {
usesAutogroupSelf bool
}
// filterAndPolicy combines the compiled filter rules with policy content for hashing.
// This ensures filterHash changes when policy changes, even for autogroup:self where
// the compiled filter is always empty.
type filterAndPolicy struct {
Filter []tailcfg.FilterRule
Policy *Policy
}
// NewPolicyManager creates a new PolicyManager from a policy file and a list of users and nodes.
// It returns an error if the policy file is invalid.
// The policy manager will update the filter rules based on the users and nodes.
@@ -77,14 +85,6 @@ func NewPolicyManager(b []byte, users []types.User, nodes views.Slice[types.Node
// updateLocked updates the filter rules based on the current policy and nodes.
// It must be called with the lock held.
func (pm *PolicyManager) updateLocked() (bool, error) {
// Clear the SSH policy map to ensure it's recalculated with the new policy.
// TODO(kradalby): This could potentially be optimized by only clearing the
// policies for nodes that have changed. Particularly if the only difference is
// that nodes has been added or removed.
clear(pm.sshPolicyMap)
clear(pm.compiledFilterRulesMap)
clear(pm.filterRulesMap)
// Check if policy uses autogroup:self
pm.usesAutogroupSelf = pm.pol.usesAutogroupSelf()
@@ -98,7 +98,14 @@ func (pm *PolicyManager) updateLocked() (bool, error) {
return false, fmt.Errorf("compiling filter rules: %w", err)
}
filterHash := deephash.Hash(&filter)
// Hash both the compiled filter AND the policy content together.
// This ensures filterHash changes when policy changes, even for autogroup:self
// where the compiled filter is always empty. This eliminates the need for
// a separate policyHash field.
filterHash := deephash.Hash(&filterAndPolicy{
Filter: filter,
Policy: pm.pol,
})
filterChanged := filterHash != pm.filterHash
if filterChanged {
log.Debug().
@@ -164,8 +171,27 @@ func (pm *PolicyManager) updateLocked() (bool, error) {
pm.exitSet = exitSet
pm.exitSetHash = exitSetHash
// If neither of the calculated values changed, no need to update nodes
if !filterChanged && !tagOwnerChanged && !autoApproveChanged && !exitSetChanged {
// Determine if we need to send updates to nodes
// filterChanged now includes policy content changes (via combined hash),
// so it will detect changes even for autogroup:self where compiled filter is empty
needsUpdate := filterChanged || tagOwnerChanged || autoApproveChanged || exitSetChanged
// Only clear caches if we're actually going to send updates
// This prevents clearing caches when nothing changed, which would leave nodes
// with stale filters until they reconnect. This is critical for autogroup:self
// where even reloading the same policy would clear caches but not send updates.
if needsUpdate {
// Clear the SSH policy map to ensure it's recalculated with the new policy.
// TODO(kradalby): This could potentially be optimized by only clearing the
// policies for nodes that have changed. Particularly if the only difference is
// that nodes has been added or removed.
clear(pm.sshPolicyMap)
clear(pm.compiledFilterRulesMap)
clear(pm.filterRulesMap)
}
// If nothing changed, no need to update nodes
if !needsUpdate {
log.Trace().
Msg("Policy evaluation detected no changes - all hashes match")
return false, nil
@@ -491,10 +517,16 @@ func (pm *PolicyManager) SetNodes(nodes views.Slice[types.NodeView]) (bool, erro
// For global policies: the filter must be recompiled to include the new nodes.
if nodesChanged {
// Recompile filter with the new node list
_, err := pm.updateLocked()
needsUpdate, err := pm.updateLocked()
if err != nil {
return false, err
}
if !needsUpdate {
// The hashes didn't change, so updateLocked left the caches alone; clear
// them here so fresh filter rules are generated for all nodes.
clear(pm.sshPolicyMap)
clear(pm.compiledFilterRulesMap)
clear(pm.filterRulesMap)
}
// Always return true when nodes changed, even if filter hash didn't change
// (can happen with autogroup:self or when nodes are added but don't affect rules)
return true, nil

View File
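The combined hash is the crux of the autogroup:self fix: for such policies the compiled global filter is always empty, so hashing the filter alone can never detect a policy edit. A sketch of the technique, mirroring the deephash usage above (Policy is typed loosely here; it is *Policy in the real code):

package sketch

import (
	"tailscale.com/tailcfg"
	"tailscale.com/util/deephash"
)

// filterAndPolicy mirrors the struct added above: hashing the policy
// content alongside the compiled rules makes the hash move on any policy
// edit, even when the compiled filter stays empty.
type filterAndPolicy struct {
	Filter []tailcfg.FilterRule
	Policy any
}

func combinedHash(filter []tailcfg.FilterRule, pol any) deephash.Sum {
	return deephash.Hash(&filterAndPolicy{Filter: filter, Policy: pol})
}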

@@ -519,3 +519,89 @@ func TestAutogroupSelfWithOtherRules(t *testing.T) {
require.NoError(t, err)
require.NotEmpty(t, rules, "test-1 should have filter rules from both ACL rules")
}
// TestAutogroupSelfPolicyUpdateTriggersMapResponse verifies that when a policy with
// autogroup:self is updated, SetPolicy returns true to trigger MapResponse updates,
// even if the global filter hash didn't change (which is always empty for autogroup:self).
// This fixes the issue where policy updates would clear caches but not trigger updates,
// leaving nodes with stale filter rules until reconnect.
func TestAutogroupSelfPolicyUpdateTriggersMapResponse(t *testing.T) {
users := types.Users{
{Model: gorm.Model{ID: 1}, Name: "test-1", Email: "test-1@example.com"},
{Model: gorm.Model{ID: 2}, Name: "test-2", Email: "test-2@example.com"},
}
test1Node := &types.Node{
ID: 1,
Hostname: "test-1-device",
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[0],
UserID: users[0].ID,
Hostinfo: &tailcfg.Hostinfo{},
}
test2Node := &types.Node{
ID: 2,
Hostname: "test-2-device",
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[1],
UserID: users[1].ID,
Hostinfo: &tailcfg.Hostinfo{},
}
nodes := types.Nodes{test1Node, test2Node}
// Initial policy with autogroup:self
initialPolicy := `{
"acls": [
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self:*"]
}
]
}`
pm, err := NewPolicyManager([]byte(initialPolicy), users, nodes.ViewSlice())
require.NoError(t, err)
require.True(t, pm.usesAutogroupSelf, "policy should use autogroup:self")
// Get initial filter rules for test-1 (should be cached)
rules1, err := pm.FilterForNode(test1Node.View())
require.NoError(t, err)
require.NotEmpty(t, rules1, "test-1 should have filter rules")
// Update policy with a different ACL that still results in empty global filter
// (only autogroup:self rules, which compile to empty global filter)
// We add a comment/description change by adding groups (which don't affect filter compilation)
updatedPolicy := `{
"groups": {
"group:test": ["test-1@example.com"]
},
"acls": [
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self:*"]
}
]
}`
// SetPolicy should return true even though global filter hash didn't change
policyChanged, err := pm.SetPolicy([]byte(updatedPolicy))
require.NoError(t, err)
require.True(t, policyChanged, "SetPolicy should return true when policy content changes, even if global filter hash unchanged (autogroup:self)")
// Verify that caches were cleared and new rules are generated
// The cache should be empty, so FilterForNode will recompile
rules2, err := pm.FilterForNode(test1Node.View())
require.NoError(t, err)
require.NotEmpty(t, rules2, "test-1 should have filter rules after policy update")
// Verify that the policy hash tracking works - a second identical update should return false
policyChanged2, err := pm.SetPolicy([]byte(updatedPolicy))
require.NoError(t, err)
require.False(t, policyChanged2, "SetPolicy should return false when policy content hasn't changed")
}

View File

@@ -300,7 +300,9 @@ func (s *State) UpdateUser(userID types.UserID, updateFn func(*types.User) error
return nil, err
}
if err := tx.Save(user).Error; err != nil {
// Use Updates() to only update modified fields, preserving unchanged values.
err = tx.Updates(user).Error
if err != nil {
return nil, fmt.Errorf("updating user: %w", err)
}
@@ -386,7 +388,11 @@ func (s *State) persistNodeToDB(node types.NodeView) (types.NodeView, change.Cha
nodePtr := node.AsStruct()
if err := s.db.DB.Save(nodePtr).Error; err != nil {
// Use Omit("expiry") to prevent overwriting expiry during MapRequest updates.
// Expiry should only be updated through explicit SetNodeExpiry calls or re-registration.
// See: https://github.com/juanfont/headscale/issues/2862
err := s.db.DB.Omit("expiry").Updates(nodePtr).Error
if err != nil {
return types.NodeView{}, change.EmptySet, fmt.Errorf("saving node: %w", err)
}
@@ -821,7 +827,7 @@ func (s *State) SetPolicy(pol []byte) (bool, error) {
// AutoApproveRoutes checks if a node's routes should be auto-approved.
// AutoApproveRoutes checks if any routes should be auto-approved for a node and updates them.
func (s *State) AutoApproveRoutes(nv types.NodeView) bool {
func (s *State) AutoApproveRoutes(nv types.NodeView) (change.ChangeSet, error) {
approved, changed := policy.ApproveRoutesWithPolicy(s.polMan, nv, nv.ApprovedRoutes().AsSlice(), nv.AnnouncedRoutes())
if changed {
log.Debug().
@@ -834,7 +840,7 @@ func (s *State) AutoApproveRoutes(nv types.NodeView) bool {
// Persist the auto-approved routes to database and NodeStore via SetApprovedRoutes
// This ensures consistency between database and NodeStore
_, _, err := s.SetApprovedRoutes(nv.ID(), approved)
_, c, err := s.SetApprovedRoutes(nv.ID(), approved)
if err != nil {
log.Error().
Uint64("node.id", nv.ID().Uint64()).
@@ -842,13 +848,15 @@ func (s *State) AutoApproveRoutes(nv types.NodeView) bool {
Err(err).
Msg("Failed to persist auto-approved routes")
return false
return change.EmptySet, err
}
log.Info().Uint64("node.id", nv.ID().Uint64()).Str("node.name", nv.Hostname()).Strs("routes.approved", util.PrefixesToString(approved)).Msg("Routes approved")
return c, nil
}
return changed
return change.EmptySet, nil
}
// GetPolicy retrieves the current policy from the database.
@@ -964,6 +972,11 @@ func (s *State) ExpirePreAuthKey(preAuthKey *types.PreAuthKey) error {
return s.db.ExpirePreAuthKey(preAuthKey)
}
// DeletePreAuthKey permanently deletes a pre-authentication key.
func (s *State) DeletePreAuthKey(preAuthKey *types.PreAuthKey) error {
return s.db.DeletePreAuthKey(preAuthKey)
}
// GetRegistrationCacheEntry retrieves a node registration from cache.
func (s *State) GetRegistrationCacheEntry(id types.RegistrationID) (*types.RegisterNode, bool) {
entry, found := s.registrationCache.Get(id)
@@ -1187,9 +1200,10 @@ func (s *State) HandleNodeFromAuthPath(
return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", existingNodeSameUser.ID())
}
// Use the node from UpdateNode to save to database
_, err = hsdb.Write(s.db.DB, func(tx *gorm.DB) (*types.Node, error) {
if err := tx.Save(updatedNodeView.AsStruct()).Error; err != nil {
// Use Updates() to preserve fields not modified by UpdateNode.
err := tx.Updates(updatedNodeView.AsStruct()).Error
if err != nil {
return nil, fmt.Errorf("failed to save node: %w", err)
}
return nil, nil
@@ -1294,9 +1308,53 @@ func (s *State) HandleNodeFromPreAuthKey(
return types.NodeView{}, change.EmptySet, err
}
err = pak.Validate()
if err != nil {
return types.NodeView{}, change.EmptySet, err
// Check if node exists with same machine key before validating the key.
// For #2830: container restarts send the same pre-auth key which may be used/expired.
// Skip validation for existing nodes re-registering with the same NodeKey, as the
// key was only needed for initial authentication. NodeKey rotation requires validation.
existingNodeSameUser, existsSameUser := s.nodeStore.GetNodeByMachineKey(machineKey, types.UserID(pak.User.ID))
// For existing nodes, skip validation if:
// 1. MachineKey matches (cryptographic proof of machine identity)
// 2. User matches (from the PAK being used)
// 3. Not a NodeKey rotation (rotation requires fresh validation)
//
// Security: MachineKey is the cryptographic identity. If someone has the MachineKey,
// they control the machine. The PAK was only needed to authorize initial join.
// We don't check which specific PAK was used originally because:
// - Container restarts may use different PAKs (e.g., env var changed)
// - Original PAK may be deleted
// - MachineKey + User is sufficient to prove this is the same node
isExistingNodeReregistering := existsSameUser && existingNodeSameUser.Valid()
// Check if this is a NodeKey rotation (different NodeKey)
isNodeKeyRotation := existsSameUser && existingNodeSameUser.Valid() &&
existingNodeSameUser.NodeKey() != regReq.NodeKey
if isExistingNodeReregistering && !isNodeKeyRotation {
// Existing node re-registering with same NodeKey: skip validation.
// Pre-auth keys are only needed for initial authentication. Critical for
// containers that run "tailscale up --authkey=KEY" on every restart.
log.Debug().
Caller().
Uint64("node.id", existingNodeSameUser.ID().Uint64()).
Str("node.name", existingNodeSameUser.Hostname()).
Str("machine.key", machineKey.ShortString()).
Str("node.key.existing", existingNodeSameUser.NodeKey().ShortString()).
Str("node.key.request", regReq.NodeKey.ShortString()).
Uint64("authkey.id", pak.ID).
Bool("authkey.used", pak.Used).
Bool("authkey.expired", pak.Expiration != nil && pak.Expiration.Before(time.Now())).
Bool("authkey.reusable", pak.Reusable).
Bool("nodekey.rotation", isNodeKeyRotation).
Msg("Existing node re-registering with same NodeKey and auth key, skipping validation")
} else {
// New node or NodeKey rotation: require valid auth key.
err = pak.Validate()
if err != nil {
return types.NodeView{}, change.EmptySet, err
}
}
// Ensure we have a valid hostname - handle nil/empty cases
@@ -1328,9 +1386,6 @@ func (s *State) HandleNodeFromPreAuthKey(
var finalNode types.NodeView
// Check if node already exists with same machine key for this user
existingNodeSameUser, existsSameUser := s.nodeStore.GetNodeByMachineKey(machineKey, types.UserID(pak.User.ID))
// If this node exists for this user, update the node in place.
if existsSameUser && existingNodeSameUser.Valid() {
log.Trace().
@@ -1372,9 +1427,10 @@ func (s *State) HandleNodeFromPreAuthKey(
return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", existingNodeSameUser.ID())
}
// Use the node from UpdateNode to save to database
_, err = hsdb.Write(s.db.DB, func(tx *gorm.DB) (*types.Node, error) {
if err := tx.Save(updatedNodeView.AsStruct()).Error; err != nil {
// Use Updates() to preserve fields not modified by UpdateNode.
err := tx.Updates(updatedNodeView.AsStruct()).Error
if err != nil {
return nil, fmt.Errorf("failed to save node: %w", err)
}
@@ -1583,6 +1639,7 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest
var routeChange bool
var hostinfoChanged bool
var needsRouteApproval bool
var autoApprovedRoutes []netip.Prefix
// We need to ensure we update the node as it is in the NodeStore at
// the time of the request.
updatedNode, ok := s.nodeStore.UpdateNode(id, func(currentNode *types.Node) {
@@ -1607,7 +1664,6 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest
}
// Calculate route approval before NodeStore update to avoid calling View() inside callback
var autoApprovedRoutes []netip.Prefix
var hasNewRoutes bool
if hi := req.Hostinfo; hi != nil {
hasNewRoutes = len(hi.RoutableIPs) > 0
@@ -1673,7 +1729,6 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest
Strs("newApprovedRoutes", util.PrefixesToString(autoApprovedRoutes)).
Bool("routeChanged", routeChange).
Msg("applying route approval results")
currentNode.ApprovedRoutes = autoApprovedRoutes
}
}
})
@@ -1682,6 +1737,24 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest
return change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", id)
}
if routeChange {
log.Debug().
Uint64("node.id", id.Uint64()).
Strs("autoApprovedRoutes", util.PrefixesToString(autoApprovedRoutes)).
Msg("Persisting auto-approved routes from MapRequest")
// SetApprovedRoutes will update both database and PrimaryRoutes table
_, c, err := s.SetApprovedRoutes(id, autoApprovedRoutes)
if err != nil {
return change.EmptySet, fmt.Errorf("persisting auto-approved routes: %w", err)
}
// If SetApprovedRoutes resulted in a policy change, return it
if !c.Empty() {
return c, nil
}
}

// Continue with the rest of the processing using the updated node
nodeRouteChange := change.EmptySet
// Handle route changes after NodeStore update
@@ -1696,13 +1769,8 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest
routesChangedButNotApproved = true
}
}
if routeChange {
needsRouteUpdate = true
log.Debug().
Caller().
Uint64("node.id", id.Uint64()).
Msg("updating routes because approved routes changed")
} else if routesChangedButNotApproved {
if routesChangedButNotApproved {
needsRouteUpdate = true
log.Debug().
Caller().
@@ -1739,25 +1807,26 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest
return change.NodeAdded(id), nil
}
func hostinfoEqual(oldNode types.NodeView, new *tailcfg.Hostinfo) bool {
if !oldNode.Valid() && new == nil {
func hostinfoEqual(oldNode types.NodeView, newHI *tailcfg.Hostinfo) bool {
if !oldNode.Valid() && newHI == nil {
return true
}
if !oldNode.Valid() || new == nil {
if !oldNode.Valid() || newHI == nil {
return false
}
old := oldNode.AsStruct().Hostinfo
return old.Equal(new)
return old.Equal(newHI)
}
func routesChanged(oldNode types.NodeView, new *tailcfg.Hostinfo) bool {
func routesChanged(oldNode types.NodeView, newHI *tailcfg.Hostinfo) bool {
var oldRoutes []netip.Prefix
if oldNode.Valid() && oldNode.AsStruct().Hostinfo != nil {
oldRoutes = oldNode.AsStruct().Hostinfo.RoutableIPs
}
newRoutes := new.RoutableIPs
newRoutes := newHI.RoutableIPs
if newRoutes == nil {
newRoutes = []netip.Prefix{}
}

View File
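The validation-skip logic above reduces to a small decision table; a hedged helper capturing it (illustrative only, not the actual headscale API):

package sketch

// mustValidateAuthKey reports whether pak.Validate() must run for a
// registration request:
//
//	same MachineKey + same user + same NodeKey -> re-registration, skip
//	same MachineKey + same user + new NodeKey  -> key rotation, validate
//	unknown MachineKey                         -> new node, validate
func mustValidateAuthKey(existingNodeFound, nodeKeyMatches bool) bool {
	reregistering := existingNodeFound && nodeKeyMatches
	return !reregistering
}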

@@ -9,12 +9,15 @@ import (
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
policyv2 "github.com/juanfont/headscale/hscontrol/policy/v2"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/integration/hsic"
"github.com/juanfont/headscale/integration/tsic"
"github.com/samber/lo"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
"tailscale.com/types/ptr"
)
func TestAuthKeyLogoutAndReloginSameUser(t *testing.T) {
@@ -223,6 +226,7 @@ func TestAuthKeyLogoutAndReloginNewUser(t *testing.T) {
scenario, err := NewScenario(spec)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
err = scenario.CreateHeadscaleEnv([]tsic.Option{},
@@ -454,3 +458,267 @@ func TestAuthKeyLogoutAndReloginSameUserExpiredKey(t *testing.T) {
})
}
}
// TestAuthKeyDeleteKey tests Issue #2830: node with deleted auth key should still reconnect.
// Scenario from user report: "create node, delete the auth key, restart to validate it can connect"
// Steps:
// 1. Create node with auth key
// 2. DELETE the auth key from database (completely remove it)
// 3. Restart node - should successfully reconnect using MachineKey identity
func TestAuthKeyDeleteKey(t *testing.T) {
IntegrationSkip(t)
// Create scenario with NO nodes - we'll create the node manually so we can capture the auth key
scenario, err := NewScenario(ScenarioSpec{
NodesPerUser: 0, // No nodes created automatically
Users: []string{"user1"},
})
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
err = scenario.CreateHeadscaleEnv([]tsic.Option{}, hsic.WithTestName("delkey"), hsic.WithTLS(), hsic.WithDERPAsIP())
requireNoErrHeadscaleEnv(t, err)
headscale, err := scenario.Headscale()
requireNoErrGetHeadscale(t, err)
// Get the user
userMap, err := headscale.MapUsers()
require.NoError(t, err)
userID := userMap["user1"].GetId()
// Create a pre-auth key - we keep the full key string before it gets redacted
authKey, err := scenario.CreatePreAuthKey(userID, false, false)
require.NoError(t, err)
authKeyString := authKey.GetKey()
authKeyID := authKey.GetId()
t.Logf("Created pre-auth key ID %d: %s", authKeyID, authKeyString)
// Create a tailscale client and log it in with the auth key
client, err := scenario.CreateTailscaleNode(
"head",
tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
)
require.NoError(t, err)
err = client.Login(headscale.GetEndpoint(), authKeyString)
require.NoError(t, err)
// Wait for the node to be registered
var user1Nodes []*v1.Node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
var err error
user1Nodes, err = headscale.ListNodes("user1")
assert.NoError(c, err)
assert.Len(c, user1Nodes, 1)
}, 30*time.Second, 500*time.Millisecond, "waiting for node to be registered")
nodeID := user1Nodes[0].GetId()
nodeName := user1Nodes[0].GetName()
t.Logf("Node %d (%s) created successfully with auth_key_id=%d", nodeID, nodeName, authKeyID)
// Verify node is online
requireAllClientsOnline(t, headscale, []types.NodeID{types.NodeID(nodeID)}, true, "node should be online initially", 120*time.Second)
// DELETE the pre-auth key using the API
t.Logf("Deleting pre-auth key ID %d using API", authKeyID)
err = headscale.DeleteAuthKey(userID, authKeyString)
require.NoError(t, err)
t.Logf("Successfully deleted auth key")
// Simulate node restart (down + up)
t.Logf("Restarting node after deleting its auth key")
err = client.Down()
require.NoError(t, err)
time.Sleep(3 * time.Second)
err = client.Up()
require.NoError(t, err)
// Verify node comes back online
// This will FAIL without the fix because auth key validation will reject deleted key
// With the fix, MachineKey identity allows reconnection even with deleted key
requireAllClientsOnline(t, headscale, []types.NodeID{types.NodeID(nodeID)}, true, "node should reconnect after restart despite deleted key", 120*time.Second)
t.Logf("✓ Node successfully reconnected after its auth key was deleted")
}
// TestAuthKeyLogoutAndReloginRoutesPreserved tests that routes remain serving
// after a node logs out and re-authenticates with the same user.
//
// This test validates the fix for issue #2896:
// https://github.com/juanfont/headscale/issues/2896
//
// Bug: When a node with already-approved routes restarts/re-authenticates,
// the routes show as "Approved" and "Available" but NOT "Serving" (Primary).
// A headscale restart would fix it, indicating a state management issue.
//
// The test scenario:
// 1. Node registers with auth key and advertises routes
// 2. Routes are auto-approved and verified as serving
// 3. Node logs out
// 4. Node re-authenticates with same auth key
// 5. Routes should STILL be serving (this is where the bug manifests)
func TestAuthKeyLogoutAndReloginRoutesPreserved(t *testing.T) {
IntegrationSkip(t)
user := "routeuser"
advertiseRoute := "10.55.0.0/24"
spec := ScenarioSpec{
NodesPerUser: 1,
Users: []string{user},
}
scenario, err := NewScenario(spec)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
err = scenario.CreateHeadscaleEnv(
[]tsic.Option{
tsic.WithAcceptRoutes(),
// Advertise route on initial login
tsic.WithExtraLoginArgs([]string{"--advertise-routes=" + advertiseRoute}),
},
hsic.WithTestName("routelogout"),
hsic.WithTLS(),
hsic.WithACLPolicy(
&policyv2.Policy{
ACLs: []policyv2.ACL{
{
Action: "accept",
Sources: []policyv2.Alias{policyv2.Wildcard},
Destinations: []policyv2.AliasWithPorts{{Alias: policyv2.Wildcard, Ports: []tailcfg.PortRange{tailcfg.PortRangeAny}}},
},
},
AutoApprovers: policyv2.AutoApproverPolicy{
Routes: map[netip.Prefix]policyv2.AutoApprovers{
netip.MustParsePrefix(advertiseRoute): {ptr.To(policyv2.Username(user + "@test.no"))},
},
},
},
),
)
requireNoErrHeadscaleEnv(t, err)
allClients, err := scenario.ListTailscaleClients()
requireNoErrListClients(t, err)
require.Len(t, allClients, 1)
client := allClients[0]
err = scenario.WaitForTailscaleSync()
requireNoErrSync(t, err)
headscale, err := scenario.Headscale()
requireNoErrGetHeadscale(t, err)
// Step 1: Verify initial route is advertised, approved, and SERVING
t.Logf("Step 1: Verifying initial route is advertised, approved, and SERVING at %s", time.Now().Format(TimestampFormat))
var initialNode *v1.Node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1, "Should have exactly 1 node")
if len(nodes) == 1 {
initialNode = nodes[0]
// Check: 1 announced, 1 approved, 1 serving (subnet route)
assert.Lenf(c, initialNode.GetAvailableRoutes(), 1,
"Node should have 1 available route, got %v", initialNode.GetAvailableRoutes())
assert.Lenf(c, initialNode.GetApprovedRoutes(), 1,
"Node should have 1 approved route, got %v", initialNode.GetApprovedRoutes())
assert.Lenf(c, initialNode.GetSubnetRoutes(), 1,
"Node should have 1 serving (subnet) route, got %v - THIS IS THE BUG if empty", initialNode.GetSubnetRoutes())
assert.Contains(c, initialNode.GetSubnetRoutes(), advertiseRoute,
"Subnet routes should contain %s", advertiseRoute)
}
}, 30*time.Second, 500*time.Millisecond, "initial route should be serving")
require.NotNil(t, initialNode, "Initial node should be found")
initialNodeID := initialNode.GetId()
t.Logf("Initial node ID: %d, Available: %v, Approved: %v, Serving: %v",
initialNodeID, initialNode.GetAvailableRoutes(), initialNode.GetApprovedRoutes(), initialNode.GetSubnetRoutes())
// Step 2: Logout
t.Logf("Step 2: Logging out at %s", time.Now().Format(TimestampFormat))
err = client.Logout()
require.NoError(t, err)
// Wait for logout to complete
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := client.Status()
assert.NoError(ct, err)
assert.Equal(ct, "NeedsLogin", status.BackendState, "Expected NeedsLogin state after logout")
}, 30*time.Second, 1*time.Second, "waiting for logout to complete")
t.Logf("Logout completed, node should still exist in database")
// Verify node still exists (routes should still be in DB)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1, "Node should persist in database after logout")
}, 10*time.Second, 500*time.Millisecond, "node should persist after logout")
// Step 3: Re-authenticate with the SAME user (using auth key)
t.Logf("Step 3: Re-authenticating with same user at %s", time.Now().Format(TimestampFormat))
userMap, err := headscale.MapUsers()
require.NoError(t, err)
key, err := scenario.CreatePreAuthKey(userMap[user].GetId(), true, false)
require.NoError(t, err)
// Re-login - the container already has extraLoginArgs with --advertise-routes
// from the initial setup, so routes will be advertised on re-login
err = scenario.RunTailscaleUp(user, headscale.GetEndpoint(), key.GetKey())
require.NoError(t, err)
// Wait for client to be running
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := client.Status()
assert.NoError(ct, err)
assert.Equal(ct, "Running", status.BackendState, "Expected Running state after relogin")
}, 30*time.Second, 1*time.Second, "waiting for relogin to complete")
t.Logf("Re-authentication completed at %s", time.Now().Format(TimestampFormat))
// Step 4: THE CRITICAL TEST - Verify routes are STILL SERVING after re-authentication
t.Logf("Step 4: Verifying routes are STILL SERVING after re-authentication at %s", time.Now().Format(TimestampFormat))
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1, "Should still have exactly 1 node after relogin")
if len(nodes) == 1 {
node := nodes[0]
t.Logf("After relogin - Available: %v, Approved: %v, Serving: %v",
node.GetAvailableRoutes(), node.GetApprovedRoutes(), node.GetSubnetRoutes())
// This is where issue #2896 manifests:
// - Available shows the route (from Hostinfo.RoutableIPs)
// - Approved shows the route (from ApprovedRoutes)
// - BUT Serving (SubnetRoutes/PrimaryRoutes) is EMPTY!
assert.Lenf(c, node.GetAvailableRoutes(), 1,
"Node should have 1 available route after relogin, got %v", node.GetAvailableRoutes())
assert.Lenf(c, node.GetApprovedRoutes(), 1,
"Node should have 1 approved route after relogin, got %v", node.GetApprovedRoutes())
assert.Lenf(c, node.GetSubnetRoutes(), 1,
"BUG #2896: Node should have 1 SERVING route after relogin, got %v", node.GetSubnetRoutes())
assert.Contains(c, node.GetSubnetRoutes(), advertiseRoute,
"BUG #2896: Subnet routes should contain %s after relogin", advertiseRoute)
// Also verify node ID was preserved (same node, not new registration)
assert.Equal(c, initialNodeID, node.GetId(),
"Node ID should be preserved after same-user relogin")
}
}, 30*time.Second, 500*time.Millisecond,
"BUG #2896: routes should remain SERVING after logout/relogin with same user")
t.Logf("Test completed - verifying issue #2896 fix")
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
policyv2 "github.com/juanfont/headscale/hscontrol/policy/v2"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/integration/hsic"
"github.com/juanfont/headscale/integration/tsic"
@@ -19,6 +20,8 @@ import (
"github.com/samber/lo"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg"
)
func TestOIDCAuthenticationPingAll(t *testing.T) {
@@ -953,6 +956,119 @@ func TestOIDCFollowUpUrl(t *testing.T) {
}, 10*time.Second, 200*time.Millisecond, "Waiting for expected node list after OIDC login")
}
// TestOIDCMultipleOpenedLoginUrls tests the scenario where:
// - a client (most commonly on Windows) opens multiple browser tabs, each with a different login URL
// - the client completes authentication in the first opened browser tab
//
// This test makes sure that the cookies issued for the first browser tab are still valid.
func TestOIDCMultipleOpenedLoginUrls(t *testing.T) {
IntegrationSkip(t)
scenario, err := NewScenario(
ScenarioSpec{
OIDCUsers: []mockoidc.MockUser{
oidcMockUser("user1", true),
},
},
)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
oidcMap := map[string]string{
"HEADSCALE_OIDC_ISSUER": scenario.mockOIDC.Issuer(),
"HEADSCALE_OIDC_CLIENT_ID": scenario.mockOIDC.ClientID(),
"CREDENTIALS_DIRECTORY_TEST": "/tmp",
"HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret",
}
err = scenario.CreateHeadscaleEnvWithLoginURL(
nil,
hsic.WithTestName("oidcauthrelog"),
hsic.WithConfigEnv(oidcMap),
hsic.WithTLS(),
hsic.WithFileInContainer("/tmp/hs_client_oidc_secret", []byte(scenario.mockOIDC.ClientSecret())),
hsic.WithEmbeddedDERPServerOnly(),
)
require.NoError(t, err)
headscale, err := scenario.Headscale()
require.NoError(t, err)
listUsers, err := headscale.ListUsers()
require.NoError(t, err)
assert.Empty(t, listUsers)
ts, err := scenario.CreateTailscaleNode(
"unstable",
tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
)
require.NoError(t, err)
u1, err := ts.LoginWithURL(headscale.GetEndpoint())
require.NoError(t, err)
u2, err := ts.LoginWithURL(headscale.GetEndpoint())
require.NoError(t, err)
// make sure login URLs are different
require.NotEqual(t, u1.String(), u2.String())
loginClient, err := newLoginHTTPClient(ts.Hostname())
require.NoError(t, err)
// open the first login URL "in browser"
_, redirect1, err := doLoginURLWithClient(ts.Hostname(), u1, loginClient, false)
require.NoError(t, err)
// open the second login URL "in browser"
_, redirect2, err := doLoginURLWithClient(ts.Hostname(), u2, loginClient, false)
require.NoError(t, err)
// two valid redirects with different state/nonce params
require.NotEqual(t, redirect1.String(), redirect2.String())
// complete auth with the first opened "browser tab"
_, redirect1, err = doLoginURLWithClient(ts.Hostname(), redirect1, loginClient, true)
require.NoError(t, err)
listUsers, err = headscale.ListUsers()
require.NoError(t, err)
assert.Len(t, listUsers, 1)
wantUsers := []*v1.User{
{
Id: 1,
Name: "user1",
Email: "user1@headscale.net",
Provider: "oidc",
ProviderId: scenario.mockOIDC.Issuer() + "/user1",
},
}
sort.Slice(
listUsers, func(i, j int) bool {
return listUsers[i].GetId() < listUsers[j].GetId()
},
)
if diff := cmp.Diff(
wantUsers,
listUsers,
cmpopts.IgnoreUnexported(v1.User{}),
cmpopts.IgnoreFields(v1.User{}, "CreatedAt"),
); diff != "" {
t.Fatalf("unexpected users: %s", diff)
}
assert.EventuallyWithT(
t, func(c *assert.CollectT) {
listNodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, listNodes, 1)
}, 10*time.Second, 200*time.Millisecond, "Waiting for expected node list after OIDC login",
)
}
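The followRedirects=false path exercised above relies on a standard net/http idiom, used by the login helper later in this diff: a CheckRedirect callback that returns http.ErrUseLastResponse makes the client hand back the 3xx response itself instead of following it. A minimal standalone sketch (hypothetical URL, not part of the test suite):

package main

import (
    "fmt"
    "net/http"
)

func main() {
    client := &http.Client{
        // Returning http.ErrUseLastResponse stops the client from
        // following redirects; the 3xx response is returned as-is.
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return http.ErrUseLastResponse
        },
    }

    resp, err := client.Get("https://example.com/login") // hypothetical endpoint
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    defer resp.Body.Close()

    if loc, err := resp.Location(); err == nil {
        fmt.Println("would redirect to:", loc)
    } else {
        fmt.Println("no redirect, status:", resp.Status)
    }
}

This is what lets the test capture two distinct redirect URLs (with their state/nonce parameters) before deciding which "browser tab" completes the flow.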
// TestOIDCReloginSameNodeSameUser tests the scenario where a single Tailscale client
// authenticates using OIDC (OpenID Connect), logs out, and then logs back in as the same user.
//
@@ -1181,3 +1297,618 @@ func TestOIDCReloginSameNodeSameUser(t *testing.T) {
}
}, 60*time.Second, 2*time.Second, "validating user1 node is online after same-user OIDC relogin")
}
// TestOIDCExpiryAfterRestart validates that node expiry is preserved
// when a tailscaled client restarts and reconnects to headscale.
//
// This test reproduces the bug reported in https://github.com/juanfont/headscale/issues/2862
// where OIDC expiry was reset to 0001-01-01 00:00:00 after tailscaled restart.
//
// Test flow:
// 1. Node logs in with OIDC (gets 72h expiry)
// 2. Verify expiry is set correctly in headscale
// 3. Restart tailscaled container (simulates daemon restart)
// 4. Wait for reconnection
// 5. Verify expiry is still set correctly (not zero).
func TestOIDCExpiryAfterRestart(t *testing.T) {
IntegrationSkip(t)
scenario, err := NewScenario(ScenarioSpec{
OIDCUsers: []mockoidc.MockUser{
oidcMockUser("user1", true),
},
})
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
oidcMap := map[string]string{
"HEADSCALE_OIDC_ISSUER": scenario.mockOIDC.Issuer(),
"HEADSCALE_OIDC_CLIENT_ID": scenario.mockOIDC.ClientID(),
"CREDENTIALS_DIRECTORY_TEST": "/tmp",
"HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret",
"HEADSCALE_OIDC_EXPIRY": "72h",
}
err = scenario.CreateHeadscaleEnvWithLoginURL(
nil,
hsic.WithTestName("oidcexpiry"),
hsic.WithConfigEnv(oidcMap),
hsic.WithTLS(),
hsic.WithFileInContainer("/tmp/hs_client_oidc_secret", []byte(scenario.mockOIDC.ClientSecret())),
hsic.WithEmbeddedDERPServerOnly(),
hsic.WithDERPAsIP(),
)
requireNoErrHeadscaleEnv(t, err)
headscale, err := scenario.Headscale()
require.NoError(t, err)
// Create and login tailscale client
ts, err := scenario.CreateTailscaleNode("unstable", tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]))
require.NoError(t, err)
u, err := ts.LoginWithURL(headscale.GetEndpoint())
require.NoError(t, err)
_, err = doLoginURL(ts.Hostname(), u)
require.NoError(t, err)
t.Logf("Validating initial login and expiry at %s", time.Now().Format(TimestampFormat))
// Verify initial expiry is set
var initialExpiry time.Time
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(ct, err)
assert.Len(ct, nodes, 1)
node := nodes[0]
assert.NotNil(ct, node.GetExpiry(), "Expiry should be set after OIDC login")
if node.GetExpiry() != nil {
expiryTime := node.GetExpiry().AsTime()
assert.False(ct, expiryTime.IsZero(), "Expiry should not be zero time")
initialExpiry = expiryTime
t.Logf("Initial expiry set to: %v (expires in %v)", expiryTime, time.Until(expiryTime))
}
}, 30*time.Second, 1*time.Second, "validating initial expiry after OIDC login")
// Now restart the tailscaled container
t.Logf("Restarting tailscaled container at %s", time.Now().Format(TimestampFormat))
err = ts.Restart()
require.NoError(t, err, "Failed to restart tailscaled container")
t.Logf("Tailscaled restarted, waiting for reconnection at %s", time.Now().Format(TimestampFormat))
// Wait for the node to come back online
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := ts.Status()
if !assert.NoError(ct, err) {
return
}
if !assert.NotNil(ct, status) {
return
}
assert.Equal(ct, "Running", status.BackendState)
}, 60*time.Second, 2*time.Second, "waiting for tailscale to reconnect after restart")
// THE CRITICAL TEST: Verify expiry is still set correctly after restart
t.Logf("Validating expiry preservation after restart at %s", time.Now().Format(TimestampFormat))
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(ct, err)
assert.Len(ct, nodes, 1, "Should still have exactly 1 node after restart")
node := nodes[0]
assert.NotNil(ct, node.GetExpiry(), "Expiry should NOT be nil after restart")
if node.GetExpiry() != nil {
expiryTime := node.GetExpiry().AsTime()
// This is the bug check - expiry should NOT be zero time
assert.False(ct, expiryTime.IsZero(),
"BUG: Expiry was reset to zero time after tailscaled restart! This is issue #2862")
// Expiry should be exactly the same as before restart
assert.Equal(ct, initialExpiry, expiryTime,
"Expiry should be exactly the same after restart, got %v, expected %v",
expiryTime, initialExpiry)
t.Logf("SUCCESS: Expiry preserved after restart: %v (expires in %v)",
expiryTime, time.Until(expiryTime))
}
}, 30*time.Second, 1*time.Second, "validating expiry preservation after restart")
}
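The zero-value comparison at the heart of the assertion above is a plain Go pitfall. A standalone sketch (illustrative only, not headscale code) of why an unset time.Time, which prints as 0001-01-01 00:00:00, reads as already expired unless IsZero() is checked first:

package main

import (
    "fmt"
    "time"
)

func main() {
    var expiry time.Time // zero value

    fmt.Println(expiry)                    // 0001-01-01 00:00:00 +0000 UTC
    fmt.Println(expiry.Before(time.Now())) // true
    fmt.Println(expiry.IsZero())           // true

    // Guarding on IsZero() distinguishes "no expiry provided" from "expired".
    if !expiry.IsZero() && expiry.Before(time.Now()) {
        fmt.Println("expired")
    } else {
        fmt.Println("expiry unset or in the future")
    }
}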
// TestOIDCACLPolicyOnJoin validates that ACL policies are correctly applied
// to newly joined OIDC nodes without requiring a client restart.
//
// This test validates the fix for issue #2888:
// https://github.com/juanfont/headscale/issues/2888
//
// Bug: Nodes joining via OIDC authentication did not get the appropriate ACL
// policy applied until they restarted their client. This was a regression
// introduced in v0.27.0.
//
// The test scenario:
// 1. Creates a CLI user (gateway) with a node advertising a route
// 2. Sets up ACL policy allowing all nodes to access advertised routes
// 3. OIDC user authenticates and joins with a new node
// 4. Verifies that the OIDC user's node IMMEDIATELY sees the advertised route
//
// Expected behavior:
// - Without fix: OIDC node cannot see the route (PrimaryRoutes is nil/empty)
// - With fix: OIDC node immediately sees the route in PrimaryRoutes
//
// Root cause: the buggy code called a.h.Change(c) immediately after user
// creation but BEFORE node registration completed, so policy change
// notifications raced ahead of registration and were computed against a
// state that did not yet include the new node.
func TestOIDCACLPolicyOnJoin(t *testing.T) {
IntegrationSkip(t)
gatewayUser := "gateway"
oidcUser := "oidcuser"
spec := ScenarioSpec{
NodesPerUser: 1,
Users: []string{gatewayUser},
OIDCUsers: []mockoidc.MockUser{
oidcMockUser(oidcUser, true),
},
}
scenario, err := NewScenario(spec)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
oidcMap := map[string]string{
"HEADSCALE_OIDC_ISSUER": scenario.mockOIDC.Issuer(),
"HEADSCALE_OIDC_CLIENT_ID": scenario.mockOIDC.ClientID(),
"CREDENTIALS_DIRECTORY_TEST": "/tmp",
"HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret",
}
// Create the headscale environment with an ACL policy that allows the
// OIDC user to access routes advertised by the gateway user.
err = scenario.CreateHeadscaleEnvWithLoginURL(
[]tsic.Option{
tsic.WithAcceptRoutes(),
},
hsic.WithTestName("oidcaclpolicy"),
hsic.WithConfigEnv(oidcMap),
hsic.WithTLS(),
hsic.WithFileInContainer("/tmp/hs_client_oidc_secret", []byte(scenario.mockOIDC.ClientSecret())),
hsic.WithACLPolicy(
&policyv2.Policy{
ACLs: []policyv2.ACL{
{
Action: "accept",
Sources: []policyv2.Alias{prefixp("100.64.0.0/10")},
Destinations: []policyv2.AliasWithPorts{
aliasWithPorts(prefixp("100.64.0.0/10"), tailcfg.PortRangeAny),
aliasWithPorts(prefixp("10.33.0.0/24"), tailcfg.PortRangeAny),
aliasWithPorts(prefixp("10.44.0.0/24"), tailcfg.PortRangeAny),
},
},
},
AutoApprovers: policyv2.AutoApproverPolicy{
Routes: map[netip.Prefix]policyv2.AutoApprovers{
netip.MustParsePrefix("10.33.0.0/24"): {usernameApprover("gateway@test.no"), usernameApprover("oidcuser@headscale.net"), usernameApprover("jane.doe@example.com")},
netip.MustParsePrefix("10.44.0.0/24"): {usernameApprover("gateway@test.no"), usernameApprover("oidcuser@headscale.net"), usernameApprover("jane.doe@example.com")},
},
},
},
),
)
requireNoErrHeadscaleEnv(t, err)
headscale, err := scenario.Headscale()
require.NoError(t, err)
// Get the gateway client (CLI user) - only one client at first
allClients, err := scenario.ListTailscaleClients()
requireNoErrListClients(t, err)
require.Len(t, allClients, 1, "Should have exactly 1 client (gateway) before OIDC login")
gatewayClient := allClients[0]
// Wait for initial sync (gateway logs in)
err = scenario.WaitForTailscaleSync()
requireNoErrSync(t, err)
// Gateway advertises route 10.33.0.0/24
advertiseRoute := "10.33.0.0/24"
command := []string{
"tailscale",
"set",
"--advertise-routes=" + advertiseRoute,
}
_, _, err = gatewayClient.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)
// Wait for route advertisement to propagate
var gatewayNodeID uint64
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(ct, err)
assert.Len(ct, nodes, 1)
gatewayNode := nodes[0]
gatewayNodeID = gatewayNode.GetId()
assert.Len(ct, gatewayNode.GetAvailableRoutes(), 1)
assert.Contains(ct, gatewayNode.GetAvailableRoutes(), advertiseRoute)
}, 10*time.Second, 500*time.Millisecond, "route advertisement should propagate to headscale")
// Approve the advertised route
_, err = headscale.ApproveRoutes(
gatewayNodeID,
[]netip.Prefix{netip.MustParsePrefix(advertiseRoute)},
)
require.NoError(t, err)
// Wait for route approval to propagate
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(ct, err)
assert.Len(ct, nodes, 1)
gatewayNode := nodes[0]
assert.Len(ct, gatewayNode.GetApprovedRoutes(), 1)
assert.Contains(ct, gatewayNode.GetApprovedRoutes(), advertiseRoute)
}, 10*time.Second, 500*time.Millisecond, "route approval should propagate to headscale")
// NOW create the OIDC user by having them join
// This is where issue #2888 manifests - the new OIDC node should immediately
// see the gateway's advertised route
t.Logf("OIDC user joining at %s", time.Now().Format(TimestampFormat))
// Create OIDC user's tailscale node
oidcAdvertiseRoute := "10.44.0.0/24"
oidcClient, err := scenario.CreateTailscaleNode(
"head",
tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
tsic.WithAcceptRoutes(),
tsic.WithExtraLoginArgs([]string{"--advertise-routes=" + oidcAdvertiseRoute}),
)
require.NoError(t, err)
// OIDC login happens automatically via LoginWithURL
loginURL, err := oidcClient.LoginWithURL(headscale.GetEndpoint())
require.NoError(t, err)
_, err = doLoginURL(oidcClient.Hostname(), loginURL)
require.NoError(t, err)
t.Logf("OIDC user logged in successfully at %s", time.Now().Format(TimestampFormat))
// THE CRITICAL TEST: Verify that the OIDC user's node can IMMEDIATELY
// see the gateway's advertised route WITHOUT needing a client restart.
//
// This is where the bug manifests:
// - Without fix: PrimaryRoutes will be nil/empty
// - With fix: PrimaryRoutes immediately contains the advertised route
t.Logf("Verifying OIDC user can immediately see advertised routes at %s", time.Now().Format(TimestampFormat))
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := oidcClient.Status()
assert.NoError(ct, err)
// Find the gateway peer in the OIDC user's peer list
var gatewayPeer *ipnstate.PeerStatus
for _, peerKey := range status.Peers() {
peer := status.Peer[peerKey]
// Gateway is the peer that's not the OIDC user
if peer.UserID != status.Self.UserID {
gatewayPeer = peer
break
}
}
assert.NotNil(ct, gatewayPeer, "OIDC user should see gateway as peer")
if gatewayPeer != nil {
// This is the critical assertion - PrimaryRoutes should NOT be nil
assert.NotNil(ct, gatewayPeer.PrimaryRoutes,
"BUG #2888: Gateway peer PrimaryRoutes is nil - ACL policy not applied to new OIDC node!")
if gatewayPeer.PrimaryRoutes != nil {
routes := gatewayPeer.PrimaryRoutes.AsSlice()
assert.Contains(ct, routes, netip.MustParsePrefix(advertiseRoute),
"OIDC user should immediately see gateway's advertised route %s in PrimaryRoutes", advertiseRoute)
t.Logf("SUCCESS: OIDC user can see advertised route %s in gateway's PrimaryRoutes", advertiseRoute)
}
// Also verify AllowedIPs includes the route
if gatewayPeer.AllowedIPs != nil && gatewayPeer.AllowedIPs.Len() > 0 {
allowedIPs := gatewayPeer.AllowedIPs.AsSlice()
t.Logf("Gateway peer AllowedIPs: %v", allowedIPs)
}
}
}, 15*time.Second, 500*time.Millisecond,
"OIDC user should immediately see gateway's advertised route without client restart (issue #2888)")
// Verify that the Gateway node sees the OIDC node's advertised route (AutoApproveRoutes check)
t.Logf("Verifying Gateway user can immediately see OIDC advertised routes at %s", time.Now().Format(TimestampFormat))
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := gatewayClient.Status()
assert.NoError(ct, err)
// Find the OIDC peer in the Gateway user's peer list
var oidcPeer *ipnstate.PeerStatus
for _, peerKey := range status.Peers() {
peer := status.Peer[peerKey]
if peer.UserID != status.Self.UserID {
oidcPeer = peer
break
}
}
assert.NotNil(ct, oidcPeer, "Gateway user should see OIDC user as peer")
if oidcPeer != nil {
assert.NotNil(ct, oidcPeer.PrimaryRoutes,
"BUG: OIDC peer PrimaryRoutes is nil - AutoApproveRoutes failed or overwritten!")
if oidcPeer.PrimaryRoutes != nil {
routes := oidcPeer.PrimaryRoutes.AsSlice()
assert.Contains(ct, routes, netip.MustParsePrefix(oidcAdvertiseRoute),
"Gateway user should immediately see OIDC's advertised route %s in PrimaryRoutes", oidcAdvertiseRoute)
}
}
}, 15*time.Second, 500*time.Millisecond,
"Gateway user should immediately see OIDC's advertised route (AutoApproveRoutes check)")
// Additional validation: Verify nodes in headscale match expectations
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(ct, err)
assert.Len(ct, nodes, 2, "Should have 2 nodes (gateway + oidcuser)")
// Verify OIDC user was created correctly
users, err := headscale.ListUsers()
assert.NoError(ct, err)
// Note: mockoidc may create additional default users (like jane.doe)
// so we check for at least 2 users, not exactly 2
assert.GreaterOrEqual(ct, len(users), 2, "Should have at least 2 users (gateway CLI user + oidcuser)")
// Find gateway CLI user
var gatewayUser *v1.User
for _, user := range users {
if user.GetName() == "gateway" && user.GetProvider() == "" {
gatewayUser = user
break
}
}
assert.NotNil(ct, gatewayUser, "Should have gateway CLI user")
if gatewayUser != nil {
assert.Equal(ct, "gateway", gatewayUser.GetName())
}
// Find OIDC user
var oidcUserFound *v1.User
for _, user := range users {
if user.GetName() == "oidcuser" && user.GetProvider() == "oidc" {
oidcUserFound = user
break
}
}
assert.NotNil(ct, oidcUserFound, "Should have OIDC user")
if oidcUserFound != nil {
assert.Equal(ct, "oidcuser", oidcUserFound.GetName())
assert.Equal(ct, "oidcuser@headscale.net", oidcUserFound.GetEmail())
}
}, 10*time.Second, 500*time.Millisecond, "headscale should have correct users and nodes")
t.Logf("Test completed successfully - issue #2888 fix validated")
}
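To make the ordering problem behind #2888 concrete, here is a deliberately simplified sketch with hypothetical types (not headscale's real state machinery): if the change notification fires before the node is added to state, subscribers recompute filters over a node list that does not yet contain it.

package main

import "fmt"

type state struct {
    nodes    []string
    lastSeen []string // node list observed by the most recent recomputation
}

// notifyChange simulates subscribers recomputing packet filters from
// the state as it is at that moment.
func (s *state) notifyChange() {
    s.lastSeen = append([]string(nil), s.nodes...)
}

func (s *state) registerNode(n string) {
    s.nodes = append(s.nodes, n)
}

func main() {
    buggy := &state{}
    buggy.notifyChange()            // change emitted on user creation...
    buggy.registerNode("oidc-node") // ...before registration completes
    fmt.Println("buggy saw:", buggy.lastSeen) // []

    fixed := &state{}
    fixed.registerNode("oidc-node") // register first
    fixed.notifyChange()            // then emit the change
    fmt.Println("fixed saw:", fixed.lastSeen) // [oidc-node]
}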
// TestOIDCReloginSameUserRoutesPreserved tests the scenario where:
// - A node logs in via OIDC and advertises routes
// - Routes are auto-approved and verified as SERVING
// - The node logs out
// - The node logs back in as the same user
// - Routes should STILL be SERVING (not just approved/available)
//
// This test validates the fix for issue #2896:
// https://github.com/juanfont/headscale/issues/2896
//
// Bug: When a node with already-approved routes restarts/re-authenticates,
// the routes show as "Approved" and "Available" but NOT "Serving" (Primary).
// A headscale restart would fix it, indicating a state management issue.
func TestOIDCReloginSameUserRoutesPreserved(t *testing.T) {
IntegrationSkip(t)
advertiseRoute := "10.55.0.0/24"
// Create scenario with same user for both login attempts
scenario, err := NewScenario(ScenarioSpec{
OIDCUsers: []mockoidc.MockUser{
oidcMockUser("user1", true), // Initial login
oidcMockUser("user1", true), // Relogin with same user
},
})
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
oidcMap := map[string]string{
"HEADSCALE_OIDC_ISSUER": scenario.mockOIDC.Issuer(),
"HEADSCALE_OIDC_CLIENT_ID": scenario.mockOIDC.ClientID(),
"CREDENTIALS_DIRECTORY_TEST": "/tmp",
"HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret",
}
err = scenario.CreateHeadscaleEnvWithLoginURL(
[]tsic.Option{
tsic.WithAcceptRoutes(),
},
hsic.WithTestName("oidcrouterelogin"),
hsic.WithConfigEnv(oidcMap),
hsic.WithTLS(),
hsic.WithFileInContainer("/tmp/hs_client_oidc_secret", []byte(scenario.mockOIDC.ClientSecret())),
hsic.WithEmbeddedDERPServerOnly(),
hsic.WithDERPAsIP(),
hsic.WithACLPolicy(
&policyv2.Policy{
ACLs: []policyv2.ACL{
{
Action: "accept",
Sources: []policyv2.Alias{policyv2.Wildcard},
Destinations: []policyv2.AliasWithPorts{{Alias: policyv2.Wildcard, Ports: []tailcfg.PortRange{tailcfg.PortRangeAny}}},
},
},
AutoApprovers: policyv2.AutoApproverPolicy{
Routes: map[netip.Prefix]policyv2.AutoApprovers{
netip.MustParsePrefix(advertiseRoute): {usernameApprover("user1@headscale.net")},
},
},
},
),
)
requireNoErrHeadscaleEnv(t, err)
headscale, err := scenario.Headscale()
require.NoError(t, err)
// Create client with route advertisement
ts, err := scenario.CreateTailscaleNode(
"unstable",
tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
tsic.WithAcceptRoutes(),
tsic.WithExtraLoginArgs([]string{"--advertise-routes=" + advertiseRoute}),
)
require.NoError(t, err)
// Initial login as user1
u, err := ts.LoginWithURL(headscale.GetEndpoint())
require.NoError(t, err)
_, err = doLoginURL(ts.Hostname(), u)
require.NoError(t, err)
// Wait for client to be running
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := ts.Status()
assert.NoError(ct, err)
assert.Equal(ct, "Running", status.BackendState)
}, 30*time.Second, 1*time.Second, "waiting for initial login to complete")
// Step 1: Verify initial route is advertised, approved, and SERVING
t.Logf("Step 1: Verifying initial route is advertised, approved, and SERVING at %s", time.Now().Format(TimestampFormat))
var initialNode *v1.Node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1, "Should have exactly 1 node")
if len(nodes) == 1 {
initialNode = nodes[0]
// Check: 1 announced, 1 approved, 1 serving (subnet route)
assert.Lenf(c, initialNode.GetAvailableRoutes(), 1,
"Node should have 1 available route, got %v", initialNode.GetAvailableRoutes())
assert.Lenf(c, initialNode.GetApprovedRoutes(), 1,
"Node should have 1 approved route, got %v", initialNode.GetApprovedRoutes())
assert.Lenf(c, initialNode.GetSubnetRoutes(), 1,
"Node should have 1 serving (subnet) route, got %v - THIS IS THE BUG if empty", initialNode.GetSubnetRoutes())
assert.Contains(c, initialNode.GetSubnetRoutes(), advertiseRoute,
"Subnet routes should contain %s", advertiseRoute)
}
}, 30*time.Second, 500*time.Millisecond, "initial route should be serving")
require.NotNil(t, initialNode, "Initial node should be found")
initialNodeID := initialNode.GetId()
t.Logf("Initial node ID: %d, Available: %v, Approved: %v, Serving: %v",
initialNodeID, initialNode.GetAvailableRoutes(), initialNode.GetApprovedRoutes(), initialNode.GetSubnetRoutes())
// Step 2: Logout
t.Logf("Step 2: Logging out at %s", time.Now().Format(TimestampFormat))
err = ts.Logout()
require.NoError(t, err)
// Wait for logout to complete
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := ts.Status()
assert.NoError(ct, err)
assert.Equal(ct, "NeedsLogin", status.BackendState, "Expected NeedsLogin state after logout")
}, 30*time.Second, 1*time.Second, "waiting for logout to complete")
t.Logf("Logout completed, node should still exist in database")
// Verify node still exists (routes should still be in DB)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1, "Node should persist in database after logout")
}, 10*time.Second, 500*time.Millisecond, "node should persist after logout")
// Step 3: Re-authenticate via OIDC as the same user
t.Logf("Step 3: Re-authenticating with same user via OIDC at %s", time.Now().Format(TimestampFormat))
u, err = ts.LoginWithURL(headscale.GetEndpoint())
require.NoError(t, err)
_, err = doLoginURL(ts.Hostname(), u)
require.NoError(t, err)
// Wait for client to be running
assert.EventuallyWithT(t, func(ct *assert.CollectT) {
status, err := ts.Status()
assert.NoError(ct, err)
assert.Equal(ct, "Running", status.BackendState, "Expected Running state after relogin")
}, 30*time.Second, 1*time.Second, "waiting for relogin to complete")
t.Logf("Re-authentication completed at %s", time.Now().Format(TimestampFormat))
// Step 4: THE CRITICAL TEST - Verify routes are STILL SERVING after re-authentication
t.Logf("Step 4: Verifying routes are STILL SERVING after re-authentication at %s", time.Now().Format(TimestampFormat))
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1, "Should still have exactly 1 node after relogin")
if len(nodes) == 1 {
node := nodes[0]
t.Logf("After relogin - Available: %v, Approved: %v, Serving: %v",
node.GetAvailableRoutes(), node.GetApprovedRoutes(), node.GetSubnetRoutes())
// This is where issue #2896 manifests:
// - Available shows the route (from Hostinfo.RoutableIPs)
// - Approved shows the route (from ApprovedRoutes)
// - BUT Serving (SubnetRoutes/PrimaryRoutes) is EMPTY!
assert.Lenf(c, node.GetAvailableRoutes(), 1,
"Node should have 1 available route after relogin, got %v", node.GetAvailableRoutes())
assert.Lenf(c, node.GetApprovedRoutes(), 1,
"Node should have 1 approved route after relogin, got %v", node.GetApprovedRoutes())
assert.Lenf(c, node.GetSubnetRoutes(), 1,
"BUG #2896: Node should have 1 SERVING route after relogin, got %v", node.GetSubnetRoutes())
assert.Contains(c, node.GetSubnetRoutes(), advertiseRoute,
"BUG #2896: Subnet routes should contain %s after relogin", advertiseRoute)
// Also verify node ID was preserved (same node, not new registration)
assert.Equal(c, initialNodeID, node.GetId(),
"Node ID should be preserved after same-user relogin")
}
}, 30*time.Second, 500*time.Millisecond,
"BUG #2896: routes should remain SERVING after OIDC logout/relogin with same user")
t.Logf("Test completed - verifying issue #2896 fix for OIDC")
}

View File

@@ -24,6 +24,7 @@ type ControlServer interface {
WaitForRunning() error
CreateUser(user string) (*v1.User, error)
CreateAuthKey(user uint64, reusable bool, ephemeral bool) (*v1.PreAuthKey, error)
DeleteAuthKey(user uint64, key string) error
ListNodes(users ...string) ([]*v1.Node, error)
DeleteNode(nodeID uint64) error
NodesByUser() (map[string][]*v1.Node, error)

View File

@@ -1031,6 +1031,34 @@ func (t *HeadscaleInContainer) CreateAuthKey(
return &preAuthKey, nil
}
// DeleteAuthKey deletes an "authorisation key" for a User.
func (t *HeadscaleInContainer) DeleteAuthKey(
user uint64,
key string,
) error {
command := []string{
"headscale",
"--user",
strconv.FormatUint(user, 10),
"preauthkeys",
"delete",
key,
"--output",
"json",
}
_, _, err := dockertestutil.ExecuteCommand(
t.container,
command,
[]string{},
)
if err != nil {
return fmt.Errorf("failed to execute delete auth key command: %w", err)
}
return nil
}
// ListNodes lists the currently registered Nodes in headscale.
// Optionally a list of usernames can be passed to only return nodes
// belonging to those users.

View File

@@ -20,6 +20,7 @@ import (
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/integration/hsic"
"github.com/juanfont/headscale/integration/integrationutil"
"github.com/juanfont/headscale/integration/tsic"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -2227,9 +2228,9 @@ func TestAutoApproveMultiNetwork(t *testing.T) {
// - Both policy modes (database, file)
// - Both advertiseDuringUp values (true, false)
minimalTestSet := map[string]bool{
"authkey-tag-advertiseduringup-false-pol-database": true, // authkey + database + tag + false
"webauth-user-advertiseduringup-true-pol-file": true, // webauth + file + user + true
"authkey-group-advertiseduringup-false-pol-file": true, // authkey + file + group + false
"authkey-tag-advertiseduringup-false-pol-database": true, // authkey + database + tag + false
"webauth-user-advertiseduringup-true-pol-file": true, // webauth + file + user + true
"authkey-group-advertiseduringup-false-pol-file": true, // authkey + file + group + false
}
for _, tt := range tests {
@@ -2323,7 +2324,11 @@ func TestAutoApproveMultiNetwork(t *testing.T) {
// into a HA node, which isn't something we are testing here.
routerUsernet1, err := scenario.CreateTailscaleNode("head", tsOpts...)
require.NoError(t, err)
defer routerUsernet1.Shutdown()
defer func() {
_, _, err := routerUsernet1.Shutdown()
require.NoError(t, err)
}()
if tt.withURL {
u, err := routerUsernet1.LoginWithURL(headscale.GetEndpoint())
@@ -2332,7 +2337,14 @@ func TestAutoApproveMultiNetwork(t *testing.T) {
body, err := doLoginURL(routerUsernet1.Hostname(), u)
require.NoError(t, err)
scenario.runHeadscaleRegister("user1", body)
err = scenario.runHeadscaleRegister("user1", body)
require.NoError(t, err)
// Wait for the client to sync with the server after webauth registration.
// Unlike authkey login which blocks until complete, webauth registration
// happens on the server side and the client needs time to receive the network map.
err = routerUsernet1.WaitForRunning(integrationutil.PeerSyncTimeout())
require.NoError(t, err, "webauth client failed to reach Running state")
} else {
userMap, err := headscale.MapUsers()
require.NoError(t, err)
@@ -2345,6 +2357,11 @@ func TestAutoApproveMultiNetwork(t *testing.T) {
}
// extra creation end.
// Wait for the node to be fully running before getting its ID
// This is especially important for webauth flow where login is asynchronous
err = routerUsernet1.WaitForRunning(30 * time.Second)
require.NoError(t, err)
routerUsernet1ID := routerUsernet1.MustID()
web := services[0]
@@ -2732,16 +2749,6 @@ func TestAutoApproveMultiNetwork(t *testing.T) {
}
}
func assertTracerouteViaIP(t *testing.T, tr util.Traceroute, ip netip.Addr) {
t.Helper()
require.NotNil(t, tr)
require.True(t, tr.Success)
require.NoError(t, tr.Err)
require.NotEmpty(t, tr.Route)
require.Equal(t, tr.Route[0].IP, ip)
}
// assertTracerouteViaIPWithCollect is a version of assertTracerouteViaIP that works with assert.CollectT.
func assertTracerouteViaIPWithCollect(c *assert.CollectT, tr util.Traceroute, ip netip.Addr) {
assert.NotNil(c, tr)
@@ -2755,30 +2762,6 @@ func assertTracerouteViaIPWithCollect(c *assert.CollectT, tr util.Traceroute, ip
}
}
// requirePeerSubnetRoutes asserts that the peer has the expected subnet routes.
func requirePeerSubnetRoutes(t *testing.T, status *ipnstate.PeerStatus, expected []netip.Prefix) {
t.Helper()
if status.AllowedIPs.Len() <= 2 && len(expected) != 0 {
t.Fatalf("peer %s (%s) has no subnet routes, expected %v", status.HostName, status.ID, expected)
return
}
if len(expected) == 0 {
expected = []netip.Prefix{}
}
got := slicesx.Filter(nil, status.AllowedIPs.AsSlice(), func(p netip.Prefix) bool {
if tsaddr.IsExitRoute(p) {
return true
}
return !slices.ContainsFunc(status.TailscaleIPs, p.Contains)
})
if diff := cmpdiff.Diff(expected, got, util.PrefixComparer, cmpopts.EquateEmpty()); diff != "" {
t.Fatalf("peer %s (%s) subnet routes, unexpected result (-want +got):\n%s", status.HostName, status.ID, diff)
}
}
func SortPeerStatus(a, b *ipnstate.PeerStatus) int {
return cmp.Compare(a.ID, b.ID)
}
@@ -2823,13 +2806,6 @@ func requirePeerSubnetRoutesWithCollect(c *assert.CollectT, status *ipnstate.Pee
}
}
func requireNodeRouteCount(t *testing.T, node *v1.Node, announced, approved, subnet int) {
t.Helper()
require.Lenf(t, node.GetAvailableRoutes(), announced, "expected %q announced routes(%v) to have %d route, had %d", node.GetName(), node.GetAvailableRoutes(), announced, len(node.GetAvailableRoutes()))
require.Lenf(t, node.GetApprovedRoutes(), approved, "expected %q approved routes(%v) to have %d route, had %d", node.GetName(), node.GetApprovedRoutes(), approved, len(node.GetApprovedRoutes()))
require.Lenf(t, node.GetSubnetRoutes(), subnet, "expected %q subnet routes(%v) to have %d route, had %d", node.GetName(), node.GetSubnetRoutes(), subnet, len(node.GetSubnetRoutes()))
}
func requireNodeRouteCountWithCollect(c *assert.CollectT, node *v1.Node, announced, approved, subnet int) {
assert.Lenf(c, node.GetAvailableRoutes(), announced, "expected %q announced routes(%v) to have %d route, had %d", node.GetName(), node.GetAvailableRoutes(), announced, len(node.GetAvailableRoutes()))
assert.Lenf(c, node.GetApprovedRoutes(), approved, "expected %q approved routes(%v) to have %d route, had %d", node.GetName(), node.GetApprovedRoutes(), approved, len(node.GetApprovedRoutes()))

View File

@@ -860,47 +860,183 @@ func (s *Scenario) RunTailscaleUpWithURL(userStr, loginServer string) error {
return fmt.Errorf("failed to up tailscale node: %w", errNoUserAvailable)
}
type debugJar struct {
inner *cookiejar.Jar
mu    sync.RWMutex
store map[string]map[string]map[string]*http.Cookie // domain -> path -> name -> cookie
}
func newDebugJar() (*debugJar, error) {
jar, err := cookiejar.New(nil)
if err != nil {
return nil, err
}
return &debugJar{
inner: jar,
store: make(map[string]map[string]map[string]*http.Cookie),
}, nil
}
func (j *debugJar) SetCookies(u *url.URL, cookies []*http.Cookie) {
j.inner.SetCookies(u, cookies)
j.mu.Lock()
defer j.mu.Unlock()
for _, c := range cookies {
if c == nil || c.Name == "" {
continue
}
domain := c.Domain
if domain == "" {
domain = u.Hostname()
}
path := c.Path
if path == "" {
path = "/"
}
if _, ok := j.store[domain]; !ok {
j.store[domain] = make(map[string]map[string]*http.Cookie)
}
if _, ok := j.store[domain][path]; !ok {
j.store[domain][path] = make(map[string]*http.Cookie)
}
j.store[domain][path][c.Name] = copyCookie(c)
}
}
func (j *debugJar) Cookies(u *url.URL) []*http.Cookie {
return j.inner.Cookies(u)
}
func (j *debugJar) Dump(w io.Writer) {
j.mu.RLock()
defer j.mu.RUnlock()
for domain, paths := range j.store {
fmt.Fprintf(w, "Domain: %s\n", domain)
for path, byName := range paths {
fmt.Fprintf(w, " Path: %s\n", path)
for _, c := range byName {
fmt.Fprintf(
w, " %s=%s; Expires=%v; Secure=%v; HttpOnly=%v; SameSite=%v\n",
c.Name, c.Value, c.Expires, c.Secure, c.HttpOnly, c.SameSite,
)
}
}
}
}
func copyCookie(c *http.Cookie) *http.Cookie {
cc := *c
return &cc
}
func newLoginHTTPClient(hostname string) (*http.Client, error) {
hc := &http.Client{
Transport: LoggingRoundTripper{Hostname: hostname},
}
jar, err := newDebugJar()
if err != nil {
return nil, fmt.Errorf("%s failed to create cookiejar: %w", hostname, err)
}
hc.Jar = jar
return hc, nil
}
// doLoginURL visits the given login URL and returns the body as a string.
func doLoginURL(hostname string, loginURL *url.URL) (string, error) {
log.Printf("%s login url: %s\n", hostname, loginURL.String())
hc, err := newLoginHTTPClient(hostname)
if err != nil {
return "", err
}
body, _, err := doLoginURLWithClient(hostname, loginURL, hc, true)
if err != nil {
return "", err
}
return body, nil
}
// doLoginURLWithClient performs the login request using the provided HTTP client.
// When followRedirects is false, it will return the first redirect without following it.
func doLoginURLWithClient(hostname string, loginURL *url.URL, hc *http.Client, followRedirects bool) (
string,
*url.URL,
error,
) {
if hc == nil {
return "", nil, fmt.Errorf("%s http client is nil", hostname)
}
if loginURL == nil {
return "", nil, fmt.Errorf("%s login url is nil", hostname)
}
log.Printf("%s logging in with url: %s", hostname, loginURL.String())
ctx := context.Background()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, loginURL.String(), nil)
if err != nil {
return "", nil, fmt.Errorf("%s failed to create http request: %w", hostname, err)
}
originalRedirect := hc.CheckRedirect
if !followRedirects {
hc.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
}
defer func() {
hc.CheckRedirect = originalRedirect
}()
resp, err := hc.Do(req)
if err != nil {
return "", fmt.Errorf("%s failed to send http request: %w", hostname, err)
return "", nil, fmt.Errorf("%s failed to send http request: %w", hostname, err)
}
log.Printf("cookies: %+v", hc.Jar.Cookies(loginURL))
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
log.Printf("body: %s", body)
return "", fmt.Errorf("%s response code of login request was %w", hostname, err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
bodyBytes, err := io.ReadAll(resp.Body)
if err != nil {
log.Printf("%s failed to read response body: %s", hostname, err)
return "", nil, fmt.Errorf("%s failed to read response body: %w", hostname, err)
}
body := string(bodyBytes)
return "", fmt.Errorf("%s failed to read response body: %w", hostname, err)
var redirectURL *url.URL
if resp.StatusCode >= http.StatusMultipleChoices && resp.StatusCode < http.StatusBadRequest {
redirectURL, err = resp.Location()
if err != nil {
return body, nil, fmt.Errorf("%s failed to resolve redirect location: %w", hostname, err)
}
}
return string(body), nil
if followRedirects && resp.StatusCode != http.StatusOK {
log.Printf("body: %s", body)
return body, redirectURL, fmt.Errorf("%s unexpected status code %d", hostname, resp.StatusCode)
}
if resp.StatusCode >= http.StatusBadRequest {
log.Printf("body: %s", body)
return body, redirectURL, fmt.Errorf("%s unexpected status code %d", hostname, resp.StatusCode)
}
if hc.Jar != nil {
if jar, ok := hc.Jar.(*debugJar); ok {
jar.Dump(os.Stdout)
} else {
log.Printf("cookies: %+v", hc.Jar.Cookies(loginURL))
}
}
return body, redirectURL, nil
}
var errParseAuthPage = errors.New("failed to parse auth page")

View File

@@ -29,6 +29,7 @@ type TailscaleClient interface {
Login(loginServer, authKey string) error
LoginWithURL(loginServer string) (*url.URL, error)
Logout() error
Restart() error
Up() error
Down() error
IPs() ([]netip.Addr, error)

View File

@@ -555,6 +555,39 @@ func (t *TailscaleInContainer) Logout() error {
return t.waitForBackendState("NeedsLogin", integrationutil.PeerSyncTimeout())
}
// Restart restarts the Tailscale container using Docker API.
// This simulates a container restart (e.g., docker restart or Kubernetes pod restart).
// The container's entrypoint will re-execute, which typically includes running
// "tailscale up" with any auth keys stored in environment variables.
func (t *TailscaleInContainer) Restart() error {
if t.container == nil {
return fmt.Errorf("container not initialized")
}
// Use Docker API to restart the container
err := t.pool.Client.RestartContainer(t.container.Container.ID, 30)
if err != nil {
return fmt.Errorf("failed to restart container %s: %w", t.hostname, err)
}
// Wait for the container to be back up and tailscaled to be ready
// We use exponential backoff to poll until we can successfully execute a command
_, err = backoff.Retry(context.Background(), func() (struct{}, error) {
// Try to execute a simple command to verify the container is responsive
_, _, err := t.Execute([]string{"tailscale", "version"}, dockertestutil.ExecuteCommandTimeout(5*time.Second))
if err != nil {
return struct{}{}, fmt.Errorf("container not ready: %w", err)
}
return struct{}{}, nil
}, backoff.WithBackOff(backoff.NewExponentialBackOff()), backoff.WithMaxElapsedTime(30*time.Second))
if err != nil {
return fmt.Errorf("timeout waiting for container %s to restart and become ready: %w", t.hostname, err)
}
return nil
}
// Helper that runs `tailscale up` with no arguments.
func (t *TailscaleInContainer) Up() error {
command := []string{

View File

@@ -104,7 +104,7 @@ extra:
- icon: fontawesome/brands/discord
link: https://discord.gg/c84AZQhmpx
headscale:
version: 0.27.0
version: 0.27.1
# Extensions
markdown_extensions:

View File

@@ -55,6 +55,13 @@ service HeadscaleService {
};
}
rpc DeletePreAuthKey(DeletePreAuthKeyRequest)
returns (DeletePreAuthKeyResponse) {
option (google.api.http) = {
delete : "/api/v1/preauthkey"
};
}
rpc ListPreAuthKeys(ListPreAuthKeysRequest)
returns (ListPreAuthKeysResponse) {
option (google.api.http) = {

View File

@@ -34,6 +34,13 @@ message ExpirePreAuthKeyRequest {
message ExpirePreAuthKeyResponse {}
message DeletePreAuthKeyRequest {
uint64 user = 1;
string key = 2;
}
message DeletePreAuthKeyResponse {}
message ListPreAuthKeysRequest { uint64 user = 1; }
message ListPreAuthKeysResponse { repeated PreAuthKey pre_auth_keys = 1; }
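For reference, a hedged sketch of driving the new RPC through the REST gateway. DeletePreAuthKeyRequest has no HTTP body binding, so grpc-gateway conventionally exposes the user and key fields as query parameters on the bodyless DELETE; the bearer-token header follows headscale's existing API-key convention. The helper name and server URL below are illustrative assumptions, not part of this change.

package main

import (
    "context"
    "fmt"
    "net/http"
    "net/url"
    "strconv"
)

// deletePreAuthKey is a hypothetical client-side helper for
// DELETE /api/v1/preauthkey, with user and key as query parameters.
func deletePreAuthKey(ctx context.Context, server, apiKey string, user uint64, key string) error {
    q := url.Values{}
    q.Set("user", strconv.FormatUint(user, 10))
    q.Set("key", key)

    req, err := http.NewRequestWithContext(ctx, http.MethodDelete,
        server+"/api/v1/preauthkey?"+q.Encode(), nil)
    if err != nil {
        return err
    }
    // Assumes the usual headscale API-key bearer auth.
    req.Header.Set("Authorization", "Bearer "+apiKey)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("delete preauthkey: unexpected status %s", resp.Status)
    }
    return nil
}

func main() {
    err := deletePreAuthKey(context.Background(),
        "https://headscale.example.com", "<api-key>", 1, "<pre-auth-key>")
    if err != nil {
        fmt.Println("error:", err)
    }
}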